Packed with valuable information, our publications help you stay in touch with the latest legal developments affecting you, whatever your sector of activity. Our professionals are committed to keeping you informed of breaking legal news through their analysis of recent judgments, legislative amendments, laws and regulations.
Publications
-
Artificial intelligence: is your data well protected across borders?
Cross-border deals are always challenging, but when they relate to AI technologies, such deals also involve substantial variations in the rights granted in each jurisdiction. Cross-border deals involving artificial intelligence technologies therefore require a careful analysis of these variations, both to properly assess the risks and to seize all available opportunities. Many AI technologies are based on neural networks and rely on large amounts of data to train those networks. The value of these technologies rests largely on the ability to protect the related intellectual property, which may lie, depending on the case, in the innovative approach of the technology, in the work performed by the AI system itself, or in the data required to train the system.

Patents

Given the pace of developments in artificial intelligence, when a transaction is being negotiated we are often working with patent applications, well before any patent is granted. That means we often have to assess whether these applications have any chance of being granted in different countries. Contrary to patent applications on more conventional technologies, for AI technologies one cannot take it for granted that an application that is acceptable in one country will lead to a patent in other countries. In the US, the Alice1 decision of a few years ago had a major impact, making many artificial intelligence applications difficult to patent, and some issued AI-related patents have been declared invalid on the basis of this case. It is nonetheless obvious from the patent applications that are now public that several large companies keep filing applications for AI-related technologies, and some of them are being granted. Just across the border up north, in Canada, the situation is more nuanced.
A few years ago, the Federal Court of Appeal said in the Amazon2 decision that computer implementations could be an essential element of a valid patent. We are still waiting for a decision specific to AI systems. In Europe, Article 52 of the European Patent Convention excludes "programs for computers". However, a patent may be granted if a "technical problem" is resolved by a non-obvious method3, so there may be some limited potential for patents on artificial intelligence technologies there. The recently updated Guidelines for Examination of patent applications related to AI and machine learning, while warning that expressions such as "support vector machine", "reasoning engine" or "neural network" raise a caution flag because they typically refer to abstract models devoid of technical character, point out that applications of AI and ML do make patentable technical contributions, such as:

- the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats; or
- the classification of digital images, videos, audio or speech signals based on low-level features, such as edges or pixel attributes for images.

In contrast, classifying text documents solely on the basis of their textual content is cited as not being a technical purpose per se, but a linguistic one (T 1358/09). Classifying abstract data records or even "telecommunication network data records" without any indication of a technical use being made of the resulting classification is also given as an example of failing to be a technical purpose, even if the classification algorithm may have valuable mathematical properties such as robustness (T 1784/06). In Japan, according to examination guidelines, software-related patents can be granted for inventions "concretely realizing the information processing performed by the software by using hardware resources"4, so it may be easier to obtain a patent on an AI system there.
As you can appreciate, you may end up with variable results from country to country. Several industry giants, such as Google, Microsoft, IBM and Amazon, keep filing applications for artificial intelligence and AI-related technologies. It remains to be seen how many, and which, will be granted, and ultimately which will be upheld in court. The best strategy for now may be to file applications for novel and non-obvious inventions with a sufficient level of technical detail and examples of concrete applications, in case the case law evolves such that artificial intelligence patents are indeed held valid a few years down the road, at least in some countries. In the US, the judicial exceptions to patentability remain:

- mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations;
- certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviours, and business relations); managing personal behaviour or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
- mental processes: concepts performed in the human mind (including an observation, evaluation, judgment or opinion).

Take-home message: patent applications on AI technology that identify a technical problem, provide a detailed technical description of specific implementations of the innovation that solve or mitigate that problem, and give examples of possible outcomes have a greater chance of being allowed as a stronger patent. Setting the innovation within a specific industry, or relating it to specific circumstances, and explaining its advantages over known existing systems and methods also helps overcome subject-matter eligibility issues.
Copyright

From the copyright standpoint, we also face some difficulties, especially for works created by an AI system. Copyright may protect original artificial intelligence software if it consists of "literary works" under the Copyright Act, including computer source code, interface elements, a set of methods of communication for a database system, a web-based system, an operating system, or a software library. Copyright can also cover the data in a database if it meets the definition of a compilation, thereby protecting the collection and assembling of data or other materials. There are two main difficulties in the recognition of copyright protection for AI creations. The first relates to machine-generated works that do not involve the input of human skill and judgment. The second concerns the concept of an author, which does not specifically exclude machine work but may eliminate it indirectly by way of section 5 of the Copyright Act, which indicates that copyright shall subsist in Canada in an original work where the author was a citizen or resident of a treaty country at the time of creation of the work. Recently, we have seen artificial intelligence systems creating visual art and music. The artistic value of these creations may be disputed, but their commercial value can be significant, for example if an AI creates the soundtrack to a movie. There are also major research projects involving the use of AI technologies to write source code for specific applications, for example in the gaming industry. Some jurisdictions, such as the US and Canada, do not provide copyright protection to works created by machines. In Canada, recent case law specifically stated that for a work to be protected under the Copyright Act, there must be a human author5. In the US, some may remember Naruto, the monkey that took a selfie; in the end, there was no copyright in the picture.
While we are not sure how this will translate to artificial intelligence at this point, it is difficult to foresee that an AI system would have any such right if a monkey has none. Meanwhile, other countries, such as the UK, New Zealand and Ireland, have legal provisions whereby the programmer of the artificial intelligence technology will likely own the work created by the computer. These provisions were not specifically drafted with AI in mind, but the broad language used will likely apply. For example, in the UK, copyright is granted to "the person by whom the arrangements necessary for the creation of the work are undertaken"6. The work created by the system may therefore have no protection at all in Canada, the US and several other jurisdictions, yet be protected by copyright elsewhere, at least until Canada and the US decide to address this issue through legislative changes.

Trade secrets

Trade secret protection covers any information that is secret and not part of the public domain. For the information to remain confidential, a person must take measures to protect it, such as obtaining undertakings from third parties not to divulge it. There is no time limit on this type of protection, and it can be sought for machine-generated information.

Data privacy

Turning to data privacy, some legal scholars have noted that, if construed literally, the European GDPR is difficult to reconcile with some AI technologies. One need only think of the right to erasure and the requirement for lawful processing (or the prohibition of discrimination), which may be difficult to implement7. Neural networks typically learn from datasets created by humans or through human training. These networks therefore often end up with the same biases as the persons who trained them, and sometimes with even more bias, because what neural networks do is find patterns.
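As a deliberately simplified illustration of that last point, consider a model trained on skewed historical decisions; it will simply reproduce the skew. Everything below (the data, the groups, the numbers) is invented for the example:

```python
from collections import Counter

# Toy, invented hiring data: (group, hired). The historical decisions are
# skewed against group "f" for reasons unrelated to qualifications.
history = [("m", True)] * 80 + [("m", False)] * 20 + \
          [("f", True)] * 30 + [("f", False)] * 70

def train(records):
    """A trivial 'model': the hire rate per group, learned from the data."""
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    """Recommend hiring whenever the learned rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(predict(model, "m"))  # True:  the model reproduces the historical skew
print(predict(model, "f"))  # False: the skew, not merit, drives the outcome
```

Nothing in this sketch refers to qualifications; the model only optimizes agreement with past decisions, which is precisely how a discriminatory pattern survives the move to an automated system.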
They may end up finding a pattern and optimizing a situation from a mathematical perspective while exhibiting unacceptable racial or sexist bias, because they do not have "human" values. Furthermore, there are challenges when working with smaller datasets, which can allow the "learning" process of the artificial intelligence to be reversed: this may lead to privacy leaks and trigger the right to have specific data removed from the training of the neural network, which is itself technically difficult. One also has to take into account laws and regulations that are specific to certain industries, for example HIPAA compliance in the US for health records, which includes privacy rules and technical safeguards8. Laws and regulations must also be reconciled with local policies, such as those set by government agencies, which must be met in order to have access to some government data; for example, to access electronic health records in the Province of Quebec, where the authors are based. One of the challenges in such cases is to come up with practical solutions that comply with all applicable laws and regulations. In many cases, one will end up creating parallel systems if the technical requirements are not compatible from one country to another.

1. Alice Corp. v. CLS Bank International, 573 U.S., 134 S. Ct. 2347 (2014).
2. Canada (Attorney General) v. Amazon.com, Inc., 2011 FCA 328.
3. T 0469/03 (Clipboard formats VI/MICROSOFT), European Patent Office, Boards of Appeal, 24 February 2006.
4. Examination Guidelines for Invention for Specific Fields (Computer-Related Inventions), Japanese Patent Office, April 2005.
5. Geophysical Service Incorporated v. Encana Corporation, 2016 ABQB 230; 2017 ABCA 125; 2017 CanLII 80435 (SCC).
6. Copyright, Designs and Patents Act 1988, c. 48, s. 9(3) (U.K.); see also Copyright Act 1994, s. 5 (N.Z.); Copyright and Related Rights Act 2000, Part I, s. 2 (Act No. 28/2000) (Irl.).
7. General Data Protection Regulation, (EU) 2016/679, arts. 9 and 17.
8. Health Insurance Portability and Accountability Act of 1996.
-
Open innovation: A shift to new intellectual property models?
"The value of an idea lies in the using of it." This was said by Thomas Edison, known as one of the most outstanding inventors of the last century. Though he fervently used intellectual property protections and filed more than 1,000 patents in his lifetime, Edison understood the importance of using his external contacts to foster innovation and pave the way for his inventions to yield their full potential. In particular, he worked with a network of experts to develop the first direct current electrical circuit, without which his light bulb invention would have been virtually useless. Open innovation refers to a mode of innovation that bucks the traditional research and development process, which normally takes place in secrecy within a company. A company that innovates openly will entrust part of the R&D process for its products or services, or its research work, to external stakeholders, such as suppliers, customers, universities or competitors. A more academic definition of open innovation, developed by Professor Henry Chesbrough at UC Berkeley, reads as follows: "Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively."1

Possible approaches: collaboration vs. competition

A company wishing to use open innovation will have to decide which innovation "ecosystem" to join: should it favour membership in a collaborative community or a competitive market?

Joining a collaborative community

In this case, intellectual property protections are limited and the focus is on developing knowledge through sharing. Many IT companies and consortia of universities join together in collaborative groups to develop skills and knowledge with a view to pursuing a common research goal.

Joining a competitive market

In this case, intellectual property protections are robust and there is hardly any exchange of information.
The ultimate goal is profit maximization. Unlike the collaborative approach, relationships translate into exclusivity agreements, technology sales and licensing. This competitive approach is particularly pervasive in the field of video games, for example.

Ownership of intellectual property rights as a prerequisite for open innovation

The success of open innovation lies primarily in the notion that sharing knowledge can be profitable. Secondly, a company has to strike a balance between what it can reveal to those involved (suppliers, competitors, specialized third-party companies, the public, etc.) and what it can gain from its relationships with them. It also has to anticipate its partners' actions in order to control its risks before engaging in information sharing. At first glance, resorting to open innovation may seem an imprudent use of intellectual property assets. Intellectual property rights generally confer a monopoly on the owner, allowing it to prevent third parties from copying the protected technology. However, studies have shown that the imitation of a technology by a competitor can be beneficial.2 Other research has shown that a market with strong intellectual property protections increases the momentum of technological advances.3 Ownership of intellectual property rights is therefore a prerequisite for any company that innovates or wants to innovate openly. Because open innovation methods bring companies to rethink their R&D strategies, they also have to manage their intellectual property portfolios differently. A company must keep in mind, however, that it has to properly manage its relations with the various external stakeholders it plans to do business with, in order to avoid unwanted disclosure of confidential information relating to its intellectual property and, in turn, to profit from this innovation method without giving up its rights.

Where does one get innovation?
In an open innovation approach, intellectual property can be brought into a company from an external source, or the transfer can occur the other way around. In the first scenario, a company will reduce its control over its research and development process and go elsewhere for intellectual property or expertise that it does not have in-house. In such a case, the product innovation process can be considerably accelerated by the contributions made by external partners, and can result in:

- the integration of technologies from specialized third-party partners into the product under development;
- the forging of strategic partnerships;
- the granting of licences to use a technology belonging to a third-party competitor or supplier to the company; and
- the search for external ideas (research partnerships, consortia, idea competitions, etc.).

In the second scenario, a company will make its intellectual property available to stakeholders in its external environment, particularly through licensing agreements with strategic partners or secondary market players. In this case, a company can even go so far as to make one of its technologies public, for example by publishing the code of software under an open-source licence, or to assign its intellectual property rights in a technology that it owns but for which it has no use.

Some examples

Examples of open innovation success stories are many. For instance, Google made its machine learning tool TensorFlow available to the public under an open-source licence (Apache 2.0) in 2015. As a result, Google allowed third-party developers to use and modify its technology's code under the terms of the licence while controlling the risk: any interesting discovery made externally could quickly be turned into a product by Google. This strategy, common in the IT field, has allowed the market to benefit from an interesting technology and Google to position itself as a major player in the field of artificial intelligence.
The example of SoftSoap liquid soap illustrates the ingenuity of American entrepreneur Robert Taylor, who developed and marketed his product without strong intellectual property protection by relying on external suppliers. In 1978, Taylor was the first to think of bottling liquid soap. For his invention to be feasible, he had to purchase plastic pumps from external manufacturers, because his company had no expertise in manufacturing this component. These pumps were indispensable, as they had to be screwed onto the bottles to dispense the soap. At the time, the patent on liquid soap had already been filed and Mr. Taylor's invention could not be patented. To prevent his competitors from copying his invention, Taylor placed a $12 million order with the two sole plastic pump manufacturers. This had the effect of saturating the market for nearly 18 months, giving Mr. Taylor an edge over his competitors, who were unable to compete because of the lack of availability of soap pumps from the manufacturers.

ARM processors are a good example of the use of open innovation in a context of maximizing intellectual property. ARM Ltd. benefited from the reduced control that tech giants such as Samsung and Apple keep over their development and manufacturing processes, as they increasingly integrate externally developed technologies into their products. The particularity of ARM processors lies in their marketing method: ARM Ltd. does not sell its processors as finished processors fused in silicon. Rather, it grants licences to independent manufacturers to use the architecture it has developed. This makes ARM Ltd. different from other processor manufacturers and has allowed it to gain a foothold in the IT parts supplier market, offering a highly flexible technology that can be adapted to various needs depending on the type of product (phone, tablet, calculator, etc.) in which the processor will be integrated.
Conclusion

The use of open innovation can help a company significantly accelerate its research and development process while limiting costs, either by using the intellectual property of others or by sharing its own. Although there is no magic formula, it is certain that, to succeed in an open innovation process, a company must have a clear understanding of the competitors and partners it plans to collaborate with and must manage its relations with its partners accordingly, so as not to jeopardize its intellectual property.

1. Henry Chesbrough, Wim Vanhaverbeke and Joel West, Open Innovation: Researching a New Paradigm, Oxford University Press, 2006, p. 1.
2. Silvana Krasteva, "Imperfect Patent Protection and Innovation," Department of Economics, Texas A&M University, December 23, 2012.
3. Jennifer F. Reinganum, "A Dynamic Game of R and D: Patent Protection and Competitive Behavior," Econometrica, The Econometric Society, Vol. 50, No. 3, May 1982; Ryo Horii and Tatsuro Iwaisako, "Economic Growth with Imperfect Protection of Intellectual Property Rights," Discussion Papers in Economics and Business, Graduate School of Economics and Osaka School of International Public Policy (OSIPP), Osaka University, Toyonaka, Osaka 560-0043, Japan.
-
Artificial intelligence at the lawyer’s service: is the dawn of the robot lawyer upon us?
Over the past few months, our Legal Lab on Artificial Intelligence (L3AI) team has tested a number of legal solutions that incorporate AI to a greater or lesser extent. According to the authors Remus and Levy1, most of these tools will have a moderate potential impact on legal practice. Among the solutions tested by the members of our laboratory, certain functionalities in particular drew our attention.

Historical context

At the start of the 1950s, when Grace Murray Hopper, a pioneer of computer science, attempted to convince her colleagues to create a computer language using English words, she was told that it was impossible for a computer to understand English. Contrary to the engineers and mathematicians of the time, however, the business world was more receptive to the idea. Thus was born "Business Language version 0", or B-0, the forerunner of a number of more modern computer languages and a first (small) step towards natural language processing. The fact remains that using IT for legal solutions was a challenge, specifically because of the nature of the information to be processed, which was often presented in text format and was not very organized. In 1986, author Richard Susskind was already addressing the use of artificial intelligence to process legal information2. It was not until recently, however, with advances in the field of natural language processing, that we have seen the creation of software applications with the potential to substantially modify the practice of law. A number of lawyers and notaries are now concerned about the future of their profession. Are we witnessing the creation of the robot lawyer? Currently, the technological solutions available to legal practitioners make it possible to automate certain specific aspects of the multitude of tasks they fulfill in their work.
The tools for automating and analyzing documents are relevant examples in that they make it possible, on the one hand, to create legal documents from an existing model and, on the other, to identify certain elements that may be potentially problematic in the submitted documents. However, no solution can claim to completely replace the legal practitioner. Recently, the above-mentioned authors Remus and Levy analyzed and measured the impact of automation on the work of lawyers3. Generally speaking, they predict that only the document research process will be significantly disrupted by automation, and that the tasks of managing files, drafting documents, conducting due diligence reviews, and performing research and legal analysis will be only slightly impacted. They also feel that the tasks of document management, legal drafting, consulting, negotiating, collating facts, preparation and representation before the court will only be slightly impacted by solutions integrating artificial intelligence4.

Documentary analysis tools: Kira, Diligen, Luminance, Contract Companion, LegalSifter, LawGeex, etc.

First, among the tools making it possible to conduct documentary analysis, there are two types of solutions offered on the market. On the one hand, several use supervised and unsupervised learning techniques to sort and analyze a vast number of documents in order to extract certain specific information from them. This type of tool is particularly interesting in the context of a due diligence review. It makes it possible, in particular, to identify the object of a given contract as well as certain clauses, the applicable laws and other set items, in order to detect certain elements of risk determined beforehand by the user. Examples include the due diligence tools Kira, Diligen and Luminance5. On the other hand, certain solutions are designed to analyze and review contracts to facilitate negotiations with a third party.
This type of tool uses natural language processing (NLP) to identify the specific terms and clauses of a contract. It also identifies elements missing from a specific type of contract. For example, in a confidentiality agreement, the tool will notify the user if the concept of confidential information is not defined. It also provides comments regarding the various elements identified, in order to provide guidance on negotiating the terms of the contract. These comments and guidelines can be modified based on the attorney's preferred practices. These solutions are particularly useful when a legal professional is called on to advise a client on whether or not to accept the terms of a contract tabled by a third party. The Contract Companion6 tool drew our attention because of its ease of use, even if it merely assists a human drafting a contract, without identifying problematic clauses and their content. Instead, it detects inconsistencies such as a capitalized term missing a definition, among other examples. LegalSifter and LawGeex7 are presented as assistants to the negotiation process, proposing solutions that identify discrepancies between a submitted contract and the best practices favoured by the firm or company, thereby helping to flag and resolve any missing or problematic clauses.

Legal research tools: InnovationQ, NLPatent, etc.

Recently, certain solutions making it possible to conduct legal research and predict the outcome of court decisions have appeared on the market. Some companies propose simulating a ruling based on factual elements outlined in the context of a given legal system, to help with the decision-making process. Accordingly, they make use of NLP to understand the questions asked by attorneys and to search the legislation, case law and doctrinal sources.
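To make that retrieval step concrete, here is a minimal sketch of TF-IDF ranking, the kind of statistical text matching that underlies such research tools. The case summaries are invented for the example, and real products layer far more sophisticated NLP on top; this is an illustration, not how any of the named products actually work:

```python
import math
from collections import Counter

# Invented case summaries to rank against a lawyer's free-text query.
cases = {
    "Case A": "employee dismissed without notice claims damages for wrongful dismissal",
    "Case B": "landlord seeks eviction of tenant for unpaid rent",
    "Case C": "wrongful dismissal claim rejected because notice was reasonable",
}

def tfidf_vectors(docs):
    """Weight each term of each document by term frequency x inverse document frequency."""
    n = len(docs)
    df = Counter(term for text in docs.values() for term in set(text.split()))
    vectors = {}
    for name, text in docs.items():
        tf = Counter(text.split())
        vectors[name] = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def search(query, docs):
    """Return case names, best match first."""
    vecs = tfidf_vectors({**docs, "_q": query})
    q = vecs.pop("_q")
    return sorted(docs, key=lambda name: cosine(q, vecs[name]), reverse=True)

print(search("damages for wrongful dismissal", cases))  # ['Case A', 'Case C', 'Case B']
```

Even this toy version captures the useful behaviour: distinctive words such as "damages" count for more than common ones such as "for", so the most relevant summaries surface first.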
Some of the solutions even make predictions for lawyers about their chances of winning or losing based on given elements, such as the opposing party's lawyer, the judge and the level of the court. To do so, the tool uses machine learning: it asks questions about the client's situation and then analyzes thousands of similar cases upon which the courts have already passed judgment. Lastly, the artificial intelligence system formulates a prediction based on all of the cases analyzed, along with a personalized explanation and a list of relevant case law. With the advent of these tools, authors are anticipating significant changes in the types of lawsuits that will be brought before the courts. They predict that technology will enable the settlement of disputes and that judges will only have to rule on matters that raise the most complex legal questions and that require concrete legal developments.8 In patent law, the search for existing inventions ("prior art" in the intellectual property lexicon) is facilitated by tools that call on NLP. Patent applications are usually drafted in a specialized vocabulary. These solutions make it possible to identify the target technology, determine the relevant prior art and analyze the related documents so as to identify the disclosed elements. In this regard, the InnovationQ and NLPatent9 tools seem to show interesting potential.

Legal drafting tools: Specif.io, etc.

Some of the solutions available on the market call on the "creative" potential of artificial intelligence applied to the legal field. Among these, we were interested in a solution that is capable of drafting a specification in the context of a patent application. The Specif.io10 tool makes it possible to draft a description of the invention using vocabulary suited to the form required for patent applications, starting from the claims, which briefly outline the scope of the invention.
For the time being, this solution is restricted to the field of software development. Even if, given the current stage of the product, the lawyer is usually called on to rework the text significantly, he or she can save a considerable amount of time on a first draft.

Recommendations

In conclusion, artificial intelligence tools are not all progressing in the same manner in every area of the law. A number of tools can already assist attorneys with various repetitive tasks or help them identify errors or potential risks in different documents. However, it is important to consider that such tools are still far from having the human faculty of contextualizing their operations. Where the information is organized and structured, as with patent applications, a domain in which databases are organized and accessible online for most Western nations, the automated tools make it possible not only to assist users in completing their tasks, but even to provide a first draft of a specification based on simple draft claims. However, research and development are still needed before we can truly rely on such solutions. We therefore feel it relevant to offer certain key recommendations to attorneys seeking to integrate AI tools into their everyday practice:

- Be aware of the possibilities and limits of an AI tool: when selecting an AI tool, it is important to run tests on it so as to assess its operation and results. One must set a specific objective and ensure that the tool being tested can help achieve it.
- Human supervision: to date, it is important for any AI tool to be used under human supervision. This is not only an ethical obligation to ensure the quality of the services rendered, but also a simple rule of caution when using tools that do not have the capacity to contextualize the information submitted to them.
- Processing of ambiguities: several AI tools allow their operational settings to be varied. These settings should be adjusted so that the processing of any ambiguous situation is entrusted to the humans operating the AI tools.
- Data confidentiality: remember that we are bound to uphold the confidentiality of the data being processed! The processing of confidential information by solution providers is a critical challenge to consider, and we should not be afraid to ask questions on this subject.
- Informed employees: artificial intelligence too often tends to frighten employees. As with any technological change, internal training is needed to ensure that the use of such tools complies with the company's requirements. Thus, not only must the proper AI tools be selected, but the proper training must be provided in order to benefit from them.

1. Remus, D., & Levy, F. (2017). Can Robots Be Lawyers: Computers, Lawyers, and the Practice of Law. Geo. J. Legal Ethics, 30, 501.
2. Susskind, R.E. (1986). Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning. The Modern Law Review, 49(2), 168-194.
3. Supra, note 1.
4. Id.
5. kirasystems.com; diligen.com; luminance.com.
6. https://www.litera.com/products/legal/contract-companion.
7. legalsifter.com; lawgeex.com.
8. Luis Millan, "Artificial Intelligence," Canadian Lawyer (April 7, 2017), online: http://www.canadianlawyermag.com/author/sandra-shutt/artificial-intelligence-3585.
9. http://ip.com/solutions/innovationq/; nlpatent.com.
10. specif.io/index.
-
First pilot project on the use of autonomous vehicles comes into effect
The Autonomous Bus and Minibus Pilot Project1 (the “Pilot Project”) came into effect in Quebec on August 16, 2018. The project provides guidelines for the regulated driving of the first autonomous vehicles on Quebec’s roads.

Driving autonomous vehicles in Quebec

An autonomous vehicle is defined by the new Highway Safety Code as “a road vehicle equipped with an automated driving system that can operate a vehicle at driving automation level 3, 4 or 5 of the SAE International’s Standard J3016”.2 Driving autonomous vehicles is currently prohibited in Quebec other than in accordance with a pilot project.3

Eligibility requirements

To be authorized by the Minister under the Pilot Project, a manufacturer, distributor or operator of autonomous vehicles (referred to by the Pilot Project as the “promoter”) must submit certain information to the Minister of Transport and to the Société de l’assurance automobile du Québec (“SAAQ”) concerning their experimental project, including, in particular:
- an application specifying their project and the objectives pursued;
- a description of the vehicles that will be used;
- the area in which the project will be implemented; and
- the safety measures proposed.4

Insurance and security

Under the new Highway Safety Code, the Pilot Project provides that the promoter of a project must carry a minimum of $1,000,000 in liability insurance to guarantee compensation for material harm.5 In the event of an accident involving an autonomous vehicle operated under an experimental project, the SAAQ may recover the compensation it will be required to pay under the Automobile Insurance Act6 from the manufacturer or distributor of the autonomous vehicle involved in the accident.
In that case, the operator of a project will have the obligation to reimburse the SAAQ for the compensation paid.7 Security must also be provided to the SAAQ to guarantee reimbursement, in an amount that will be determined by the Minister on a case-by-case basis, depending on the project. A manufacturer or distributor from which the SAAQ has made a claim for compensation paid may refuse to make reimbursement or request a reduction of the amount claimed in two situations: (1) by proving the fault of the victim or of a third person; or (2) in the case of superior force.8

Experimental project

The entry into effect of the Pilot Project authorized a first experimental project in Quebec, sponsored by Keolis Canada Innovation, s.e.c.9 The purpose of the project is to put into service Navya autonomous minibuses capable of transporting up to 15 passengers, travelling on a closed circuit in Candiac. The vehicles will travel at a maximum speed of 25 km/h and a driver will be on board to take control of the vehicle, if necessary.10 We can expect to see a number of other projects in the future, now that there is a legislative framework allowing them.

Autonomous Bus and Minibus Pilot Project (Highway Safety Code, CQLR, c. C-24.2, s. 633.1). [Pilot Project] Highway Safety Code, CQLR, c. C-24.2, s. 4. Highway Safety Code, CQLR, c. C-24.2, s. 492.8; except for vehicles at level 3, which may be driven if their sale is authorized in Canada. Pilot Project, s. 4. Pilot Project, s. 20. Automobile Insurance Act, CQLR, c. A-25. Pilot Project, s. 21. Pilot Project, s. 22. Pilot Project, s. 26. “Une navette à l’essai pour un an à Candiac”, La Presse, August 11, 2018, Montréal.
-
Dr. Robot at your service: artificial intelligence in healthcare
Artificial intelligence technologies are extremely promising in healthcare.1 By examining, cross-referencing and comparing a phenomenal amount of data,2 AI lets researchers work more quickly at a lower cost3 and facilitates doctors’ decision-making with regard to diagnosis, treatment and choice of prescription. The integration of AI into the healthcare field can take various forms:4
- Management of electronic medical records (e.g., Omnimed)
- Direct patient care to improve decision-making with regard to diagnosis, prognosis and choice of treatment method
- Integration in the area of monitoring and medication (e.g., Dispill)
- The performance of robotic exams and surgeries
- Indirect patient care functions, such as: optimization of workflow; better management of hospital inventory
- Home care applications, where portable devices and sensors would be used to assess and predict patient needs

Working to protect innovators, their clients and the public

No matter what form AI takes when it is implemented in the healthcare field in Quebec, as with any innovation, we must adapt and work to protect the public, innovators and their clients. What is an innovator? An innovator is a developer, provider or distributor who is involved in the development and marketing of products that use artificial intelligence.

1 - Innovator protection

As the future of healthcare lies in an increased integration of AI, innovators must be properly supported and protected, which means that they must be equipped with all of the appropriate tools for protecting their rights, especially intellectual property rights. At the time of product development: they must make sure that they obtain the necessary guarantees and commitments from their partners in order to be able to assert their rights in the event that their technology is appropriated by a third party.
At the time of product marketing: having taken care to properly protect their rights, they will avoid prosecution or claims, whether for patent infringement or otherwise. In addition, if the proposed technological solution implies that the data collected, transmitted or analyzed is stored and pooled or that it is shared with other stakeholders, innovators must ensure in particular that the patients’ personal information is protected in accordance with the applicable laws and regulations5 and that this data is not used for commercial purposes. If not, an innovator could be the target of a claim by professional organizations or by patient groups and, when certification is required, that certification could be withdrawn by the Ministère de la Santé et des Services sociaux [health and human services ministry]. To learn more about innovator protection, we invite you to read the following article: Artificial intelligence: contractual obligations beyond the buzzwords. 2 - Protection of clients (buyers of artificial intelligence solutions) Artificial intelligence operations have several intrinsic limits, including the prioritization of quantity over quality of the data collected; systematic errors that are reproduced or amplified;6 and even human error in the entry of the data relied on by professionals and researchers. Accordingly, innovators must ensure that they properly warn their clients of the limits and risks tied to the use of their products in order to protect themselves against potential claims. They must therefore be objective in the way that they represent their products. 
For example, terms like “intelligent database” should be used rather than “diagnostic systems.” This word choice will avoid both potential civil liability claims and the possibility of being reprimanded for violating the Medical Act for performing functions reserved only for doctors.7 The innovator will also be required to enter into a contract with the client that is clear and detailed with regard to the use, access and sharing of data collected in electronic medical records (EMR). 3 - Protection of the public (Collège des médecins du Québec [“Quebec college of physicians”] regulation) All products using AI technology must allow doctors to respect their obligations with regard to creating and maintaining EMR. These obligations are included in Section 9 of the Collège des médecins draft regulation, which is expected to come into force in the near future and will make the use of EMR mandatory. The Collège also intends to specify in this regulation that collected data may not be used [TRANSLATION] “for any purpose other than to monitor and treat patients.”8 The Inquiries Division of the Collège has also recently cautioned its members that the technological tools that they use [TRANSLATION] “must be used exclusively within the context of their duties, meaning the administration of care.”9 The current position of the Collège des médecins and the Ministère de la Santé is that the marketing of data contained in EMR is prohibited even if the data is anonymous. Furthermore, according to Dr. Yves Robert, Secretary of the Collège, even if the shared data is anonymous, it may not be used either to promote a product, such as a less expensive medication in the case of an insurance company, or to influence a doctor’s choice when making a decision. 
The Inquiries Division has also reminded members of their ethical obligation to “disregard any intervention by a third party which could influence the performance of their professional duties to the detriment of their patient, a group of individuals or a population”.11 The use of Big Data would create more than $300 billion USD in value, with two-thirds of that amount coming from reduced expenditures. Big Data Analytics in Healthcare, BioMed Research International, vol. 2015, Article ID 370194; see also Top health industry issues of 2018, PwC Health Research Institute, p. 29. The American consortium Kaiser Permanente holds around 30 petabytes of data, or 30 million gigabytes, and collects 2 terabytes daily. Mining Electronic Records for Revealing Health Data, New York Times, January 14, 2013. For examples of the integration of AI in healthcare in Canada, see Challenge Ahead: Integrating Robotics, Artificial Intelligence and 3D Printing Technologies into Canada’s Healthcare Systems, October 2017. See in particular s. 20 of the Code of ethics of physicians, CQLR c. M-9, r. 17 and the Act respecting the protection of personal information in the private sector, CQLR c. P-39. See When artificial intelligence is discriminatory. Medical Act, CQLR c. M-9, s. 31. Id., s. 9, par. 9. L’accès au dossier médical électronique : exclusivement pour un usage professionnel [“Access to electronic medical records: exclusively for professional use”], Inquiries Division of the Collège des médecins du Québec, February 13, 2018. Marie-Claude Malboeuf, “Dossiers médicaux à vendre” [“Medical records for sale”], La Presse.ca, March 2, 2018. Accès au dossier médical électronique par les fournisseurs [“Access to electronic medical records by providers”], Inquiries Division of the Collège des médecins du Québec, May 29, 2017, citing section 64 of the Code of ethics of physicians, supra, note 12.
-
Ars Ex Machina: Artificial Intelligence, the artist
Similarly to human beings, machines are now capable of creating. They can write poetry, compose symphonies and even paint canvases. They can also take photographs without any human assistance and perform musical pieces with flexibility and expression. On the technical front, such works and performances are successful to the point of confusing many aficionados, who are unable to tell the difference between a work created by humans and one generated by their artificial counterparts. With regard to artistic merit, however, the quality of artificially-generated work is often criticized. For legal experts, the question arises as to whether these works meet all of the criteria for recognition of copyright.

The matter of copyright in Canada

Copyright is the exclusive right to produce, reproduce, sell, license, publish or perform a work or a substantial part thereof, whether it be literary, artistic, dramatic or musical.1 In Canadian law, to be subject to copyright, a work must qualify as an original creation; it must be the product of an author’s exercise of skill and judgment.2 Even though it is difficult to confirm whether a computer can demonstrate skill and judgment, the definition proposed by the Supreme Court clarifies the two aptitudes that aptly describe the task performed by the computer when it creates works of art. Notably, creativity is never considered part of the concept of originality: a work needs to be neither novel nor unique.

Process of artistic creation of an intelligent system

Any creation made by an artificial intelligence system draws its origin from one or more algorithms, that is, a series of mathematical operations performed in order to obtain a result. Such a work may qualify as new, provided that it does not reproduce an existing work. However, it often has a mechanical quality that hinders its acceptance as a real work of art.
Works generated by a computer in an autonomous manner are usually less eclectic than those generated by their human counterparts.3 A system can, for instance, after having been exposed to a vast quantity of Mozart’s symphonies and having acquired the necessary musical theory, generate musical works similar to those of Mozart. Even if they may be criticized from the standpoint of artistic innovation, such works meet the originality criteria in the legal sense, since they draw on a certain acquired aptitude (talent) and on the evaluation of various possible options (judgment). Composing a poem in the style of Verlaine or a Beethoven-like symphony may ultimately lead, according to these criteria, to the recognition of copyright.

The performer robot

The Copyright Act4 also protects performers’ rights in their performance of a given work.5 For a number of years now, computer programs have been able to “play” musical pieces autonomously. Recently, the quality of these programs’ performances has improved considerably, and they demonstrate a subtlety and flexibility that was previously lacking. For example, the Swiss firm ABB developed YuMi, a robot conductor capable of conducting an orchestra of human musicians and following the vocalises of a solo tenor.6 Closer to home, the interactive virtual singer Maya Kodes was created by Neweb.tv, a Montréal-based firm. On stage, Maya sings and interacts with a group of back-up musicians and dancers.7 This presents a plethora of advantages for film producers, impresarios, video game creators and advertisers who, thanks to such technological innovations, may henceforth generate original scores after having selected certain parameters, such as genre, ambience and duration, without having to pay licence fees to the various holders of rights in the music, such as the composer, the creator and the performer.

Who holds the copyright?

Elsewhere in the world

The U.S.
Copyright Office has issued a specific set of regulations requiring that copyright holders be human beings.8 Works produced by a machine or another mechanical process that operates in a random or automatic manner are not, according to these regulations, eligible for copyright protection without creative involvement from a human being.9 Thus, it appears that these provisions give rise to a grey zone, since the law has not been adjusted accordingly. Some jurisdictions, such as Australia,10 have established that copyright is closely tied to a human being. Others have created a legal fiction whereby the creator of the computer program is considered the copyright holder. This is true in the United Kingdom, Ireland and New Zealand.11 The latter solution has been criticized on the ground that the proposed legal fiction makes light of the legal complexities involved in creating a computer program. In fact, the distance between the author of the program and the work ultimately created may prove significant.12 An artificial intelligence program may create something that is completely unexpected and undesired by the person who developed the program.13 The humans behind the artificial intelligence system are not themselves the authors of the underlying message of the literary work or of the melody resulting from the music composed.

In Canada

In the United States, one author proposes that a work produced by a machine be considered a work produced by an employee hired to create or perform works falling within the scope of the United States Copyright Act.14 The concept of the work made for hire also exists in the Copyright Act in Canada, with certain technical nuances.15 Under this approach, the programmer, or the person who commissions the work of the programmer he or she employs, becomes the holder of the economic rights tied to the work, that is to say, the rights related to marketing the work.
This solution leaves aside the notion of moral rights, that is, the author’s right to preserve the integrity of his or her work, the right to claim authorship of the work, even under a pseudonym, and the right to remain anonymous.16 Since these rights cannot be assigned, it is difficult to see how the solution proposed by this author could be viable under Canadian law. In conclusion, the introduction of a new legal regime adapted to artistic creations produced by artificial intelligence systems, and to the copyright in such works, is perceived by many as necessary. For the time being, since the matter has yet to go before the courts, the foreseeable solutions fall into two camps. On the one hand, we can recognize the copyright of the person who created the artificial intelligence that produced the work. On the other hand, if the copyright can be tied to neither the programmer nor the machine, there is a risk that the work will fall into the public domain and thereby lose its economic value. One thing is certain: the desired legal regime must consider the rights of the programmers behind the system with respect to the work ultimately produced and the level of control that such individuals may have over the content subsequently produced. Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries. The Copyright Act, R.S.C. 1985, c.
C-42, ss. 3, 15, 18. The Supreme Court defines talent as “the use of one’s knowledge, developed aptitude or practised ability in producing the work.” It describes judgment as “one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work”. CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13. Bridy, A. (2012). Coding creativity: copyright of the artificially intelligent author. Stan. Tech. L. Rev., 1. RSC 1985, c. C-42. Id., s. 15. YuMi the robot conducts Verdi with Italian Orchestra, Reuters, September 13, 2017, https://www.reuters.com/article/us-italy-concert-robot/yumi-the-robot-conducts-verdi-with-italian-orchestra-idUSKCN1BO0V2. Kirstin Falcao, Montreal developers create 1st interactive holographic pop star, CBC News, November 2, 2016, http://www.cbc.ca/news/canada/montreal/maya-kodes-virtual-singer-1.3833750. U.S. Copyright Office, Compendium of U.S. Copyright Office Practices, § 306 (3d ed. 2017). Id., § 313.2. Acohs Pty Ltd v Ucorp Pty Ltd (2012) FCAFC 16. Copyright, Designs and Patents Act, 1988, c. 48, s. 9(3) (U.K.); Copyright Act 1994, s. 5 (N.Z.); Copyright and Related Rights Act, 2000, Part I, s. 2 (Act. No. 28/2000). Supra, note 3. Wagner, J. (2017). Rise of the Artificial Intelligence Author. The Advocate, 75, 527. Supra, note 3. Section 13(3) of the Copyright Act establishes this specific legal regime and distinguishes between an employment contract and a contract related to a journalistic contribution. Supra, note 4, s. 14.1(1).
-
Autonomous cars in Quebec: the legal uncertainty is clarified at last
With the enactment on April 17, 2018 of Bill 165, An Act to amend the Highway Safety Code and other provisions,1 the driving of autonomous vehicles in Quebec is finally regulated, although a number of uncertainties remain. Indeed, the driving of autonomous vehicles of automation level 3, such as Tesla’s Model X equipped with an improved guidance system, is now permitted in Quebec. While driving vehicles of levels 4 and 5 is not allowed for the moment, we can anticipate that it will be permitted as part of a pilot project implemented by the government, since it has expressed its desire for Quebec to become a recognized leader in certain segments of the electric and smart vehicle industry.2 As a reminder, there are six levels of automation for cars:
- Level 0 – no automation;
- Level 1 – driver assistance;
- Level 2 – partial automation, which provides automatic assistance and acceleration/braking functions but requires that the human driver retain control over all dynamic driving tasks;
- Level 3 – conditional automation, in which dynamic driving tasks are performed by the control system but the human driver must remain available at all times;
- Level 4 – high automation, when a vehicle’s control system provides total control of all driving tasks, even in critical safety situations; and
- Level 5 – full automation, when a vehicle performs all driving tasks alone, without the possibility of human intervention.

THE “OLD” HIGHWAY SAFETY CODE

Until recently, the Highway Safety Code3 (hereinafter the “Code”) contained no definition of an autonomous vehicle. It defined a road vehicle as “a motor vehicle that can be driven on a highway” and a motor vehicle as “a motorized road vehicle primarily adapted for the transportation of persons or property”.4 Those broad definitions, and the fact that there was no specific definition of an autonomous vehicle, created legal uncertainty. Were autonomous vehicles allowed on roads in Quebec?
What would happen in the event of an accident involving an autonomous vehicle? The Transportation Ministry recognized this legal vagueness and introduced amendments to the Code relating to autonomous vehicles, among other things.

THE “NEW” HIGHWAY SAFETY CODE

The Code now defines an autonomous vehicle as “a road vehicle equipped with an automated driving system that can operate a vehicle at driving automation level 3, 4 or 5 of the SAE International’s Standard J3016”.5 The Code prohibits driving autonomous vehicles on roads in Quebec, other than vehicles at automation level 3, when they are authorized for sale in Canada.6 However, the Ministry may implement pilot projects relating to autonomous vehicles, “to study, test or innovate”.7 Pilot projects will last for five years and may also “provide for an exemption from the insurance contribution associated with the authorization to operate a vehicle and set the minimum required amount of liability insurance guaranteeing compensation for property damage caused by an automobile”.8 On the question of liability in the event of an accident involving an autonomous vehicle, a pilot project may “require the manufacturer or distributor to reimburse the Société [de l’assurance automobile du Québec] for compensation that it will be required to pay in the event of an automobile accident”.9

IMPLICATIONS AND UNCERTAINTIES

While Transportation Minister André Fortin maintains that Bill 165 is forward-looking and is confident that it will further improve Quebec’s road safety record,10 uncertainties still surround the conditions that will be placed on projects involving cars of automation levels 4 and 5. The obligations of the drivers and manufacturers of autonomous vehicles with respect to liability insurance will also have to be clarified. A more specific framework for autonomous vehicle manufacturers’ liability will necessarily have to be put in place.
The Quebec government will have no choice but to redouble its efforts to ensure that pilot projects are proposed if it is to catch up to Ontario, which has had an autonomous vehicle pilot project in place since 2016.11 Bill 165, An Act to amend the Highway Safety Code and other provisions; the sanction date of the Bill and the entry into force of the new provisions are not yet known. Gouvernement du Québec, ministère de l’Économie, de la Science et de l’Innovation, “Le gouvernement du Québec soutient la Grappe industrielle des véhicules électriques et intelligents”, Montréal, April 13, 2018, online. Highway Safety Code, CQLR, c. C-24.2. Highway Safety Code, CQLR, c. C-24.2, s. 4. Bill 165, An Act to amend the Highway Safety Code and other provisions, s. 4. Bill 165, An Act to amend the Highway Safety Code and other provisions, s. 125 (addition of section 492.8 to the Highway Safety Code). Bill 165, An Act to amend the Highway Safety Code and other provisions, s. 164 (amendment of section 633.1 of the Highway Safety Code). Bill 165, An Act to amend the Highway Safety Code and other provisions, s. 164 (amendment of section 633.1 of the Highway Safety Code). Bill 165, An Act to amend the Highway Safety Code and other provisions, s. 164 (amendment of section 633.1 of the Highway Safety Code). Journal des débats of the National Assembly, Vol. 44, No. 327, April 17, 2018, online. Pilot Project - Automated Vehicles, O Reg 306/15.
-
Artificial Intelligence and blockchains are vulnerable to cyberattacks
Technologies based on blockchains and AI imply a considerable change for our society. Since the security of the data exchanged is vital, companies must begin adopting a long-term approach right now. Many businesses develop services based on blockchains, in particular in the financial services sector. Cryptocurrencies, one example of blockchain use, transform the way in which some monetary transactions are made, far from the oversight of financial institutions and governments. With regard to AI, businesses sometimes choose technological platforms involving data sharing in order to accelerate the development of their AI tools.

The quantum revolution’s impact on cybersecurity

In 2016, IBM made a computer for testing several quantum algorithms available to researchers.1 Quantum computers work in a radically different way from traditional computers. Within a decade or so, they are expected to be able to perform calculations that exceed the capacity of today’s most powerful computers. Indeed, quantum computers use the quantum properties of matter, in particular the superposition of states, to simultaneously process linked data sets. Shor’s algorithm, which runs on quantum computers, exploits these properties to factor a whole number very quickly, much faster than any traditional computer can. This mathematical operation is the key to deciphering information that has been encrypted by several commonplace computing methods. The technology, which physicists have long been studying, now constitutes a major risk for the security of encrypted data: data meant to remain safe and confidential becomes vulnerable to being misappropriated for unauthorized uses.

Are blockchain encryption methods sufficiently secure?

Several encryption methods are available today, and a number of them will need to be strengthened to preserve data security. The following are but a few examples of vulnerability to quantum computers.
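To make the factoring risk concrete, here is a purely illustrative Python sketch of an RSA-style scheme, using artificially tiny primes (real keys use primes hundreds of digits long): anyone who can factor the public modulus can rebuild the private key from public information alone. Brute-force trial division stands in here for Shor’s algorithm, which achieves the same factorization efficiently on a quantum computer.

```python
from math import isqrt

# Toy RSA key generation with tiny primes (illustration only; real keys
# use primes far too large for classical computers to factor).
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # legitimate decryption works

# An attacker who can factor n recovers the private key from public data alone.
def factor(m):
    for candidate in range(2, isqrt(m) + 1):
        if m % candidate == 0:
            return candidate, m // candidate

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d_recovered, n) == message  # attacker reads the message
```

For a 2048-bit modulus, the `factor` loop above would run longer than the age of the universe on classical hardware; Shor’s algorithm collapses that cost, which is why RSA-type encryption is considered vulnerable in a post-quantum world.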
SHA-2 and SHA-3 methods

The US National Institute of Standards and Technology (NIST) has issued recommendations for the security of various encryption methods.2 The SHA-2 and SHA-3 methods, namely the algorithms that ensure the integrity of blockchains by producing a “hash” of previous blocks, need to be strengthened to maintain current security levels.

Signature methods used by Bitcoin and other cryptocurrencies

Elliptic curve cryptography is a set of cryptographic techniques using one or more properties of the mathematical functions that describe elliptic curves in order to encrypt data. According to the NIST, elliptic curve cryptography will become ineffective. Worryingly, this is the method used for the signatures of cryptocurrencies, including the famous Bitcoin. Recent studies indicate that this method is highly vulnerable to attack by quantum computers, which, in a few years’ time, could crack these codes in under 10 minutes.3

RSA-type cryptographic algorithms

RSA-type cryptographic algorithms,4 which are widely used to forward data over the Internet, are particularly vulnerable to quantum computers. This could have an impact in particular when large quantities of data need to be exchanged among several computers, for example to feed AI systems.

More secure cryptographic algorithms

The NIST has identified some approaches that are more secure. An algorithm developed by Robert McEliece, mathematician and professor at Caltech, seems able to resist such attacks,5 for now. For the longer term, we can hope that quantum technology itself will make it possible to generate secure keys.

Legal and business implications of data protection

Companies are required by law to protect the personal and confidential data entrusted to them by their customers. They must therefore take suitable measures to protect this valuable data.
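The hashing role that SHA-2 plays in blockchain integrity, described above, can be sketched in a few lines of Python (a deliberately simplified model; real blockchains also include timestamps, Merkle trees and consensus mechanisms): each block stores the hash of the previous block, so tampering with any block breaks every link that follows.

```python
import hashlib

def block_hash(previous_hash: str, payload: str) -> str:
    # SHA-256, a member of the SHA-2 family, fingerprints the previous
    # block's hash together with this block's payload.
    return hashlib.sha256((previous_hash + payload).encode()).hexdigest()

# Build a small chain: each entry pairs a payload with the hash
# linking it to its predecessor.
genesis = "0" * 64
chain = [("tx: A pays B 5", block_hash(genesis, "tx: A pays B 5"))]
chain.append(("tx: B pays C 2", block_hash(chain[-1][1], "tx: B pays C 2")))

# Tampering with the first payload yields a different hash, so the link
# stored in the next block no longer matches: the alteration is detectable.
tampered_hash = block_hash(genesis, "tx: A pays B 500")
assert tampered_hash != chain[0][1]
assert block_hash(tampered_hash, "tx: B pays C 2") != chain[1][1]
```

This is precisely why weakening the underlying hash function would undermine the integrity guarantee of the entire chain, not just of a single block.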
Companies choosing an AI or blockchain technology must therefore do so with the long term in mind: once adopted, the technology will be used for several years and may need to survive the arrival of quantum computers. What is more, the security flaws of technologies that are not under the control of government authorities or of a single company will need to be fixed. Unlike with more traditional technologies, a simple update cannot be installed on a single server. In some cases, it will be necessary to reconsider the very structure of a decentralized technology such as blockchain.

Choosing an evolving technology

The key will be to choose a technology enabling businesses to meet their security obligations in a post-quantum world, or at least an architecture that will enable encryption algorithms to be updated in a timely manner. It will therefore be necessary to establish a dialogue among computer scientists, mathematicians, physicists and… lawyers! Press Release: IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation: https://www-03.ibm.com/press/us/en/pressrelease/49661.wss; see also Linke, Norbert M., et al. “Experimental comparison of two quantum computing architectures.” Proceedings of the National Academy of Sciences (2017): 201618020. Chen, Lily, et al. Report on post-quantum cryptography. US Department of Commerce, National Institute of Standards and Technology, 2016. Aggarwal, Divesh, et al.
“Quantum attacks on Bitcoin, and how to protect against them.” arXiv preprint arXiv:1710.10377 (2017). This acronym comes from Rivest, Shamir, and Adleman, the three developers of this kind of encryption. Supra, note 2; see also Dinh, Hang, Cristopher Moore, and Alexander Russell. “McEliece and Niederreiter cryptosystems that resist quantum Fourier sampling attacks.” Annual Cryptology Conference. Springer, Berlin, Heidelberg, 2011.
-
Artificial Intelligence, Implementation and Human Resources
In this era of a new industrial revolution, dubbed “Industry 4.0”, businesses are facing sizable technological challenges. Some refer to smart plants or the industry of the future. This revolution is characterized by the advent of new technology that allows for the “smart” automation of human activity. The aim of this technological revolution is to increase productivity, efficiency and flexibility. In some cases, it means a radical change to the corporate value chain. Artificial intelligence is an integral part of this new era. Dating back to the mid-1950s, it is typically defined as the simulation of human intelligence by machines. Artificial intelligence aims to substitute, supplement and amplify practically all tasks currently performed by humans,1 becoming in effect a serious competitor to human beings in the job market. Over the past few years, the advent of deep learning and other advanced learning techniques for machines and computers has given rise to several industrial applications that have the potential to revolutionize how businesses organize the workplace. It is believed that artificial intelligence could drive a 14% increase in global GDP by 2030, a $15.7 trillion potential contribution to the global economy annually.2 The productivity gains in the workplace created by artificial intelligence alone could represent half that amount. It goes without saying that the job market will have to adjust. A study published a few years ago predicted that within twenty years, close to 50% of jobs in the United States could be completely or partially automated.3 In 2016, an OECD study concluded that on average 9% of jobs in the 21 OECD countries studied would be at a high risk of automation,4 and some experts even go so far as to claim that 85% of the jobs that workers will be doing in 2030 haven’t been invented yet!5 At the very least, this data shows that while human beings are still indispensable, the job market will be strongly influenced by artificial intelligence.
Whether due to the transformation of tasks, the disappearance of jobs or the creation of new trades, disruptions in the workplace are to be expected and businesses will have to deal with them. The arrival of artificial intelligence thus appears to be inevitable. In some cases, this technology will lead to a significant competitive advantage. Innovative businesses will stand out and thrive. However, in addition to the major investments that will be required, the implementation of this new technological tool will require time, effort and changes to work methods.
Implementation
As an entrepreneur, you have no choice but to adapt to this new reality. Not only will your employees be affected by the organizational change, they will also have to be involved to ensure its success. During the implementation phase, you may discover that new skills will be required to adjust to your new technology. It is also very likely that some of your employees and managers will be averse to the change. This is a normal reaction, since as humans we tend to respond negatively to any sort of change. A change in the work environment can lead to a sense of insecurity, requiring that employees adopt new behaviours or work methods6 and dragging them out of their comfort zone. An employee’s fears can also be the result of misperceptions. Potential impacts must be carefully considered before your new technology arrives. The failure rate for organizational change is over 70%. It is believed that the high failure rate for the adoption of new technology is due to the fact that the human aspect is often overlooked in favour of the technological or operational benefits of implementing the technology7. Failure can lead to higher costs for introducing the new tool, productivity losses or the abandonment of the initiative. Advance planning is especially important when implementing artificial intelligence to identify any challenges related to its integration in your business.
It is important that smart technology be implemented by skilled employees who share the business’ values to ensure the new system does not perpetuate unwanted behaviours. To help with your planning, here are a few questions to stimulate discussion:
Implementation
- What is the objective of the new technology, and what are its advantages and disadvantages?
- Who will be in charge of the project?
- What skills will be needed to implement the technology in the organization?
- Which employees will be responsible for implementing the technology? What information and training should they be given?
Work organization
- What duties will be replaced or affected by the new technology, and how will they be affected?
- What new tasks will be created after the new technology is set up?
- Will positions be abolished, staff transferred or jobs lost?
- What terms of the collective agreement will have to be considered in terms of transfers, layoffs and technological change?
- What notice and severance should be anticipated if there are job losses?
- What positions will have to be created after the technology is set up? What new skills will be required for these positions?
- How and when will new positions be filled?
- How will the users of the technology be trained?
Communication
- Who will be in charge of communication?
- Should you set up communication tools and a communication plan? In what form will such communication be made, and how often?
- When and how will employees and managers be informed of the arrival of the new technology, its purpose, its advantages and its impacts on the organization?
- When and how will job losses, labour transfers and new positions be announced?
- What tools will be used to reassure employees and eliminate misperceptions?
Mobilization
- What actions can be taken to engage employees and managers in the project?
- What are the likely reactions to the change, and how can they be lessened or eliminated?
- What tools can managers be given to help them oversee the change?
This list is not meant to be exhaustive, but it can be a starting point for considering the potential impacts of new smart technology on your employees. Bear in mind that good communication with your employees and their commitment could make the difference between the success and failure of the technological change.
Lavery Legal Lab on Artificial Intelligence (L3IA)
Lavery has set up the Lavery Legal Lab on Artificial Intelligence (L3IA) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries.
1. Spyros Makridakis, The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms, School of Economic Sciences and Business, Neapolis University Paphos, 2017.
2. Sizing the prize, PwC, 2017.
3. Carl Benedikt Frey and Michael A. Osborne, The future of employment: How susceptible are jobs to computerisation?, Oxford University, 2013.
4. Melanie Arntz, Terry Gregory and Ulrich Zierahn, The Risk of Automation for Jobs in OECD Countries, OECD Social, Employment and Migration Working Papers, 2016.
5. Emerging Technologies’ Impact on Society & Work in 2030, Institute for the Future and Dell Technologies, 2017.
6. Simon L. Dolan, Éric Gosselin and Jules Carrière, Psychologie du travail et comportement organisationnel, 4th ed., Gaétan Morin Éditeur, 2012.
7. Yves-Chantal Gagnon, Les trois leviers stratégiques de la réussite du changement technologique, Télescope - Revue d’analyse comparée en administration publique, École nationale d’administration publique du Québec, fall 2008.
-
Intellectual Property and Artificial Intelligence
Although artificial intelligence has been evolving constantly in the past few years, the law sometimes has difficulty keeping pace with such developments. Intellectual property issues are especially important: businesses investing in these technologies must be sure that they can take full advantage of the commercial benefits that such technologies provide. This newsletter provides an overview of the various forms of intellectual property that are applicable to artificial intelligence. The initial instinct of many entrepreneurs would be to patent their artificial intelligence processes. However, although in some instances such a course of action would be an effective method of protection, obtaining a patent is not necessarily the most appropriate form of protection for artificial intelligence or software technologies generally. Since the major Supreme Court of the United States decision in Alice Corp. v. CLS Bank International1, it is now acknowledged that applying abstract concepts in the IT environment will not suffice to transform such concepts into patentable items. For instance, in light of that decision, a patent that had been issued for an expert system (which is a form of artificial intelligence) was subsequently invalidated by a U.S. court.2 In Canada, case law has yet to deal specifically with artificial intelligence systems. However, the main principles laid down by the Federal Court of Appeal in Schlumberger Canada Ltd. v. Canada (Commissioner of Patents)3 are still relevant to the topic. In that case, it was decided that a method of collecting, recording and analyzing data using a computer programmed on the basis of a mathematical formula was not patentable. 
However, in a more recent ruling, the same Court held that a data-processing technique may be patentable if it “[…] is not the whole invention but only one of a number of essential elements in a novel combination.”4 The unpatentability of an artificial intelligence algorithm in isolation is therefore to be expected. In Europe, according to Article 52 of the 1973 European Patent Convention, computer programs are not patentable. Thus, the underlying programming of an artificial intelligence system would not be patentable under this legal system. Copyright is perhaps the most obvious form of intellectual property for artificial intelligence. Source code has long been recognized as a “work” within the meaning of the Canadian Copyright Act and similar legislation in most other countries. Some jurisdictions have even enacted laws specifically aimed at software protection.5 On this issue, an earlier Supreme Court of Canada ruling in Apple Computer, Inc. v. Mackintosh Computers Ltd.6 is of some interest: in that case, the Court held that computer programs embedded in ROM (read-only memory) chips are works protected by copyright. A similar conclusion was reached earlier by a U.S. court.7 These decisions are meaningful with respect to artificial intelligence systems because they extend copyright protection not only to code programmed in complex languages or on advanced artificial intelligence platforms, but also to the resulting object code, even on electronic media such as ROM chips. Copyright, however, does not protect ideas or the general principles of a particular piece of code; it only protects the expression of those ideas or principles. In addition to copyright, the protection afforded by trade secrets should not be underestimated. More specifically, in the field of computer science, it is rare for customers to have access to the full source code.
Furthermore, in artificial intelligence, source code is usually quite complex, and it is precisely this technological complexity that contributes to its protection.8 This approach is particularly appealing for businesses providing software as a remote service. In these cases, users only have access to an interface, never to the source code or the compiled code. It is therefore almost impossible to reverse engineer such technology. However, when an artificial intelligence system is protected only as a trade secret, there is always the risk that a leak originating with one or more employees will allow competitors to learn the source code, its structure or its particularities. It would be nearly impossible to prevent source code from circulating online after such a leak. Companies may attempt to bolster the protection of their trade secrets with confidentiality agreements, but unfortunately these are insufficient where employees act in bad faith or in the case of industrial espionage. It would therefore be wise to implement knowledge-splitting measures within a company, so that only a restricted number of employees have access to all the critical information. Incidentally, it would be strategic for an artificial intelligence provider to make sure that its customers highlight its trademark, along the lines of the “Intel Inside” cooperative marketing strategy, to promote its system among potential customers. In the case of artificial intelligence systems sold commercially, it is also important to consider intellectual property in the learning outcomes that such systems generate through use. This raises the issue of ownership. Does a database generated by an artificial intelligence system developed by a software supplier while being used by one of its customers belong to the supplier or to the customer? Often, the contract between the parties will govern the situation.
However, a business may legitimately wish to retain the intellectual property in the databases generated by its internal use of the software, specifically where it provides the software with its operational data or where it “trains” the artificial intelligence system through interaction with its employees. The desire to maintain the confidentiality of databases resulting from the use of artificial intelligence suggests that they are assimilable to trade secrets. However, whether such databases are considered works under copyright law would have to be determined on a case-by-case basis. A court would have to determine whether the databases are the product of the exercise of the skill and judgment of one or more authors, as required by Canadian case law in order to constitute “works”.9 Although situations where employees “train” an artificial intelligence system are more readily assimilable to an exercise of skill and judgment, databases constituted autonomously by a system could escape copyright protection: “No copyright can subsist in […] data. The copyright must exist in the compilations analysis thereof”.10 In addition to the issues raised above, there is the more prospective issue of inventions created by artificial intelligence systems. So far, such systems have been used to identify research areas with opportunities for innovation. For example, data mining systems are already used to analyze patent texts, ascertain emerging fields of research, and even find “available” conceptual areas for potential patents.11 Artificial intelligence systems may be used in coming years to mechanically draft patent applications, including patent claims covering potentially novel inventions.12 Can artificial intelligence have intellectual property rights, for instance with respect to patents or copyrights?
This is highly doubtful given that current legislation attributes rights to inventors and creators who must be natural persons, at least in Canada and the United States.13 The question then arises: would the intellectual property in the invention be granted to the designers of the artificial intelligence system? Our view is that the law as it stands is ill-adapted in this regard because, historically, in the area of patents, intellectual property was granted to the inventive person and, in the area of copyright, to the person who exercised skill and judgment. We also query whether a patent would be invalidated, or a work enter the public domain, on the ground that a substantial portion was generated by artificial intelligence (which is not the case in this newsletter!). Until then, lawyers should familiarize themselves with the underlying concepts of artificial intelligence and, conversely, IT professionals should familiarize themselves with the concepts of intellectual property. For entrepreneurs who design or use artificial intelligence systems, constant consideration of intellectual property issues is essential to protect their achievements. Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal particularities, particularly the various branches and applications of artificial intelligence that will rapidly appear in all businesses and industries.
1. 573 U.S. _, 134 S. Ct. 2347 (2014).
2. Vehicle Intelligence and Safety v. Mercedes-Benz, 78 F. Supp. 3d 884 (2015), upheld on appeal, Federal Circuit, No. 2015-1411 (U.S.).
3. [1982] 1 C.F. 845 (C.A.F.).
4. Canada (Attorney General) v. Amazon.com, inc., [2012] 2 FCR 459, 2011 FCA 328.
5. For example, in Brazil: Lei do Software No.
9.609 of February 19, 1998; in Europe: Directive 2009/24/EC on the legal protection of computer programs.
6. [1990] 2 SCR 209, 1990 CanLII 119 (SCC).
7. Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983) (U.S.).
8. Keisner, A., Raffo, J., & Wunsch-Vincent, S. (2015). Breakthrough technologies-Robotics, innovation and intellectual property (No. 30). World Intellectual Property Organization, Economics and Statistics Division.
9. CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13, [2004] 1 SCR 339.
10. See, for example: Geophysical Service Incorporated v. Canada-Nova-Scotia Offshore Petroleum Board, 2014 FC 450.
11. See, for example: Lee, S., Yoon, B., & Park, Y. (2009). An approach to discovering new technology opportunities: Keyword-based patent map approach. Technovation, 29(6), 481-497; Abbas, A., Zhang, L., & Khan, S. U. (2014). A literature review on the state-of-the-art in patent analysis. World Patent Information, 37, 3-13.
12. Hattenbach, B., & Glucoft, J. (2015). Patents in an Era of Infinite Monkeys and Artificial Intelligence. Stan. Tech. L. Rev., 19, 32.
13. Supra, note 7.
-
When artificial intelligence is discriminatory
Artificial intelligence has undergone significant developments in the last few years, particularly in respect of what is now known as deep learning.1 This method is an extension of the neural networks that have been used for some years for machine learning. Deep learning, like any other form of machine learning, requires that the artificial intelligence system be exposed to a variety of situations so that it can react to situations similar to its previous experiences. In the context of business, artificial intelligence systems are used, among other things, to serve the needs of customers, either directly or by supporting employees’ interventions. The quality of the services that a business provides is therefore increasingly dependent on the quality of these artificial intelligence systems. However, one must not make the mistake of assuming that such a computer system will automatically perform its tasks flawlessly and in compliance with the values of the business or its customers. For instance, researchers at Carnegie Mellon University recently demonstrated that a system for presenting targeted advertising to Internet users systematically offered less well-paid positions to women than to men.2 In other words, this system behaved in what could be called a sexist way. Although the researchers could not pinpoint the origin of the problem, they were of the view that it was probably a case of loss of control by the advertising placement supplier over its automated system, and they noted the inherent risks of large-scale artificial intelligence systems.
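The learning principle described above, reacting to a new situation by analogy with previously encountered ones, can be illustrated with a toy example. The following sketch is purely illustrative: the data, labels and distance measure are invented for this newsletter and do not come from any real system.

```python
# Toy 1-nearest-neighbour "learner": the system handles a new situation by
# recalling the outcome of the most similar past experience. All data invented.

def nearest_neighbour(history, situation):
    """Return the recorded outcome of the past situation closest to `situation`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(history, key=lambda example: distance(example[0], situation))
    return closest[1]

# Past "experiences": (feature vector, outcome) pairs.
history = [
    ((1.0, 0.0), "approve"),
    ((0.0, 1.0), "refer to agent"),
]

print(nearest_neighbour(history, (0.9, 0.1)))  # closest to the first example
```

The point of the sketch is simply that the system's behaviour is entirely determined by the examples it was given, which is why the quality and representativeness of those examples matter so much.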
Various artificial intelligence systems have had similar failures in the past, demonstrating racist behaviour, even to the point of forcing an operator to suspend access to its system.3 In this respect, in April 2016 the European Union adopted a regulation pertaining to the processing of personal information which, except in some specific cases, prohibits automated decisions based on certain personal data, including “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation […]”.4 Some researchers question how this regulation will apply, particularly where discrimination arises incidentally, without the operator of the artificial intelligence system intending it.5 In Québec, it is reasonable to believe that a business using an artificial intelligence system that acts in a discriminatory manner within the meaning of the Charter of Human Rights and Freedoms would be exposed to legal action even in the absence of a specific regulation such as that of the European Union. Indeed, the person responsible for an item of property such as an artificial intelligence system could incur liability for the harm or damage caused by the autonomous action of that item of property. Furthermore, the failure to put in place reasonable measures to avoid discrimination would most probably be taken into account in the legal analysis of such a situation.
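One practical measure an operator can take against this kind of incidental discrimination is to periodically compare outcome rates across groups in the system's decision log. The sketch below is a hypothetical illustration, not a legal test: the data is invented, and the 80% threshold simply echoes the "four-fifths" rule of thumb used in some U.S. employment contexts.

```python
# Hypothetical monitoring check: compare favourable-outcome rates between two
# groups in a decision log and flag a large disparity for human review.

def selection_rate(decisions, group):
    """Share of favourable outcomes among decisions concerning `group`."""
    outcomes = [d["outcome"] for d in decisions if d["group"] == group]
    return sum(o == "favourable" for o in outcomes) / len(outcomes)

def disparate_impact_ratio(decisions, group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a = selection_rate(decisions, group_a)
    rate_b = selection_rate(decisions, group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented decision log.
log = [
    {"group": "A", "outcome": "favourable"}, {"group": "A", "outcome": "favourable"},
    {"group": "A", "outcome": "favourable"}, {"group": "A", "outcome": "unfavourable"},
    {"group": "B", "outcome": "favourable"}, {"group": "B", "outcome": "unfavourable"},
    {"group": "B", "outcome": "unfavourable"}, {"group": "B", "outcome": "unfavourable"},
]

ratio = disparate_impact_ratio(log, "A", "B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
if ratio < 0.8:
    print("review recommended: outcome rates differ markedly between groups")
```

Such a check does not establish discrimination in the legal sense; it merely surfaces disparities that the operator should investigate and be able to explain.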
Accordingly, special vigilance is required when the operation of an artificial intelligence system relies on data already accumulated within the business, on data from third parties (particularly what is often referred to as big data), or on data that will be fed to the artificial intelligence system by employees of the business or by its users during a “learning” period. All these data sources, which incidentally are subject to obligations under privacy laws, may be biased to varying degrees. The effects of biased sampling are neither new nor restricted to human rights; the phenomenon is well known to statisticians. During World War II, the U.S. Navy asked a mathematician named Abraham Wald to provide statistics on the parts of bomber planes that had been hit most often, for the purpose of determining which areas of these planes should be reinforced. Wald demonstrated that the data on the planes returning from missions was biased, as it did not take into account the planes that were shot down during those missions. The areas damaged on the returning planes did not need to be reinforced; rather, it was the areas that were not hit that had to be. In the context of the operation of a business, an artificial intelligence system fed biased data may thus make erroneous decisions, with disastrous consequences for the business from a human, economic and operational point of view. For instance, if an artificial intelligence system undergoes learning sessions conducted by employees of the business, their behaviour will undoubtedly be reflected in the system’s own subsequent behaviour. This may be apparent in the judgments made by the artificial intelligence system in respect of customer requests, but also directly in its capacity to adequately solve the technical problems submitted to it. There is therefore a risk of perpetuating the problematic behaviour of some employees.
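Wald's observation can be reproduced in a few lines of simulation. The sketch below is purely illustrative, with invented survival probabilities: planes hit in the engines rarely return, so engine hits appear deceptively rare in data collected only from returning planes.

```python
# Toy illustration of the sampling bias described above: statistics computed
# only on "returning" planes point away from the most vulnerable area.
import random

random.seed(0)

AREAS = ["wings", "fuselage", "engines"]
# Invented assumption: engine hits are usually fatal, so such planes rarely return.
SURVIVAL_IF_HIT = {"wings": 0.9, "fuselage": 0.8, "engines": 0.1}

def fly_mission():
    area = random.choice(AREAS)  # each plane takes one hit somewhere
    survived = random.random() < SURVIVAL_IF_HIT[area]
    return area, survived

missions = [fly_mission() for _ in range(10_000)]
returning = [area for area, survived in missions if survived]

# Naive analysis of the surviving sample: engine damage looks rare...
counts = {area: returning.count(area) for area in AREAS}
print(counts)
# ...precisely because planes hit in the engines seldom came back.
```

Although hits were distributed evenly across all three areas, the returning sample shows far fewer engine hits, which is exactly the trap a system trained only on "surviving" data can fall into.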
Researchers at the Machine Intelligence Research Institute have proposed various approaches to minimize the risks and make the machine learning of artificial intelligence systems consistent with their operators’ interests.6 According to these researchers, it would certainly be appropriate to adopt a prudent approach as to the objectives imposed on such systems, in order to avoid their providing extreme or undesirable solutions. Moreover, it would be important to establish informed supervision procedures through which the operator may ascertain that the artificial intelligence system performs, as a whole, in a manner consistent with expectations. From the foregoing, it must be noted that a business wishing to integrate an artificial intelligence system into its operations must take very seriously the implementation phase, during which the system will “learn” what is expected of it. It will be important to have in-depth discussions with the supplier on the operation and performance of its technology and to express as clearly as possible, in a contract, the business’s expectations as to the system to be implemented. The implementation of the artificial intelligence system in the business must be carefully planned, and such implementation must be assigned to trustworthy employees and consultants who possess a high level of competence with respect to the relevant tasks. As for the supplier of the artificial intelligence system, it must be ensured that the data provided to it is not biased, inaccurate or otherwise defective, so that the objectives set out in the contract as to the expected performance of the system may reasonably be reached, thus minimizing the risk of litigation arising from discriminatory or otherwise objectionable behaviour of the artificial intelligence system. Not only can such litigation be expensive, it could also harm the reputation of both the supplier and its customer.
1. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning.
Nature, 521(7553), 436-444.
2. Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on (pp. 598-617). IEEE. See also: Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112.
3. Reese, H. (2016). Top 10 AI failures of 2016. The case of Tay, Microsoft’s system, has been much discussed in the media.
4. Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
5. Goodman, B., & Flaxman, S. (2016, June). EU regulations on algorithmic decision-making and a “right to explanation”. In ICML Workshop on Human Interpretability in Machine Learning (WHI 2016).
6. Taylor, J., Yudkowsky, E., LaVictoire, P., & Critch, A. (2016). Alignment for advanced machine learning systems. Technical Report, MIRI.
-
Artificial intelligence and its legal challenges
Is there a greater challenge than writing a legal article on an emerging technology that does not yet exist in its absolute form? Artificial intelligence, through a broad spectrum of branches and applications, will impact corporate and business integrity, corporate governance, distribution of financial products and services, intellectual property rights, privacy and data protection, employment, civil and contractual liability, and a significant number of other legal fields.
What is artificial intelligence?
Artificial intelligence is “the science and engineering of making intelligent machines, especially intelligent computer programs”.1 Essentially, artificial intelligence technologies aim to allow machines to mimic “cognitive” functions of humans, such as learning and problem solving, so that they can conduct tasks normally performed by humans. In practice, these functions are achieved by accessing and analyzing massive amounts of data (also known as “big data”) via certain algorithms. As set forth in a report published by McKinsey & Company in 2013 on disruptive technologies, “[i]mportant technologies can come in any field or emerge from any scientific discipline, but they share four characteristics: high rate of technological change, broad potential scope of impact, large economic value that could be affected, and substantial potential for disruptive economic impact”.2 Despite the interesting debate over the impact of artificial intelligence on humanity,3 the development of artificial intelligence has been on an accelerated path in recent years and we have witnessed some major breakthroughs. In March 2016, Google’s computer program AlphaGo beat world champion Go player Lee Sedol 4 games to 1 in the ancient Chinese board game. This breakthrough reignited the world’s interest in artificial intelligence.
Technology giants like Google and Microsoft, to name a few, have increased their investments in the research and development of artificial intelligence. This article will discuss some of the applications of artificial intelligence from a legal perspective and certain areas of law that will need to adapt, or be adapted, to the complex challenges brought by current and new developments in artificial intelligence.
Legal challenges
Artificial intelligence and its potential impacts have been compared to those of the Industrial Revolution, a form of transition to new manufacturing processes using new systems and innovative applications and machines.
Health care
Artificial intelligence certainly has a great future in the health care industry. The ability of artificial intelligence applications to analyze massive amounts of data makes them a powerful tool for predicting drug performance and helping patients find the right drug or dosage for their situation. For example, IBM’s Watson Health program “is able to understand and extract key information by looking through millions of pages of scientific medical literature and then visualize relationships between drugs and other potential diseases”.4 Some artificial intelligence features can also help verify whether a patient has taken his or her pills through a smartphone application that captures and analyzes evidence of medication ingestion. In addition to privacy and data protection concerns, the potential legal challenges faced by artificial intelligence applications in the health care industry will include civil and contractual liability. If a patient follows a recommendation made by an artificial intelligence system and it turns out to be the wrong recommendation, who will be held responsible?
It also raises legitimate and complex legal questions, combined with technological concerns, as to the reliability of artificial intelligence programs and software and how employees will deal with such applications in their day-to-day tasks.
Customer services
A number of computer programs have been created to converse with people via audio or text messages. Companies use such programs for their customer services or for entertainment purposes, for example in messaging platforms like Facebook Messenger and Snapchat. Although such programs are not necessarily pure applications of artificial intelligence, some of their features, actual or in development, could be considered artificial intelligence. When such computer programs are used to enter into formal contracts (e.g., placing orders, confirming consent, etc.), it is important to make sure the applicable terms and conditions are communicated to the individual at the end of the line or that a proper disclaimer is duly disclosed. Contract enforcement questions will inevitably be raised as a result of the use of such programs and systems.
Financial industry and fintech
In recent years, many research and development activities have been carried out in the robotics, computer and tech fields in relation to financial services and the fintech industry. The applications of artificial intelligence in the financial industry will span a broad spectrum of branches and programs, including analyzing customers’ investing behaviours or analyzing big data to improve investment strategies and the use of derivatives. Legal challenges associated with artificial intelligence applications in the financial industry could relate, for example, to the consequences of malfunctioning algorithms. The constant relationship between human interventions and artificial intelligence systems, for example in a stock trading platform, will have to be carefully set up to avoid, or at least confine, certain legal risks.
Autonomous vehicles
Autonomous vehicles are also known as “self-driving cars”, although the vehicles currently permitted on public roads are not completely autonomous. In June 2011, the state of Nevada became the first jurisdiction in the world to allow autonomous vehicles to operate on public roads. Under Nevada law, an autonomous vehicle is a motor vehicle that is “enabled with artificial intelligence and technology that allows the vehicle to carry out all the mechanical operations of driving without the active control or continuous monitoring of a natural person”.5 Canada has not yet adopted any law to legalize autonomous cars. Among the significant legal challenges facing autonomous cars are the issues of liability and insurance. When a car drives itself and an accident happens, who should be responsible? (For additional discussion of this subject under Québec law, refer to the Need to Know newsletter, “Autonomous vehicles in Québec: unanswered questions” by Léonie Gagné and Élizabeth Martin-Chartrand.) We also note that interesting arguments will be raised respecting autonomous cars carrying on commercial activities in the transportation industry, such as the shipping and delivery of commercial goods.
Liability regimes
The fundamental nature of artificial intelligence technology is itself a challenge to contractual and extra-contractual liability. When a machine makes, or purports to make, autonomous decisions based on the available data provided by its users and additional data autonomously acquired from its own environment and applications, its performance and the end results could be unpredictable.
In this context, Book Five of the Civil Code of Québec (CCQ) on obligations raises highly interesting and challenging legal questions in view of anticipated artificial intelligence developments.
Article 1457 of the CCQ states that:
Every person has a duty to abide by the rules of conduct incumbent on him, according to the circumstances, usage or law, so as not to cause injury to another. Where he is endowed with reason and fails in this duty, he is liable for any injury he causes to another by such fault and is bound to make reparation for the injury, whether it be bodily, moral or material in nature. He is also bound, in certain cases, to make reparation for injury caused to another by the act, omission or fault of another person or by the act of things in his custody.
Article 1458 of the CCQ further provides that:
Every person has a duty to honour his contractual undertakings. Where he fails in this duty, he is liable for any bodily, moral or material injury he causes to the other contracting party and is bound to make reparation for the injury; neither he nor the other party may in such a case avoid the rules governing contractual liability by opting for rules that would be more favourable to them.
Article 1465 of the CCQ states that:
The custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault.
The issues of foreseeable damages or direct damages, depending on the liability regime, and of the “autonomous act of the thing” will inescapably raise interesting debates in the context of artificial intelligence applications in the near future. In which circumstances could the makers or suppliers of artificial intelligence applications, the end users and the other parties benefiting from such applications be held liable, or not, in connection with the results produced by artificial intelligence applications and the use of such results?
Here again, the link between human interventions - or the absence thereof - and artificial intelligence systems in the global chain of services, products and outcomes provided to a person will play an important role in determining such liability. Among the questions that remain unanswered: could autonomous systems using artificial intelligence applications be held “personally” liable at some point? And how are we going to deal with potential legal loopholes endangering the rights and obligations of all parties interacting with artificial intelligence?

In January 2017, the Committee on Legal Affairs of the European Union (the “EU Committee”) submitted a motion to the European Parliament calling for legislation on issues relating to the rise of robotics. In the EU Committee's recommendations, liability law reform is identified as one of the crucial issues. It recommends that “the future legislative instrument should provide for the application of strict liability as a rule, thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behavior of a robot and the damage suffered by an injured party”.6 The EU Committee also suggests that the European Parliament consider implementing a mandatory insurance scheme and/or a compensation fund to ensure the compensation of victims.

What is next on the artificial intelligence front?

While scientists are developing artificial intelligence faster than ever in many different fields and sciences, some areas of the law may need to be adapted to deal with the associated challenges. It is crucial to be aware of the legal risks and to make informed decisions when considering the development and use of artificial intelligence.
Artificial intelligence will have to learn to listen, to appreciate and understand concepts and ideas, sometimes without any predefined opinions or markers, and be trained to anticipate, just like human beings (even if some could argue that listening and understanding remain difficult tasks for humans themselves). And at some point in time, artificial intelligence development will gain real momentum when two or more artificial intelligence applications are combined to create a superior or ultimate artificial intelligence system. The big question is: who will initiate such a clever combination first, humans or the artificial intelligence applications themselves?

1. John McCarthy, What is artificial intelligence?, Stanford University.
2. Disruptive technologies: Advances that will transform life, business, and the global economy, McKinsey Global Institute, May 2013.
3. Alex Hern, Stephen Hawking: AI will be “either best or worst thing” for humanity, The Guardian.
4. Eugene Borukhovich, How will artificial intelligence change healthcare?, World Economic Forum.
5. Nevada Administrative Code, Chapter 482A - Autonomous Vehicles, NAC 482A.010.
6. Committee on Legal Affairs, Draft report with recommendations to the Commission on Civil Law Rules on Robotics, article 27 (2015/2103(INL)).
-
Autonomous cars will shortly be on the roads in Montréal
Autonomous cars have really taken off in the last few years, particularly due to the interest of both consumers and the businesses that develop and improve them. In this context, on April 5 and 10, 2017, the City of Montréal and the Government of Québec respectively announced significant investments in the electrification and intelligent transportation sector to make the Province of Québec a pioneer of that industry.

Investments from the City of Montréal and the Government of Québec

The City of Montréal intends to invest $3.6M toward the creation of the Institute on Electrification and Intelligent Transportation, established as part of the Transportation Electrification Strategy developed to fight climate change and promote innovation. The creation of the Institute is one of the ten strategic orientations that the Transportation Electrification Strategy puts forward. The City of Montréal explains that [TRANSLATION] “the Institute will rely on the collaboration of partners, including universities and the Innovation District, and on the availability of land near downtown Montréal in order to create a world-class site to develop, experiment and promote innovation and new concepts in the field of electric and intelligent transportation”.1 The mission of the Institute is, among other things, to create a testing corridor and an experimentation area in downtown Montréal for autonomous vehicles.

In addition, an autonomous shuttle project is already under way, involving “Arma” minibuses developed by Navya, a partner of the Keolis Group. These vehicles have level 5 automation, meaning that they are fully automated. The first road test is expected to take place during the International Association of Public Transport’s (UITP) Global Public Transport Summit, which will be held in Montréal from May 15 to 17, 2017.
For its part, the Government of Québec has undertaken to invest $4.4M [TRANSLATION] “to support the electric and intelligent vehicles industrial cluster”.2 This industrial cluster will be set up in spring 2017, and its business plan will be established by an advisory committee created for that purpose. [TRANSLATION] “The cluster will help position Québec among the world leaders in the development of ground transportation and its transition to all-electric and intelligent transportation,” stated Dominique Anglade, Minister of Economy, Science and Innovation and Minister responsible for the Digital Strategy.

Issues related to driving autonomous vehicles in Québec

Intelligent cars were introduced in the Québec market and have earned their place over the last few years. They are referred to as autonomous when they possess at least a “conditional” degree of automation, commonly referred to as level 3 on the scale of automation degrees.3 This level of automation allows the vehicle’s control system to perform the dynamic driving task but requires the driver to remain available to intervene. Under the Québec Automobile Insurance Act,4 the owner of an automobile is liable for the property damage caused by that automobile, with some exceptions. This statute also provides for a no-fault liability regime allowing victims of a car accident to claim an indemnity for the bodily injuries they suffer. As for the Highway Safety Code,5 it governs, among other things, the use of vehicles on public roads, but it does not currently contemplate autonomous vehicles. To our knowledge, no legislative amendment has been proposed to date to fill this legal void before autonomous vehicles appear on Québec roads. In this regard, it is worth noting that the Province of Ontario recently passed Regulation 306/15,6 which outlines who may drive autonomous vehicles on Ontario roads and in what context.
Comments

Many questions remain unanswered as to the content of the projects and initiatives recently announced by the City of Montréal and the Government of Québec. This lack of information creates uncertainty as to the scope of the specific regulations governing the use of autonomous vehicles that the Province of Québec may need to pass. However, Ms. Elsie Lefebvre, Associate Councillor for the City of Montréal, responsible for the Transportation Electrification Strategy, declared that [TRANSLATION] “there will be guidelines and the projects will be supervised to ensure that there is no danger on the road,” without giving details on the scope of such measures.

In the wake of these announcements, many issues deserve to be discussed. What degree of automation will be allowed for autonomous vehicles driven in the Province of Québec? Who will drive these vehicles, and who will insure them? Will special permits be required? Will these vehicles be allowed on public roads or exclusively on closed circuits? In the event of an accident, who will be held liable? What legislative measures will be passed to adequately govern the use of these vehicles? Many questions remain, and few answers are available for the time being. This is something to follow…

1. Transportation Electrification Strategy 2016-2020, published by the City of Montréal.
2. GOVERNMENT OF QUÉBEC, Information feed – “Québec annonce 4,4 millions de dollars pour soutenir la grappe industrielle des véhicules électriques et intelligents”, online.
3. For more details, please consult the Need to Know newsletter, “Autonomous vehicles in Québec: unanswered questions”.
4. Automobile Insurance Act, CQLR, c. A-25.
5. Highway Safety Code, CQLR, c. C-24.2, art. 1.
6. Pilot Project – Automated Vehicles, O Reg 306/15.