Search results for “intelligence artificielle”

Publications

  • Can artificial intelligence be designated as an inventor in a patent application?

    Artificial intelligence (“AI”) is becoming increasingly sophisticated, and the fact that this human invention can now generate its own inventions opens the door to new ways of conceptualizing the notion of “inventor” in patent law. In a recent ruling, however, the Supreme Court of the United Kingdom (“UK Supreme Court”) found that an artificial intelligence system cannot be the author of an invention within the meaning of the applicable regulations under which patents are granted. This position is consistent with that of several courts around the world that have already ruled on the issue. But what of Canada, where the courts have yet to address the matter? In this bulletin, we will look at the decisions handed down by the UK Supreme Court and its counterparts in other countries before considering Canada’s position on the issue. In Thaler (Appellant) v Comptroller-General of Patents, Designs and Trade Marks,1 the UK Supreme Court ruled that “an inventor must be a person”.

    Summary of the decision

    In 2018, Dr. Stephen Thaler filed patent applications for two inventions described as having been generated by an autonomous AI system. The machine in question, DABUS, was therefore designated as the inventor in the applications. Dr. Thaler claimed that, as the owner of DABUS, he was entitled to file patent applications for inventions generated by his machine and that he was therefore not required to name a natural person as the inventor. Both the High Court of Justice and the Court of Appeal dismissed Dr. Thaler’s appeal from the decision of the Intellectual Property Office of the United Kingdom not to proceed with the patent applications, in particular because the designated inventor was not valid under the Patents Act 1977. The UK Supreme Court, the country’s final court of appeal, also dismissed Dr. Thaler’s appeal. In a unanimous decision, it concluded that the law is clear: “an inventor within the meaning of the 1977 Act must be a natural person, and DABUS is not a person at all, let alone a natural person: it is a machine”.2 Although there was no doubt that DABUS had created the inventions in question, it did not follow that the courts could extend the notion of inventor, as defined by law, to include machines.

    An ongoing trend

    The UK Supreme Court is not the first to reject Dr. Thaler’s arguments. Courts in the United States,3 the European Union4 and Australia5 have adopted similar positions, concluding that only a natural person can qualify as an inventor within the meaning of the legislation applicable in their respective jurisdictions. The UK ruling is part of the Artificial Inventor Project’s cross-border attempt to have the DABUS machine—and AI in general—recognized as a tool capable of generating patent rights for the benefit of AI system owners. To date, only South Africa has issued a patent to Dr. Thaler naming DABUS as the inventor.6 This country is the exception that proves the rule: the Companies and Intellectual Property Commission of South Africa does not review applications on their merits, so no reason was given for considering AI as the inventor. More recently, in February 2024, the United States Patent and Trademark Office issued guidance on AI-assisted inventions. The guidance confirms the judicial position, stating in particular that “a natural person must have significantly contributed to each claim in a patent application or patent”.7

    What about Canada?

    In 2020, Dr. Thaler also filed a Canadian patent application for inventions generated by DABUS.8 The Canadian Intellectual Property Office (“CIPO”) issued a notice of non-compliance in 2021, establishing its initial position as follows:

        Because for this application the inventor is a machine and it does not appear possible for a machine to have rights under Canadian law or to transfer those rights to a human, it does not appear this application is compliant with the Patent Act and Rules.9

    However, CIPO specified that it was open to receiving the applicant’s arguments on the issue:

        Responsive to the compliance notice, the applicant may attempt to comply by submitting a statement on behalf of the Artificial Intelligence (AI) machine and identify, in said statement, himself as the legal representative of the machine.10

    To date, CIPO has issued no notice of abandonment and the application remains active, but its status in Canada remains uncertain. It will be interesting to see whether Dr. Thaler will try to sway the Canadian courts to rule in his favour after his many failed attempts in other jurisdictions, most recently before the UK Supreme Court.

    At first glance, the Patent Act11 (the “Act”) does not prevent an AI system from being recognized as the inventor of a patentable invention. In fact, the term “inventor” is not defined in the Act. Furthermore, nowhere is it stated that an applicant must be a “person,” nor is there any indication to that effect in the provisions governing the granting of patents. The Patent Rules12 offer no clarification in that regard either. This matters: the requirement implied by the clear use of the term “person” in the wording of the relevant sections of the UK legislation was a key consideration in the UK Supreme Court’s analysis in Thaler.

    Case law on the subject remains equivocal. According to the Supreme Court of Canada, given that the inventor is the person who took part in conceiving the invention, the question to ask is “[W]ho is responsible for the inventive concept?”13 We note, however, that courts have concluded that a legal person—as opposed to a natural person—cannot be considered an inventor.14 The fact is that the Canadian courts have never had to rule on the specific issue of recognizing AI as an inventor, and until the courts render a decision or the government takes a stance on the matter, the issue will remain unresolved.

    Conclusion

    Given that Canadian law is not clear on whether AI can be recognized as an inventor, now would be a good time for Canadian authorities to clarify the issue. As the UK Supreme Court has suggested, the place of AI in patent law is a pressing societal issue, one that the legislator will ultimately have to settle.15 As such, it is only a matter of time before the Act is amended or CIPO issues a directive. Moreover, in addition to deciding whether AI legally qualifies as an inventor, Canadian authorities will have to determine whether a person can be granted rights to an invention that was actually created by AI. The question of whether an AI system owner can hold a patent on an invention generated by their machine was raised in Thaler. Once again, unlike the UK’s patent act,16 our Patent Act does not close the door to such a possibility: Canadian legislation contains no comprehensive list of the categories of persons to whom a patent may be granted, for instance.
    If we were to rewrite the laws governing intellectual property, given that the main purpose of such laws is to encourage innovation and creativity, perhaps a better approach would be to allow AI system owners to hold patent rights rather than to recognize the AI as an inventor. Patent rights are granted on the basis of an implicit bargain: a high level of protection is provided in exchange for disclosure sufficient to enable a person skilled in the art to reproduce the invention. This ensures that society benefits from such inventions and that inventors are rewarded. Needless to say, it is difficult to argue that machines need such an incentive. Designating AI as an inventor and granting it rights in that respect is therefore at odds with the very purpose of patent protection. That said, an AI system owner who has invested time and energy in designing their system could be justified in claiming such protection for the inventions it generates. In such a case, and given the current state of the law, the legislator would likely have to intervene. Would this proposed change spur innovation in the field of generative AI? We are collectively investing a huge amount of “human” resources in developing increasingly powerful AI systems. Will there come a time when we can no longer consider that human resources were involved in making AI-generated technologies? Should it come to that, giving preference to AI system owners could become counterproductive. In any event, for the time being, a sensible approach would be to emphasize the role that humans play in AI-assisted inventions, making persons the inventors rather than AI. As for inventions conceived entirely by an AI system, trade secret protection may be a more suitable solution. The professionals on our intellectual property team are at your disposal to assist you with patent registration and to provide you with a clearer understanding of the issues involved.

    1. [2023] UKSC 49 [Thaler].
    2. Ibid., para. 56.
    3. See the decision of the United States Court of Appeals for the Federal Circuit in Thaler v Vidal, 43 F. 4th 1207 (2022), application for appeal to the Supreme Court of the United States dismissed.
    4. See the decision of the Boards of Appeal of the European Patent Office in J 0008/20 (Designation of inventor/DABUS) (2021), request to refer questions to the Enlarged Board of Appeal denied.
    5. See the decision of the Full Court of the Federal Court of Australia in Commissioner of Patents v Thaler, [2022] FCAFC 62, application for special leave to appeal to the High Court of Australia denied.
    6. ZA 2021/03242.
    7. Federal Register: Inventorship Guidance for AI-Assisted Inventions.
    8. CA 3137161.
    9. Notice from CIPO dated February 11, 2022, in Canadian patent application 3137161.
    10. Ibid.
    11. R.S.C., 1985, c. P-4.
    12. SOR/2019-251.
    13. Apotex Inc. v. Wellcome Foundation Ltd., 2002 SCC 77, paras. 96–97.
    14. Sarnoff Corp. v. Canada (Attorney General), 2008 FC 712, para. 9.
    15. Thaler, paras. 48–49 and 79.
    16. Ibid., para. 79.

    Read more
  • The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada1 and elsewhere2 are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies also belong to the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT. All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

    The very broad notion of “technology-based documents”

    The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible”.3 The Act lists a few examples of technology-based documents contemplated by applicable laws, including online forms, reports, photos and diagrams—even electrocardiograms! It is therefore easy to see how this notion applies to the user interface forms used on various technological platforms.4 Moreover, technology-based documents are not limited to personal information. They may also pertain to company- or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website.5 A less recent decision also recognized that the contents of electronic agendas were admissible as evidence.6

    Because of the considerable computing resources their algorithms require, various AI technologies are offered as software as a service (SaaS) or platform as a service (PaaS). In most cases, the information entered by user companies is transmitted to supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced client relationship management (CRM) systems and electronic file analysis. It is also the case for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees. In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

    Reciprocal obligations

    The Act sets out specific obligations that apply when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

        26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.

        During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

    This section of the Act therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the platform must inform the supplier of the privacy protection required for the information stored on the platform.
    On the other hand, the supplier is required to put in place “technological means” to ensure security and integrity and to maintain confidentiality, in line with the privacy protection requested by the user. The Act does not specify what technological means must be put in place. They must, however, be reasonable in light of the sensitivity of the technology-based documents involved, as seen from the perspective of someone with expertise in the field. Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? That question must be assessed against the information transmitted by the platform’s user concerning the privacy protection required for the technology-based documents. In any event, the supplier must not conceal the security risks of its IT platform from the user, since doing so would breach the parties’ obligations of disclosure and good faith.

    Are any individuals involved?

    These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.7 Under no circumstances should an AI algorithm suggest that a merchant not enter into a contract with an individual on any such discriminatory basis.8 In addition, anyone who collects personal information by technological means that make it possible to profile certain individuals must notify them beforehand.9

    To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions about how these laws apply to your AI systems, please feel free to contact our professionals.

    1. Bill C-27, Digital Charter Implementation Act, 2022.
    2. In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
    3. Act to establish a legal framework for information technology, CQLR c C-1.1, sec. 3.
    4. Ibid., sec. 71.
    5. Tessier v. Charland, 2023 QCCS 3355.
    6. Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
    7. Charter of Human Rights and Freedoms, sec. 10.
    8. Ibid., sec. 12.
    9. Act respecting the protection of personal information in the private sector, CQLR c P-39.1, sec. 8.1.

    Read more
  • Smart product liability: issues and challenges

    Introduction

    In 2023, where do we stand in terms of liability for smart products? The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applies would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously. These “smart products,” whether intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being. For the purposes of our analysis, the term “smart products” refers to products that have:

    • self-learning capabilities, meaning that they can perform specific tasks without being under a human being’s immediate control;
    • interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings; and
    • autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (optional criterion).2

    These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter “AI”).

    Applying general law rules of liability to smart products

    Although Canada prides itself on being a “world leader in the field of artificial intelligence,”3 it has yet to enact its first AI law. The regulation of smart products in Quebec is still in its infancy: apart from the regulatory framework that applies to autonomous vehicles, no legislation currently in force provides distinct civil liability rules for disputes relating to the marketing and use of smart products. Two factors have a major impact on the liability regime that applies to smart products, namely transparency and the apportionment of liability, and both should be considered in developing a regulatory framework for AI.4 But where does human accountability come in?

    Lack of transparency in AI and product liability

    When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the “lack of transparency” or “black box” problem associated with AI.5 The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products they sell are free from latent defects. Under the rules governing product liability, the burden of proof is reversed: manufacturers are presumed to have knowledge of any defects.8 Manufacturers have two means of absolving themselves from liability:9

    • a manufacturer may claim that the defect is the result of superior force or of a fault on the part of the consumer or a third party; or
    • a manufacturer may argue that, at the time the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge.
    This last means is specifically aimed at the risks inherent in technological innovation.10 That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, whose application depends on the evolution of knowledge about the risks associated with the product.11 The lack of transparency in AI can therefore make it difficult to assign liability.

    Challenges in apportioning liability and human accountability

    There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor. In this context, it is reasonable to expect that products’ smart components will give rise to many similar warranty claims, producing highly complex litigation that could further complicate the apportionment of liability. Moreover, while determining who has physical custody of a smart product seems straightforward, determining who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.” There are two types of custodians of smart products:

    • the person who has the power of control, direction and supervision over the product at the time of its use (frontend custody); and
    • the person who holds these powers over the algorithm that gives the product its autonomy (backend custody).13

    Either of these custodians could be held liable should it contribute to the harm through its own fault. Apportioning liability between the human user and the custodians of the AI algorithm could thus prove difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may be complex.

    Bill C-27: the Canadian bill on artificial intelligence

    Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act. If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade.15 Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now the Canadian government refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.”16 We have reason to believe that this type of AI is the kind that poses a high risk to users’ fundamental rights. In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.”17 Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can give manufacturers of AI technology insight into how a prudent and diligent manufacturer would behave in similar circumstances.
    The Bill’s six main principles are set out below.18

    • Transparency: providing the public with information about mitigation measures, the intended use of the Systems and the “content that it is intended to generate”.
    • Oversight: providing Systems over which human oversight can be exercised.
    • Fairness and equity: bringing to market Systems that can limit the potential for discriminatory outcomes.
    • Safety: proactively assessing Systems to prevent “reasonably foreseeable” harm.
    • Accountability: putting governance measures in place to ensure compliance with legal obligations applicable to Systems.
    • Robustness: ensuring that Systems operate as intended.

    To these we add the principle of risk mitigation, given the legal obligation to “mitigate” the risks associated with the use of Systems.19

    Conclusion

    Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries. That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country. Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they are applied when civil liability issues arise.

    1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th ed., 2020, 1-931.
    2. Tara Qian Sun and Rony Medaglia, “Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare”, Government Information Quarterly, 2019, 36(2), pp. 368–383, online; European Parliament, Civil Law Rules on Robotics, resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at europa.eu.
    3. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    4. European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020), p. 3.
    5. Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account”, Public Administration Review, 2020, online.
    6. Civil Code of Québec, CQLR c. CCQ-1991, art. 1726 et seq.
    7. Consumer Protection Act, CQLR c. P-40.1, s. 38.
    8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801; see also Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
    9. Civil Code of Québec, CQLR c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
    10. Brousseau, para. 100.
    11. Brousseau, para. 102.
    12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
    13. Céline Mangematin, Droit de la responsabilité civile et l’intelligence artificielle, https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l’épreuve de l’intelligence artificielle, p. 4.
    14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
    15. Bill C-27, summary and s. 5(1).
    16. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online: canada.ca.
    17. Bill C-27, s. 39(a).
    18. AIDA Companion document.
    19. Bill C-27, s. 8.
    20. Tortoise Media, The Global AI Index 2023, available at tortoisemedia.com.

    Read more
  • Artificial intelligence in business: managing the risks and reaping the benefits?

    At a time when some are demanding that research on artificial intelligence (AI) and the development of advanced systems be temporarily suspended and others want to close Pandora’s box, it is appropriate to ask what effect chat technology (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support its use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence. Should a company decide to allow its use, it must be able to provide a clear framework for the technology and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir—consider, for example, how quickly chatbots can provide information that is both surprising and interesting—and undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools.

    Potential for error and liability

    The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of “hallucinations,” where the chatbot invents a reality that doesn’t exist. This comes as no surprise: the technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to “create” new content. They lack, for the time being at least, the necessary parameters to channel this “creativity” appropriately. It is easy to imagine scenarios in which an employee uses such technology to create content that their employer then uses for commercial purposes. This poses a clear risk for the company if appropriate control measures are not implemented: the content could be inaccurate in a way that misleads the company’s clients. The risk would be particularly significant if the content were disseminated by being posted on the company’s website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology known to be fallible. The reliability of these tools, especially when they are used without proper guidance, remains one of the most troubling issues.

    Defamation

    Suppose such misinformation concerns a well-known individual or a rival company. From a legal standpoint, a company disseminating this content without parameters to ensure that proper verifications are made could be sued for defamation or misleading advertising. Adopting measures to ensure that any content derived from this technology is thoroughly validated before any commercial use is therefore a must. Many authors have suggested that the results generated by such AI tools should be used as aids to analysis and decision-making rather than as final output. Companies will likely adopt these tools and benefit from them—for competitive purposes, in particular—faster than good practices and regulations can be implemented to govern them.

    Intellectual property issues

    The new chatbots have been developed as extensions to web search engines such as Google and Bing.
    Content generated by chatbots may be based on existing copyrighted web content, and may even reproduce substantial portions of it. This could lead to copyright infringement. Where users limit their use to internal research, the risk is limited, as the law provides for a fair dealing exception in such cases. Infringement may occur, however, if the intention is to distribute the content for commercial purposes. The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if those answers and results are adapted or modified in various ways before they are ultimately used.

    Confidentiality and privacy issues

    The terms and conditions of use for most chatbots do not appear to provide for confidential use. As such, trade secrets and confidential information should never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with the laws and regulations applicable in the jurisdictions where they may be used. Typically, the owners of these products assume no liability in this regard.

    Other issues

    A few other important issues can already be foreseen. Firstly, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Secondly, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help to resolve or mitigate such conflicts, or instead exacerbate them.

    Conclusion

    Chat technologies have great potential, but they also raise serious legal issues. In the short term, it seems unlikely that these tools could actually replace human judgment, which is itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company’s project and the objectives it seeks to achieve.

    Read more
  • SOCAN Decision: Online music distributors must only pay a single royalty fee

    In Society of Composers, Authors and Music Publishers of Canada v. Entertainment Software Association1 (the “SOCAN Decision”), the Supreme Court of Canada ruled on the obligation to pay a royalty for making a work available to the public on a server, where it can later be streamed or downloaded. At the same time, it clarified the applicable standard of review for appeals where administrative bodies and courts share concurrent first instance jurisdiction, and it revisited the purpose of the Copyright Act2 and its interpretation in light of the WIPO Copyright Treaty.3 The Supreme Court also took the opportunity to reiterate the importance of the principle of technological neutrality in the application and interpretation of the Copyright Act. This reminder can also be applied to other artistic mediums and is very timely in a context where the digital visual arts market is experiencing a significant boom with the production and sale of non-fungible tokens (“NFTs”).

    In 2012, Canadian legislators amended the Copyright Act by adopting the Copyright Modernization Act.4 These amendments incorporate Canada’s obligations under the Treaty into Canadian law by harmonizing the legal framework of Canada’s copyright laws with international rules on new and emerging technologies. The CMA introduced three provisions related to “making [a work] available” into the Copyright Act, including section 2.4(1.1). This section applies to original works and clarifies section 3(1)(f), which gives authors the exclusive right to “communicate a work to the public by telecommunication”:

        2.4(1.1) Copyright Act: “For the purposes of this Act, communication of a work or other subject-matter to the public by telecommunication includes making it available to the public by telecommunication in a way that allows a member of the public to have access to it from a place and at a time individually chosen by that member of the public.”

    Before the CMA came into force, the Supreme Court had found that downloading a musical work from the Internet was not a communication by telecommunication within the meaning of section 3(1)(f) of the Copyright Act,5 while streaming was covered by this section.6 Following the coming into force of the CMA, the Copyright Board of Canada (the “Board”) received submissions regarding the application of section 2.4(1.1) of the Copyright Act. The Society of Composers, Authors and Music Publishers of Canada (“SOCAN”) argued, among other things, that section 2.4(1.1) of the Copyright Act required users to pay royalties when a work was published on the Internet, making no distinction between downloading, streaming and cases where works are published but never transmitted. The consequence of SOCAN’s position was that a royalty had to be paid each time a work was made available to the public, whether it was downloaded or streamed. For each download, a reproduction royalty also had to be paid, while for each stream, an additional performance royalty had to be paid.

    Judicial history

    The Board’s Decision7

    The Board accepted SOCAN’s interpretation that making a work available to the public is a “communication”. On this interpretation, two royalties are due when a work is published online: firstly, when the work is made available to the public online, and secondly, when it is streamed or downloaded.
    The Board’s Decision was largely based on its interpretation of Section 8 of the Treaty, according to which the act of making a work available requires separate protection by Member States and constitutes a separately compensable activity.

    The Federal Court of Appeal’s Decision8

    Entertainment Software Association, Apple Inc. and their Canadian subsidiaries (the “Broadcasters”) appealed the Board’s Decision before the Federal Court of Appeal (“FCA”). Applying the reasonableness standard, the FCA overturned the Board’s Decision, holding that a royalty is due only when the work is made available to the public on a server, not when it is later streamed. The FCA also highlighted the uncertainty surrounding the applicable standard of review in appeals following Vavilov9 in cases where administrative bodies and courts share concurrent first instance jurisdiction.

    The SOCAN Decision

    The Supreme Court dismissed SOCAN’s appeal seeking the reinstatement of the Board’s Decision.

    Appellate standards of review

    The Supreme Court recognized that there are rare and exceptional circumstances creating a sixth category of issues to which the standard of correctness applies, namely concurrent first instance jurisdiction between courts and administrative bodies.

    Does section 2.4(1.1) of the Copyright Act entitle the holder of a copyright to the payment of a second royalty for each download or stream once a work has been published on a server and made publicly accessible?

    The copyright interests provided by section 3(1) of the Copyright Act

    The Supreme Court began its analysis by considering the three copyright interests protected by the Copyright Act, namely the rights provided for in section 3(1):

    • to produce or reproduce a work in any material form whatsoever;
    • to perform the work in public; and
    • to publish an unpublished work.

    These three copyright interests are distinct, and a single activity can engage only one of them. For example, the performance of a work is considered impermanent, allowing the author to retain greater control over their work than a reproduction does. Thus, “when an activity allows a user to experience a work for a limited period of time, the author’s performance right is engaged. A reproduction, by contrast, gives a user a durable copy of a work”.10 The Supreme Court also emphasized that an activity that does not engage one of the three copyright interests under section 3(1) of the Copyright Act, or the author’s moral rights, is not protected by the Copyright Act. Accordingly, no royalties are payable in connection with such an activity. The Court reiterated its previous view that downloading a work and streaming a work are distinct protected activities; more precisely, downloading is considered a reproduction, while streaming is considered a performance. It also pointed out that downloading is not a communication under section 3(1)(f) of the Copyright Act, and that making a work available on a server is not a compensable activity distinct from the three copyright interests.11

    Purpose of the Copyright Act and the principle of technological neutrality

    The Supreme Court criticized the Board’s Decision, opining that it violated the principle of technological neutrality, in particular by requiring users to pay additional fees to access online works. The purpose of the CMA was to “ensure that [the Copyright Act] remains technologically neutral”12 and thereby signal Canada’s adherence to the principle of technological neutrality.
    The Supreme Court explained the principle of technological neutrality as follows:

        [63] The principle of technological neutrality holds that, absent parliamentary intent to the contrary, the Copyright Act should not be interpreted in a way that either favours or discriminates against any form of technology: CBC, at para. 66. Distributing functionally equivalent works through old or new technology should engage the same copyright interests: Society of Composers, Authors and Music Publishers of Canada v. Bell Canada, 2012 SCC 36, [2012] 2 S.C.R. 326, at para. 43; CBC, at para. 72. For example, purchasing an album online should engage the same copyright interests, and attract the same quantum of royalties, as purchasing an album in a bricks-and-mortar store since these methods of purchasing the copyrighted works are functionally equivalent. What matters is what the user receives, not how the user receives it: ESA, at paras. 5-6 and 9; Rogers, at para. 29.

    In its summary to the CMA, which precedes the preamble, Parliament signalled its support for technological neutrality by stating that the amendments were intended to “ensure that [the Copyright Act] remains technologically neutral”. According to the Supreme Court, the principle of technological neutrality must be observed in light of the purpose of the Copyright Act, which does not exist solely to protect authors’ rights. Rather, the Act seeks to strike a balance between the rights of users and the rights of authors by facilitating the dissemination of artistic and intellectual works that enrich society and inspire other creators. As a result, “[w]hat matters is what the user receives, not how the user receives it.”13 Thus, whether the reproduction or dissemination of a work takes place online or offline, the same copyright interests apply and attract the same royalties.

    What is the correct interpretation of section 2.4(1.1) of the Copyright Act?

    Section 8 of the Treaty

    The Supreme Court reiterated that international treaties are relevant at the context stage of the statutory interpretation exercise and may be considered even in the absence of textual ambiguity in the statute.14 Moreover, where the text permits, it must be interpreted so as to comply with Canada’s treaty obligations, in accordance with the presumption of conformity; a treaty cannot, however, override clear legislative intent.15 The Court concluded that section 2.4(1.1) of the Copyright Act was intended to implement Canada’s obligations under Section 8 of the Treaty, and that the Treaty must therefore be taken into account in interpreting section 2.4(1.1). Although Section 8 of the Treaty gives authors the right to control the making available of works to the public, it does not create a new and separately compensable “making available” right.
    In such cases, there are no “distinct communications”, that is, no “distinct performances”.16 Section 8 of the Treaty creates only two obligations: “protect on demand transmissions; and give authors the right to control when and how their work is made available for downloading or streaming.”17 Canada is free to choose how these two objectives are implemented in the Copyright Act, whether through the right of distribution, the right of communication to the public, a combination of these rights, or a new right.18 The Supreme Court concluded that the Copyright Act gives effect to the obligations arising from Section 8 of the Treaty through a combination of the performance, reproduction and authorization rights provided for in section 3(1), while respecting the principle of technological neutrality.19

    Which interpretation of section 2.4(1.1) of the Copyright Act should be followed?

    The purpose of section 2.4(1.1) of the Copyright Act is to clarify the communication right in section 3(1)(f) by emphasizing its application to on-demand streaming. A single on-demand stream to a member of the public thus constitutes a “communication to the public” within the meaning of section 3(1)(f).20 Section 2.4(1.1) states that a work is performed as soon as it is made available for on-demand streaming.21 Streaming is therefore merely a continuation of the performance of the work, which begins when the work is made available. Only one royalty should be collected in connection with this right:

        [100] This interpretation does not require treating the act of making the work available as a separate performance from the work’s subsequent transmission as a stream. The work is performed as soon as it is made available for on-demand streaming. At this point, a royalty is payable. If a user later experiences this performance by streaming the work, they are experiencing an already ongoing performance, not starting a new one. No separate royalty is payable at that point. The “act of ‘communication to the public’ in the form of ‘making available’ is completed by merely making a work available for on-demand transmission. If then the work is actually transmitted in that way, it does not mean that two acts are carried out: ‘making available’ and ‘communication to the public’. The entire act thus carried out will be regarded as communication to the public”: Ficsor, at p. 508. In other words, the making available of a stream and a stream by a user are both protected as a single performance — a single communication to the public.

    In summary, the Supreme Court stated and clarified the following in the SOCAN Decision:

    • Section 3(1)(f) of the Copyright Act does not cover the download of a work.
    • Making a work available on a server and streaming the work both engage the same copyright interest, namely the performance of the work. As a result, only one royalty must be paid when a work is uploaded to a server and streamed.
    • This interpretation of section 2.4(1.1) of the Copyright Act is consistent with Canada’s international obligations for copyright protection.
    • In cases of concurrent first instance jurisdiction between courts and administrative bodies, the standard of correctness applies.
    As works of art generated by artificial intelligence multiply and a new market for digital visual art emerges, driven by the public’s attraction to NFT exchanges, the principle of technological neutrality is becoming crucial to understanding the copyrights attached to these new digital objects and the transactions involving them. Fortunately, the issues surrounding digital music and its sharing and streaming have paved the way for rethinking copyright in a digital context. It should also be noted that in decentralized and unregulated digital NFT markets, intellectual property rights currently provide the only framework that is genuinely respected by some market platforms, and this may call for some degree of intervention on the part of the platforms’ owners.

    1. 2022 SCC 30.
    2. R.S.C. 1985, c. C-42 (hereinafter the “Copyright Act”).
    3. Can. T.S. 2014 No. 20 (hereinafter the “Treaty”).
    4. S.C. 2012, c. 20 (hereinafter the “CMA”).
    5. Entertainment Software Association v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 34.
    6. Rogers Communications Inc. v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 35.
    7. Copyright Board of Canada, 2017 CanLII 152886 (hereinafter the “Board’s Decision”).
    8. Federal Court of Appeal, 2020 FCA 100 (hereinafter the “FCA’s Decision”).
    9. Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65.
    10. SOCAN Decision, para. 56.
    11. Ibid., para. 59.
    12. CMA, Preamble.
    13. SOCAN Decision, para. 70, emphasis added by the SCC.
    14. Ibid., paras. 44-45.
    15. Ibid., paras. 46-48.
    16. Ibid., paras. 74-75.
    17. Ibid., para. 88.
    18. Ibid., para. 90.
    19. Ibid., paras. 101 and 108.
    20. Ibid., paras. 91-94.
    21. Ibid., paras. 95 and 99-100.

    Read more
  • Artificial intelligence soon to be regulated in Canada?

    For the time being, there are no laws specifically governing the use of artificial intelligence in Canada. Certainly, the laws on the use of personal information and those that prohibit discrimination still apply, whether the technologies involved are so-called artificial intelligence technologies or conventional ones. However, applying such laws to artificial intelligence raises a number of questions, especially where “artificial neural networks” are concerned: the opacity of the algorithms behind them makes it difficult for those affected to understand the decision-making mechanisms at work, as such networks provide only limited explanations as to their internal operation.

    On November 12, 2020, the Office of the Privacy Commissioner of Canada (OPC) published its recommendations for a regulatory framework for artificial intelligence.1 Pointing out that the use of artificial intelligence involving personal information can have serious privacy implications, the OPC made several recommendations, including the creation of:

    • a requirement for those who develop artificial intelligence systems to ensure that privacy is protected in the systems’ design;
    • a right for individuals to obtain an explanation, in understandable terms, of decisions made about them by an artificial intelligence system, together with assurance that such explanations are based on accurate information and are not discriminatory or biased;
    • a right to contest decisions resulting from automated decision making; and
    • a right for the regulator to require evidence of the above.

    It should be noted that these recommendations include the possibility of imposing financial penalties on companies that fail to abide by this regulatory framework. Moreover, contrary to the approach adopted in the General Data Protection Regulation and in the Government of Quebec’s Bill 64, the rights to explanation and contestation would not be limited to automated decisions, but would also cover cases where an artificial intelligence system assists a human decision-maker.

    These proposals are likely to eventually provide a framework for the operation of artificial intelligence systems already under development. It would thus be prudent for designers to take these recommendations into account and incorporate them into their artificial intelligence system development parameters now. Should these recommendations be adopted, it will also become necessary to consider how to explain the mechanisms behind systems that make or suggest decisions based on artificial intelligence. As mentioned in these recommendations, “while trade secrets may require organizations to be careful with the explanations they provide, some form of meaningful explanation should always be possible without compromising intellectual property.”2 For this reason, it may be crucial to involve lawyers specializing in these matters from the start when designing solutions that use artificial intelligence and personal information. A simple sketch of what such an explanation could look like follows the notes below.

    1. https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/
    2. Ibid.
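    To illustrate that “some form of meaningful explanation” is technically feasible, here is a minimal sketch in Python. It uses the scikit-learn library; the model, feature names and data are entirely hypothetical, and a real system would require audited, calibrated explanations rather than this toy decomposition.

```python
# Minimal sketch of a per-decision explanation for a simple scoring model.
# Feature names, data and labels are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["annual_income_k", "years_employed", "debt_ratio"]

# Hypothetical training data: one row per past applicant; label 1 = approved.
X = np.array([[55, 4, 0.30], [20, 1, 0.70], [75, 10, 0.20], [30, 2, 0.60],
              [90, 8, 0.10], [25, 1, 0.80], [60, 6, 0.25], [35, 3, 0.55]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain_decision(applicant: np.ndarray) -> None:
    """Print the decision and each feature's contribution to it.

    For a linear model, coefficient * feature value is that feature's
    additive contribution to the log-odds of approval, which yields an
    explanation in understandable terms without disclosing the model.
    """
    approved = model.predict(applicant.reshape(1, -1))[0] == 1
    contributions = model.coef_[0] * applicant
    print("Decision:", "approved" if approved else "refused")
    for name, value in sorted(zip(FEATURES, contributions),
                              key=lambda pair: -abs(pair[1])):
        direction = "for" if value > 0 else "against"
        print(f"  {name}: weighed {direction} approval ({value:+.2f})")

explain_decision(np.array([28, 2, 0.65]))
```

    An explanation of this kind discloses only how a given decision was reached, not the full model, which is consistent with the OPC’s observation that meaningful explanations need not compromise intellectual property.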

    Read more
  • Use of patents in artificial intelligence: What does the new CIPO report say?

    Artificial intelligence is one of the areas of technology in which there is currently the most research and development in Canada. To preserve Canada's advantageous position in this area, it is important to consider all forms of intellectual property protection that may apply. Although copyright has historically been the preferred form of intellectual property in computer science, patents are nevertheless very useful in the field of artificial intelligence: the monopoly they grant can be an important incentive to foster innovation. This is why the Canadian Intellectual Property Office (CIPO) felt the need to report on the state of artificial intelligence and patents in Canada. In its report titled Processing Artificial Intelligence: Highlighting the Canadian Patent Landscape, published in October 2020, CIPO presents statistics that clearly demonstrate the upward trend in patent activity by Canadian researchers in the area of artificial intelligence. However, this increase remains much less marked than that observed in the United States and China, the champions in the field. Nevertheless, Canada ranked sixth in the world in the number of patented inventions attributed to Canadian researchers and institutions.

    [Figure: International patent activity in AI between 1998 and 2017. Reproduced with the permission of the Minister of Industry, 2020.]

    [Figure: International patent activity by assignee's country of origin in AI between 1998 and 2017. Reproduced with the permission of the Minister of Industry, 2020.]

    Canadian researchers are particularly specialized in natural language processing, which is not surprising for a bilingual country. Their strengths also lie in knowledge representation and reasoning, and in computer vision and robotics. Generally speaking, the most active areas of application for artificial intelligence in Canada are life sciences and medicine and computer networks, followed by energy management. This seems a natural fit for Canada, a country with well-developed healthcare systems and with telecommunications and energy infrastructure that reflects its vast territory. The one shortcoming is the underrepresentation of women among artificial intelligence patent applicants in Canada. This is an important long-term issue, since maintaining the country's competitiveness will require ensuring that all the best talent is involved in developing artificial intelligence technology in Canada. Whichever of these fields you work in, it may be important to consult a patent agent early in the invention process, particularly to ensure optimal protection of your inventions and to maximize the benefits for Canadian institutions and businesses. Please do not hesitate to contact a member of our team!

    Read more
  • Artificial Intelligence and Telework: Security Measures to be Taken

    Cybersecurity will be a significant issue for businesses in the years to come. With teleworking, cloud computing and the advent of artificial intelligence, large amounts of data are likely to fall prey to hackers attracted by the personal information or trade secrets they contain. From a legal standpoint, businesses have a duty to take reasonable steps to protect the personal information they hold.1 Although the legal framework doesn’t always specify what such reasonable means are in terms of technology, measures appropriate to the personal information in question must nevertheless be applied. These measures must also be assessed in light of the evolving threats to IT systems. Some jurisdictions, such as Europe, go further and require that IT solutions incorporate security measures by design.2 In the United States, with respect to medical information, there are numerous guidelines on the technical means to be adopted to ensure that such information is kept secure.3

    In addition to the personal information they hold, companies may also want to protect their trade secrets, which are often invaluable and whose disclosure to competitors could cause irreparable harm. No technology is immune. In a recent publication,4 the renowned Kaspersky firm warns of the growing risks posed by certain organized hacker groups that may seek to exploit the weaknesses of Linux operating systems, despite their reputation as highly secure. Kaspersky lists a number of known vulnerabilities that can be used for ransom attacks or to gain access to privileged information. The publication echoes the warnings issued by the FBI regarding the discovery of new malware targeting Linux.5

    Measures to be taken to manage the risk

    It is thus important to take appropriate measures to reduce these risks. We recommend in particular that business directors and officers:

    • adopt corporate policies that prevent the installation of unsafe software by users;
    • adopt policies for the regular review and updating of IT security measures;
    • have penetration tests and audits conducted to check system security; and
    • ensure that at least one person in management is responsible for IT security.

    Should an intrusion occur, or as a precautionary measure for businesses that collect and store sensitive personal information, we recommend consulting a lawyer specializing in personal information or trade secrets in order to fully understand the legal issues involved.

    1. See in particular: Act respecting the protection of personal information in the private sector (Quebec), s. 10; Personal Information Protection and Electronic Documents Act (Canada), s. 3.
    2. General Data Protection Regulation, art. 25.
    3. Security Rule under the Health Insurance Portability and Accountability Act, 45 CFR Parts 160 and 164.
    4. https://securelist.com/an-overview-of-targeted-attacks-and-apts-on-linux/98440/
    5. https://www.fbi.gov/news/pressrel/press-releases/nsa-and-fbi-expose-russian-previously-undisclosed-malware-drovorub-in-cybersecurity-advisory

    Read more
  • Improving Cybersecurity with Machine Learning and Artificial Intelligence

    New challenges The appearance of COVID-19 disrupted the operations of many companies. Some had to initiate work from home. Others were forced to quickly set up online services. This accelerated transition has made cybersecurity vitally important, particularly considering the personal information and trade secrets that might be accidentally disclosed. Cybersecurity risks can stem not only from hackers, but also from software configuration errors and negligent users. One of the best strategies for managing cybersecurity risks is to try to find weak spots in the system before an attack occurs, for example by conducting a penetration test. This type of testing has evolved considerably over the past few years, going from targeted trial and error to broader and more systematic approaches. What machine learning can bring to companies Machine learning, and artificial intelligence in general, can simulate human behaviour and can therefore stand in for a hypothetical negligent user or hacker for testing purposes. As a result, penetration tests involving artificial intelligence can be a good deal more effective. One example of relatively simple machine learning is Arachni, open-source software that assesses the security of web applications. It is one of the tools in the Kali Linux distribution, which is well known for penetration testing. Arachni uses a variety of advanced techniques, but it can also be trained to be more effective at discovering attack vectors, that is, vulnerabilities where the applications are most exposed.1 Many other cybersecurity software programs now have similar learning capabilities. Artificial intelligence can go even further. Possible uses for artificial intelligence in the cybersecurity field include:2 a faster reaction time during malware attacks; more effective detection of phishing attempts; and a contextualized understanding of abnormal user behaviour. IBM has recently published a document explaining how its QRadar suite, which incorporates artificial intelligence, can reduce managers’ cybersecurity burden.3 What it means: Human beings remain central to cybersecurity issues. Managers must not only understand those issues, including the ones created by artificial intelligence, but they must also give users clear directives and ensure compliance. When considering which cybersecurity measures to impose on users, it is important for IT managers to be aware of the legal concerns involved: Avoid overly intrusive or constant employee surveillance; it may be wise to consult a lawyer with experience in labour law to ensure that the cybersecurity measures are compatible with applicable laws. It is important to understand the legal ramifications of a data or security breach; some personal information (such as medical data) is more sensitive, and the consequences of a security breach involving this type of information are more severe. It may be useful for those responsible for IT security to talk to a lawyer with experience in personal information laws. Finally, a company’s trade secrets sometimes require greater protective measures than other company information; it may be wise to include IT security measures in the company’s intellectual property strategy.   
https://resources.infosecinstitute.com/web-application-testing-with-arachni/#gref https://www.zdnet.com/article/ai-is-changing-everything-about-cybersecurity-for-better-and-for-worse-heres-what-you-need-to-know/; https://towardsdatascience.com/cyber-security-ai-defined-explained-and-explored-79fd25c10bfa Beyond the Hype, AI in your SOC, published by IBM; see also: https://www.ibm.com/ca-en/marketplace/cognitive-security-analytics/resources
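    To make the "abnormal user behaviour" use case above concrete, here is a minimal sketch of unsupervised anomaly detection: a model is fitted to feature vectors describing ordinary user sessions, then flags sessions that deviate from the learned pattern. This is a generic illustration using scikit-learn; the choice of features, the simulated numbers and the contamination rate are assumptions made for the example, not a description of any vendor's product.

```python
# Minimal sketch: flagging anomalous user sessions with an unsupervised model.
# Illustrative only; feature choices and parameters are assumptions, not any
# vendor's actual implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions: [login hour, MB transferred, failed logins]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around 10 a.m.
    rng.normal(50, 15, 500),   # modest data transfers
    rng.poisson(0.2, 500),     # failed logins are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. session moving 900 MB after 6 failed logins should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 means "anomaly", 1 means "normal"
```

    In practice, such a detector would be one signal among many feeding a security team's review queue, not an automated decision-maker.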

    Read more
  • The 2020-2021 Quebec Budget: New Measures to Promote Innovation!

    Quebec’s Minister of Finance tabled his budget for 2020-2021, titled Your Future, your Budget1, on March 10. Among the new measures introduced by the government, new tax incentives for innovation and the commercialization of Quebec intellectual property were announced. The incentive deduction for the commercialization of innovations: establishing the most competitive tax rate in North America The Quebec government is committed to promoting research and development (R&D) and accelerating the development of innovative products through a highly competitive tax environment. The incentive deduction for the commercialization of innovations (the “IDCI”) will allow businesses to benefit from a combined tax rate of 17% on eligible income. Businesses that have an establishment in Quebec, have incurred R&D expenses there and commercialize intellectual property (“IP”) in Quebec will have their revenues from the sale or rental of goods, services and royalties from such IP taxed in Quebec at an effective rate of 2%. IP covered by the IDCI includes software protected by copyrights, patents, certificates of supplementary protection for drugs and plant breeders’ rights. The IDCI also replaces the deduction for innovative companies as of January 1, 2021. Companies eligible for that deduction will be eligible for the IDCI. The synergy capital tax credit: investing in start-ups The synergy capital tax credit is designed to encourage businesses to invest in innovative SMBs with high growth potential, more commonly known as “start-ups.” A business corporation with a permanent establishment in Quebec that is not primarily engaged in financing or investing in businesses may receive a non-refundable tax credit equal to 30% of the value of its eligible investment, up to a maximum of $750,000 per year, for a total tax credit of $225,000 per year. An eligible investment is an equity participation that does not result in control of an eligible SMB, which the investing corporation deals with at arm’s length. An eligible SMB is a Canadian-controlled private corporation with a permanent establishment in Quebec, with paid-up capital of less than $15 million and gross income of less than $10 million, operating in one of the following sectors: Green technology; Information technology; Life sciences; Innovative manufacturing; Artificial intelligence. Corporations claiming the synergy capital tax credit will have to hold the shares of the eligible SMB for a minimum period of 5 years. Start-ups interested in obtaining the designation of eligible SMB will have to submit an application to Investissement Québec. The investment and innovation tax credit: Modernizing SMBs The investment and innovation tax credit (the “C3i”) is designed to encourage businesses in all sectors to invest in their modernization, particularly in digitization and the use of leading-edge technology. A credit of 10%, 15% or 20%, determined according to the economic vitality index of the area where the investments are made, will be applicable for the acquisition of: Manufacturing and processing equipment; Computer hardware; Management software packages. The C3i will apply to acquisitions made before January 1, 2025, and will be fully refundable for SMBs2. Businesses with total assets and gross income of $100 million or more will also have access to this credit, although it will not be refundable. 
Eligible expenses for the C3i will be amounts exceeding $5,000 for the acquisition of computer hardware or management software packages and amounts exceeding $12,500 for the acquisition of manufacturing and processing equipment. Businesses involved in the distribution of such hardware and software packages would certainly benefit from informing their customers that the acquisition of their products is potentially eligible for the C3i. Businesses located in resource regions and still benefiting from the tax credit to foster the acquisition of manufacturing and processing equipment introduced in 2008 will be able to choose to continue to benefit from this credit or claim the C3i. Conclusion Quebec’s tax landscape is full of opportunities for innovators and creators of leading-edge technology. We should also mention the enhancement of R&D tax credits that promote collaboration between private businesses and research institutions that contribute to the vitality of Quebec’s knowledge economy. If you are a company involved in R&D and IP commercialization in Quebec, the professionals of Lavery’s intellectual property and taxation teams will be able to support you throughout your projects.   Ministère des Finances, Budget 2020-2021, “Your Future, your Budget,” City of Québec, Government of Quebec The credit repayment rate decreases linearly based on an SMB’s total assets and gross income when they exceed $50 million but are less than $100 million.
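    The rates, caps and thresholds described above can be made concrete with a short worked example. The 30% rate, the $750,000 annual investment cap (yielding the $225,000 maximum credit) and the $5,000/$12,500 C3i thresholds simply restate the budget figures quoted in the text; the scenario amounts are hypothetical, and the sketch ignores refundability and rate variations by region.

```python
# Worked example of the credits described above. The rates and thresholds
# restate the budget figures quoted in the text; scenario amounts are
# hypothetical and refundability rules are ignored for simplicity.

def synergy_credit(investment: float) -> float:
    """30% non-refundable credit on at most $750,000 of eligible investment."""
    return 0.30 * min(investment, 750_000)

def c3i_eligible_expense(amount: float, kind: str) -> float:
    """Only the portion above the threshold counts toward the C3i."""
    threshold = 12_500 if kind == "manufacturing" else 5_000
    return max(0.0, amount - threshold)

print(synergy_credit(1_000_000))                      # capped: 225000.0 per year
print(c3i_eligible_expense(20_000, "computer"))       # 15000.0 of a hardware purchase
print(c3i_eligible_expense(40_000, "manufacturing"))  # 27500.0 of equipment
```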

    Read more
  • Development of a legal definition of artificial intelligence: different countries, different approaches

    As our society begins to embrace artificial intelligence, many governments are having to deal with public concern as well as the ongoing push to harness these technologies for the public good. Reflection on these issues is well underway in many countries, with varying results. The Office of the Privacy Commissioner of Canada is currently consulting with experts to make recommendations to Parliament, the purpose being to determine whether specific privacy rules should apply to artificial intelligence. In particular, should Canada adopt a set of rules similar to the European rules (GDPR)? Another question raised in the process is the possibility of adopting measures similar to those proposed in the Algorithmic Accountability Act of 2019 bill introduced in the U.S. Congress, which would give the U.S. Federal Trade Commission the power to force companies to assess risks related to discrimination and data security for AI systems. The Commission d’accès à l’information du Québec is also conducting similar consultations. The Americans, in their approach, appear to also be working on securing their country’s position in the AI market. On August 9, 2019, the National Institute of Standards and Technology (NIST) released a draft government action plan in response to a Presidential Executive Order. Entitled U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools1, the plan calls for the development of robust new technologies to make AI solutions more reliable, and for standardized norms for such technologies. Meanwhile, on November 21, 2019, the Congressional Research Service published an updated version of its report entitled Artificial Intelligence and National Security2. It presents a reflection on the military applications of artificial intelligence and, in particular, on the fact that various combat devices have the capacity to carry out lethal attacks autonomously. It also looks at ways to counter deep fakes, specifically by developing technology to uncover what could become a means of disinformation. The idea is thus to bank on technological progress to thwart misused technology. In Europe, further to consultations completed in May 2019, the Expert Group on Liability and New Technologies published a report for the European Commission entitled Liability for Artificial Intelligence3, which looks into liability laws that apply to such technology. The group points out that, except for matters involving personal information (GDPR) and motor vehicles, the liability laws of member states are not standardized throughout Europe. One of its recommendations is to standardize such liability laws. In its view, comparable risks should be covered by similar liability laws4. Earlier, in January 2019, the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data published its Guidelines on Artificial Intelligence and Data Protection,5 which include recommendations to comply with human rights conventions not only for lawmakers, but for developers, manufacturers and service providers using such technology as well. Even with these different approaches, one fundamental question remains: If special rules are to be adopted, to which technologies should they be applied? This is one of the main questions that the Office of the Privacy Commissioner of Canada is posing. In other words, what is artificial intelligence? The term is not clearly defined from a technological standpoint. 
It covers a multitude of technologies with diverse characteristics and operating modes. This is the first issue that lawmakers will have to address if they wish to develop a legal framework specific to AI. The document of the European expert group mentioned above offers some points to consider that we believe to be relevant. In the group’s view, the following factors should be taken into consideration when qualifying a technology: its complexity; its opacity; its openness to interaction with other technologies; its degree of autonomy; the predictability of its results; the degree to which it is data-driven; and its vulnerability to cyberattacks and related risks. These factors help to identify, on a case-by-case basis, the risks inherent to different technologies. In general, we think it preferable not to adopt a rigid set of standards applying to all technologies. Rather, we suggest identifying legislative goals in terms of characteristics that may be found in many different technologies. For example, some deep learning technologies use personal information, while others require little or no such information. In some cases they make decisions on their own, while in others they only help a human to do so. Finally, some technologies are relatively transparent and others more opaque, due in part to technological or commercial constraints. For developers, it becomes important to properly characterize a technology in order to measure the risks involved in commercializing it. More specifically, it may be important to consult with legal experts from different backgrounds to ensure that the technology in question is not incompatible with applicable laws or soon-to-be-adopted ones in the various jurisdictions where it is to be rolled out.   https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf https://fas.org/sgp/crs/natsec/R45178.pdf https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 Ibid, p. 36. https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8
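    As a practical complement, the expert group's qualification factors listed above can be captured as a simple structured record that a team fills in when triaging a technology for case-by-case risk review. The sketch below is our own illustration: the field names and the 0-to-5 scoring scale are assumptions for the example, not part of any legislative text or of the expert group's report.

```python
# A sketch recording the European expert group's qualification factors as a
# structured record for case-by-case risk triage. Field names and the 0-to-5
# scale are assumptions for illustration, not a legislative standard.
from dataclasses import dataclass, fields

@dataclass
class TechnologyRiskProfile:
    complexity: int            # 0 (simple) to 5 (highly complex)
    opacity: int               # how hard the system is to explain
    interoperability: int      # openness to interaction with other systems
    autonomy: int              # degree of autonomous decision-making
    unpredictability: int      # how variable the results are
    data_dependence: int       # degree to which it is data-driven
    cyber_vulnerability: int   # exposure to attacks and related risks

    def total(self) -> int:
        """A crude aggregate, useful only to compare technologies."""
        return sum(getattr(self, f.name) for f in fields(self))

chatbot = TechnologyRiskProfile(2, 3, 4, 2, 2, 5, 3)
print(chatbot.total())  # higher totals suggest closer legal scrutiny
```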

    Read more
  • Intellectual property in open innovation and co-innovation in the field of artificial intelligence

    Moving far beyond the traditional models of closed innovation, artificial intelligence is progressing by means of collaborations and exchanges, both with the academic world and between companies. In Canada, the United States and Europe, innovation has evolved in ways that have changed the very design of research and development projects. In the world of information technology, closed innovation within one company is generally not sufficient, particularly for technologies using artificial intelligence. Distinguishing between collaborative innovation, open innovation and co-innovation In the field of information technology, collaborative innovation was the first model to replace closed innovation. In this type of innovation, an organization collaborates with various partners to build a value chain that it tries to organize and control. Apple is often cited as an example: it has some control over both the hardware (usually sold under its brand) and the software (third-party software is made available through a virtual store that it controls). The most significant change in recent years has been the arrival of open innovation, in which several companies foster innovation both internally and externally1. Exchanges between companies are generally targeted to meet the needs of each company. Large companies, such as Samsung, enter into partnerships with start-up companies and assist them in their development. Collaborative innovation was therefore a precursor to open innovation. Indeed, the focus in collaborative innovation is on the company creating a new product or developing a new technology by means of the offerings of external parties. Open innovation, on the other hand, has a broader purpose and refers to all the means that can be used by a company to access new technology.2 Co-innovation3, or collective innovation, is the emerging model within the artificial intelligence community. It aims to promote an ecosystem that fosters innovation across several entities. Co-innovation can go hand in hand with respect for intellectual property. It is likely to4: Generate a continuous flow of ideas; Build a broad pool of knowledge, in particular through sharing data and analysis; Foster a culture of innovation through a shared vision and common objectives among partners; and Create tacit convergence strategies between partners that are unique to them and difficult to replicate. This last point is particularly important for those who fear losing the benefits of their efforts. In this context of co-innovation, stakeholders create complex relationships between themselves, and each becomes difficult to replace. This is currently the case in artificial intelligence for some stakeholders who have developed specialized platforms that integrate into other companies' software. For example, as part of the integration of chatbots, the roles of the developers of these platforms, the companies offering conversation analysis tools, marketing firms and user companies all intersect. The implementation of APIs (application programming interfaces) between these players makes it possible to exchange information between them in a fairly fluid way, with each stakeholder playing a more important role in its own field of expertise. Protecting intellectual property in this context Open innovation and co-innovation are not incompatible with the notion of intellectual property. 
Strong intellectual property rights promote open innovation, according to the most recent studies5, as they protect members of the innovation community. Moreover, intellectual property can provide a way for stakeholders to coordinate6 and can even be a reason for a company to innovate in an open way. For example, where patents are possible7, they promote interaction between stakeholders during innovation because they ensure the innovation is protected and also disclosed. When the patent application is published, the other stakeholders obtain a fairly complete description of the technology, while at the same time becoming able to establish the identity of the party that holds the rights to it. The publication of the patent is therefore a form of knowledge exchange that also promotes alliances between stakeholders. Moreover, a potential licence would allow the company to earn revenue from a technology it has developed if it chooses not to exploit it itself. An example of this development in innovation comes from the academic world. Rather than simply licensing their technologies, universities now frequently offer technology transfer services and research partnerships.8 Some measures can be implemented to accelerate the development of artificial intelligence solutions: Adopt a design thinking approach, taking into consideration the fluid nature of innovation. Identify an ecosystem of partners, particularly keeping an eye on patents and published patent applications. Establish a flexible contractual framework for sharing data and allowing its use by partners. File patent applications, where possible. Facilitate the licensing of your technology to your partners. Implementing these measures requires agreements with various partners. It is important for your lawyers and patent agents to be involved in your company’s innovation process. In particular, they must ensure that the contracts to be entered into and the measures to protect intellectual property are in line with the desired approach to innovation.   Chesbrough, Henry William. Open innovation: The new imperative for creating and profiting from technology. Harvard Business Press, 2003. Gallaud, D. (2013). "Collaborative Innovation and Open Innovation." In: Carayannis, E.G. (ed.) Encyclopedia of Creativity, Invention, Innovation and Entrepreneurship. Springer, New York, NY. Lee, Sang M. and Silvana Trimi. "Innovation for creating a smart future." Journal of Innovation & Knowledge 3.1 (2018): 1-8. Ibid. Da Silva, Mário APM. "Open innovation and IPRs: Mutually incompatible or complementary institutions?" Journal of Innovation & Knowledge 4.4 (2019): 248-252. Bortolami, Giovanni. "Risolvendo il paradosso dell'innovazione: come la protezione della proprietà intellettuale promuove l'innovazione aperta." (2018). Algorithms alone are usually not patentable, but several applications of artificial intelligence can be. See: https://www.lavery.ca/en/publications/our-publications/3167-artificial-intelligence-intellectual-property-cross-border-challenges-to-protect-personal-information-and-privacy.html. Nambisan, Satish, Donald Siegel and Martin Kenney. "On open innovation, platforms, and entrepreneurship." Strategic Entrepreneurship Journal 12.3 (2018): 354-368.
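    The kind of API-mediated exchange described above, where a chatbot platform hands conversation data to a third-party analytics partner, can be sketched in a few lines. Everything in this sketch is hypothetical: the endpoint, payload shape and API key are invented for illustration and do not correspond to any real platform or service.

```python
# Hypothetical sketch of an API exchange between a chatbot platform and a
# conversation-analytics partner. The endpoint, payload and key are invented
# for illustration; no real service is described.
import json
import urllib.request

ANALYTICS_ENDPOINT = "https://analytics.example.com/v1/conversations"  # hypothetical

def share_conversation(conversation_id: str, transcript: list[str], api_key: str):
    """Send a transcript to the partner's (fictional) analytics API."""
    payload = json.dumps({
        "conversation_id": conversation_id,
        "transcript": transcript,
    }).encode("utf-8")
    request = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g., sentiment scores from the partner
```

    As the bulletin notes, any such data-sharing interface should be backed by a contractual framework covering ownership and permitted uses of the exchanged data.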

    Read more
  • Artificial intelligence: is your data well protected across borders?

    Cross-border deals are always challenging, but when they relate to AI technologies, such deals additionally involve substantial variations in the rights granted in each jurisdiction. Cross-border deals involving artificial intelligence technologies therefore require a careful analysis of these variations in order to properly assess the risks, but also to seize all available opportunities. Many AI technologies are based on neural networks and rely on large amounts of data to train the networks. The value of these technologies depends mostly on the ability to protect the related intellectual property, which may lie, depending on the case, in the innovative approach of the technology, in the work performed by the AI system itself and in the data required to train the system. Patents Given the pace of developments in artificial intelligence, when a transaction is being negotiated, we are often working with patent applications, well before any patent is granted. That means we often have to assess whether or not these patent applications have any chance of being granted in different countries. Contrary to patent applications on more conventional technologies, with AI technologies one cannot take it for granted that an application that is acceptable in one country will lead to a patent in other countries. If we look at the US, the Alice1 decision of a few years ago had a major impact, making many artificial intelligence applications difficult to patent. Some issued AI-related patents have been declared invalid on the basis of this case. However, it is obvious from the patent applications that are now public that several large companies keep filing patent applications for AI-related technologies, and some of them are being granted. Just across the border up north, in Canada, the situation is more nuanced. A few years ago, the courts said in the Amazon2 decision that computer implementations could be an essential element of a valid patent. We are still waiting for a decision that specifically addresses AI systems. In Europe, Article 52 of the European Patent Convention excludes "programs for computers". However, a patent may be granted if a “technical problem” is resolved by a non-obvious method3. There may thus be some limited potential for patents on artificial intelligence technologies there. The recently updated Guidelines for Examination (addressing patent applications related to AI and machine learning), while warning that expressions such as "support vector machine", "reasoning engine" or "neural network" raise a caution flag because they typically refer to abstract models devoid of technical character, point out that applications of AI and ML can make technical contributions that are patentable, such as: the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats; or the classification of digital images, videos, audio or speech signals based on low-level features, such as edges or pixel attributes for images. In contrast, classifying text documents solely based on their textual content is cited as not being regarded as a technical purpose per se, but a linguistic one (T 1358/09). 
Classifying abstract data records or even "telecommunication network data records" without any indication of a technical use being made of the resulting classification is also given as an example of failing to serve a technical purpose, even if the classification algorithm may be considered to have valuable mathematical properties such as robustness (T 1784/06). In Japan, according to examination guidelines, software-related patents can be granted for inventions “concretely realizing the information processing performed by the software by using hardware resources”4. It may be easier to obtain a patent on an AI system there. As you can appreciate, you may end up with variable results from country to country. Several industry giants, such as Google, Microsoft, IBM and Amazon, keep filing applications for artificial intelligence and AI-related technologies. It remains to be seen how many, and which, will be granted, and ultimately which will be upheld in court. The best strategy for now may be to file applications for novel and non-obvious inventions with a sufficient level of technical detail and examples of concrete applications, in case the case law evolves such that artificial intelligence patents are indeed held valid a few years down the road, at least in some countries. In the United States, the judicial exceptions to patentability remain: mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); certain methods of organizing human activity, namely fundamental economic principles or practices (including hedging, insurance, mitigating risk), commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviours, and business relations) and managing personal behaviour or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and mental processes, that is, concepts performed in the human mind (including an observation, evaluation, judgment or opinion). Take-home message: patent applications on AI technology that identify a technical problem, provide a detailed technical description of specific implementations of the innovation that solve or mitigate that problem, and give examples of possible outcomes have a greater hope of being allowed and of maturing into a stronger patent. Setting the innovation within a specific industry or relating it to specific circumstances, and explaining its advantages over known existing systems and methods, helps overcome subject-matter eligibility issues. Copyright From the copyright standpoint, there are also difficulties, especially for works created by an AI system. Copyright may protect original artificial intelligence software if it consists of “literary works” under the Copyright Act, including: computer source code, interface elements, a set of methods of communication for a database system, a web-based system, an operating system, or a software library. Copyright can cover data in a database if it complies with the definition of a compilation, thereby protecting the collection and assembling of data or other materials. 
There are two main difficulties in recognizing copyright protection for AI creations. The first relates to machine-generated works that do not involve the input of human skill and judgment. The second concerns the concept of an author, which does not specifically exclude machine work but may eliminate it indirectly by way of section 5 of the Copyright Act, which provides that copyright subsists in Canada in an original work where the author was a citizen or resident of a treaty country at the time of creation of the work. Recently, we have seen artificial intelligence systems creating visual art and music. The artistic value of these creations may be disputed. However, the commercial value can be significant, for example if an AI creates the soundtrack to a movie. There are major research projects involving the use of AI technologies to write source code for some specific applications, for example in the gaming industry. Some jurisdictions, such as the US and Canada, do not provide copyright protection to works created by machines. In Canada, recent case law has specifically stated that for a work to be protected under the Copyright Act, a human author is needed5. In the US, some may remember Naruto, the monkey that took a selfie. In the end, there was no copyright in the picture. While we are not sure how this will translate to artificial intelligence at this point, it is difficult to foresee that an AI system would have any such right if a monkey has none. Meanwhile, other countries, such as the UK, New Zealand and Ireland, have legal provisions whereby the programmer of the artificial intelligence technology will likely be the owner of the work created by the computer. These provisions were not specifically made with AI in mind, but it is likely that the broad language used will apply. For example, in the UK, copyright is granted to “the person by whom the arrangements necessary for the creation of the work are undertaken”6. The work created by the system may therefore have no protection at all in Canada, the US and several other jurisdictions, yet be protected by copyright elsewhere, at least until Canada and the US decide to address this issue through legislative changes. Trade secrets Trade secret protection covers any information that is secret and not part of the public domain. For information to remain confidential, a person must take measures such as obtaining undertakings from third parties not to divulge it. There is no time limit on this type of protection, and protection can be sought for machine-generated information. Data privacy Looking at data privacy, some legal scholars have noted that, if construed literally, the European GDPR is difficult to reconcile with some AI technologies. We need only think of the right to erasure and the requirement for lawful processing (or absence of discrimination), which may be difficult to implement7. Neural networks typically learn from datasets created by humans or through human training. These networks therefore often end up with the same biases as the persons who trained them, and sometimes with even more bias, because what neural networks do is find patterns. They may end up finding a pattern and optimizing a situation from a mathematical perspective while exhibiting unacceptable racial or sexist bias, because they do not have “human” values. 
Furthermore, there are challenges when working with smaller datasets, which can allow the “learning” process of the artificial intelligence to be reversed: this may lead to privacy leaks and trigger the right to have specific data removed from the training of the neural network, which is itself technically difficult. One also has to take into account laws and regulations that are specific to some industries, for example HIPAA compliance in the US for health records, which includes privacy rules and technical safeguards8. Laws and regulations must also be reconciled with local policies, such as those set by government agencies, which must be met in order to have access to some government data; for example, to access electronic health records in the Province of Quebec, where the authors are based. One of the challenges, in such cases, is to come up with practical solutions that comply with all applicable laws and regulations. In many cases, one will end up creating parallel systems if the technical requirements are not compatible from one country to another.   Alice Corp. v. CLS Bank International, 573 U.S., 134 S. Ct. 2347 (2014) Canada (Attorney General) v. Amazon.com, Inc., 2011 FCA 328 T 0469/03 (Clipboard formats VI/MICROSOFT) of 24.2.2006, European Patent Office, Boards of Appeal, 24 February 2006. Examination Guidelines for Invention for Specific Fields (Computer-Related Inventions), Japanese Patent Office, April 2005. Geophysical Service Incorporated v Encana Corporation, 2016 ABQB 230; 2017 ABCA 125; 2017 CanLII 80435 (SCC). Copyright, Designs and Patents Act, 1988, c. 48, § 9(3) (U.K.); see also Copyright Act 1994, § 5 (N.Z.); Copyright and Related Rights Act, 2000, Part I, § 2 (Act. No. 28/2000) (Irl.). General Data Protection Regulation, (EU) 2016/679, Art. 9 and 17. Health Insurance Portability and Accountability Act of 1996
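    The claim above, that a model trained on biased data reproduces that bias, can be demonstrated in a few lines: if a protected attribute correlates with historical labels, a classifier will happily exploit it. This is a generic illustration on synthetic data, not an analysis of any real system; all variable names and numbers are invented for the example.

```python
# Minimal demonstration that a model trained on biased labels reproduces the
# bias. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)   # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)     # the legitimate signal

# Biased historical decisions: group 1 was favoured regardless of skill.
label = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 0.75).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The model assigns real weight to the protected attribute.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```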

    Read more
  • Open innovation: A shift to new intellectual property models?

    “The value of an idea lies in the using of it.” These are the words of Thomas Edison, known as one of the most outstanding inventors of the last century. Though he made fervent use of intellectual property protections and filed more than 1,000 patents in his lifetime, Edison understood the importance of using his external contacts to foster innovation and pave the way for his inventions to yield their full potential. In particular, he worked with a network of experts to develop the first direct current electrical circuit, without which his light bulb invention would have been virtually useless. Open innovation refers to a mode of innovation that breaks with the traditional research and development process, which normally takes place in secrecy within a company. A company that innovates openly will entrust part of the R&D process for its products or services, or its research work, to external stakeholders, such as suppliers, customers, universities and even competitors. A more academic definition of open innovation, developed by Professor Henry Chesbrough at UC Berkeley, reads as follows: “Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively.”1 Possible approaches: collaboration vs. competition A company wishing to use open innovation will have to decide which innovation "ecosystem" to join: should it favour membership in a collaborative community or a competitive market? Joining a collaborative community: In this case, intellectual property protections are limited and the focus is more on developing knowledge through sharing. Many IT companies or consortia of universities join together in collaborative groups to develop skills and knowledge with a view to pursuing a common research goal. Joining a competitive market: In this case, intellectual property protections are robust and there is hardly any exchange of information. The ultimate goal is profit maximization. Unlike the collaborative approach, relationships translate into exclusivity agreements, technology sales and licensing. This competitive approach is particularly pervasive in the field of video games, for example. Ownership of intellectual property rights as a requisite condition for open innovation The success of open innovation lies primarily in the notion that sharing knowledge can be profitable. Secondly, a company has to strike a balance between what it can reveal to those involved (suppliers, competitors, specialized third-party companies, the public, etc.) and what it can gain from its relationships with them. It also has to anticipate its partners’ actions in order to control its risks before engaging in information sharing. At first glance, resorting to open innovation may seem an imprudent use of intellectual property assets. Intellectual property rights generally involve a monopoly attributed to the owner, allowing it to prevent third parties from copying the protected technology. However, studies have shown that the imitation of a technology by a competitor can be beneficial.2 Other research has also shown that a market with strong intellectual property protections increases the momentum of technological advances.3 Ownership of intellectual property rights is therefore a prerequisite for any company that innovates or wants to innovate openly. 
Because open innovation methods bring companies to rethink their R&D strategies, they also have to manage their intellectual property portfolios differently. However, a company has to keep in mind that it must properly manage its relations with the various external stakeholders it plans to do business with in order to avoid unwanted distribution of confidential information relating to its intellectual property, and, in turn, profit from this innovation method without giving up its rights. Where does one get innovation? In an open innovation approach, intellectual property can be brought into a company from an external source, or the transfer can occur the other way around. In the first scenario, a company will reduce its control over its research and development process and go elsewhere for intellectual property or expertise that it does not have in-house. In such a case, the product innovation process can be considerably accelerated by the contributions made by external partners, and can result in: The integration of technologies from specialized third-party partners into the product under development; The forging of strategic partnerships; The granting of licences to use a technology belonging to a third-party competitor or supplier to the company; The search for external ideas (research partnerships, consortia, idea competitions, etc.). In the second scenario, a company will make its intellectual property available to stakeholders in its external environment, particularly through licensing agreements with strategic partners or secondary market players. In this case, a company can even go so far as to make one of its technologies public, for example by publishing the code of software under an open-source license, or even assign its intellectual property rights for a technology that it owns, but for which it has no use. Some examples Examples of open innovation success stories are many. For example, Google made its automated learning tool Tensorflow available to the public under an open-source license (Apache 2.0) in 2015. As a result, Google allowed third-party developers to use and modify its technology’s code under the terms of the license while controlling the risk: any interesting discovery made externally could quickly be turned into a product by Google. This strategy, common in the IT field, has made it possible for the market to benefit from interesting technology and Google to position itself as a major player in the field of artificial intelligence. The example of SoftSoap liquid soap illustrates the ingenuity of American entrepreneur Robert Taylor, who developed and marketed his product without strong intellectual property protection by relying on external suppliers. In 1978, Taylor was the first to think of bottling liquid soap. In order for his invention to be feasible, he had to purchase plastic pumps from external manufacturers because his company had no expertise in manufacturing this component. These pumps were indispensable, because they had to be screwed onto the bottles to pump the soap. At that time, the patent on liquid soap had already been filed and Mr. Taylor’s invention could not be patented. To prevent his competitors from copying his invention, Taylor placed a $12 million order with the two sole plastic pump manufacturers. This had the effect of saturating the market for nearly 18 months, giving Mr. Taylor an edge over his competitors who were then unable to compete because of the lack of availability of soap pumps from manufacturers. 
ARM processors are a good example of the use of open innovation in a context of maximizing intellectual property. ARM Ltd. benefited from reduced control over the development and manufacturing process of tech giants such as Samsung and Apple, which are increasingly integrating externally developed technologies into their products. The particularity of ARM processors lies in their marketing method: ARM Ltd. does not sell its processors as finished processors fused in silicon. Rather, it grants licenses to independent manufacturers for them to use the architecture it has developed. This makes ARM Ltd. different from other processor manufacturers and has allowed it to gain a foothold in the IT parts supplier market, offering a highly flexible technology that can be adapted to various needs depending on the type of product (phone, tablet, calculator, etc.) in which the processor will be integrated. Conclusion The use of open innovation can help a company significantly accelerate its research and development process while limiting costs, either by using the intellectual property of others or sharing its own intellectual property. Although there is no magic formula, it is certain that to succeed in an open innovation process, a company must have a clear understanding of the competitors and partners it plans to collaborate with and manage its relations with its partners accordingly, so as to not jeopardize its intellectual property.   Henry Chesbrough, Win Vanhaverbeke and Joel West, Open Innovation: Researching a New Paradigm, Oxford University Press, 2006, p. 1 Silvana Krasteva, "Imperfect Patent Protection and Innovation," Department of Economics, Texas A&M University, December 23, 2012. Jennifer F. Reinganum, "A Dynamic Game of R and D: Patent Protection and Competitive Behavior,” Econometrica, The Econometric Society, Vol. 50, No. 3, May, 1982; Ryo Horii and Tatsuro Iwaisako, “Economic Growth with Imperfect Protection of Intellectual Property Rights,” Discussion Papers In Economics And Business, Graduate School of Economics and Osaka School of International Public Policy (OSIPP), Osaka University, Toyonaka, Osaka 560-0043, Japan.  

    Read more
  • Artificial intelligence at the lawyer’s service: is the dawn of the robot lawyer upon us?

    Over the past few months, our Legal Lab on Artificial Intelligence (L3AI) team has tested a number of legal solutions that incorporate AI to a greater or lesser extent. According to the authors Remus and Levy1, most of these tools will have only a moderate impact on legal practice. Among the solutions tested by the members of our laboratory, certain functionalities in particular drew our attention. Historical context At the start of the 1950s, when Grace Murray Hopper, a pioneer of computer science, attempted to convince her colleagues to create a computer language using English words, she was told that it was impossible for a computer to understand English. However, contrary to the engineers and mathematicians of the time, the business world was more receptive to the idea. Thus was born “Business Language version 0”, or B-0, the forerunner of a number of more modern computer languages and a first (small) step towards the processing of natural language. The fact remains that using IT for legal solutions was a challenge, specifically because of the nature of the information to be processed, which was often presented as unstructured text. In 1986, author Richard Susskind was already addressing the use of artificial intelligence to process legal information2. It was not until recently, however, with advances in the field of natural language processing, that we have seen the creation of software applications with the potential to substantially modify the practice of law. A number of lawyers and notaries are now concerned about the future of their profession. Are we witnessing the creation of the robot lawyer? Currently, the technological solutions available to legal practitioners make it possible to automate certain specific aspects of the multitude of tasks they fulfill in their work. The tools for automating and analyzing documents are relevant examples in that they make it possible, on the one hand, to create legal documents from an existing model and, on the other, to identify certain elements that may be potentially problematic in the submitted documents. However, no solution can claim to completely replace the legal practitioner. Recently, the above-mentioned authors Remus and Levy analyzed and measured the impact of automation on the work of lawyers3. Generally speaking, they predict that only the document review process will be disrupted significantly by automation; that tasks such as managing files, drafting documents, conducting due diligence reviews and performing research and legal analysis will be moderately impacted; and that tasks such as advising clients, negotiating, collating facts, and preparing for and appearing before the court will only be lightly impacted by solutions integrating artificial intelligence4. Documentary analysis tools: Kira, Diligen, Luminance, Contract Companion, LegalSifter, LawGeex, etc. First, among the tools for documentary analysis, there are two types of solutions offered on the market. On the one hand, several use supervised and unsupervised learning techniques to sort and analyze a vast number of documents in order to extract certain specific information from them. This type of tool is particularly useful in the context of a due diligence review. 
It makes it possible in particular to identify the object of a given contract as well as certain clauses, the applicable laws and other set items in order to detect elements of risk determined beforehand by the user. Due diligence tools such as Kira, Diligen and Luminance5 are examples. On the other hand, certain solutions are designed to analyze and review contracts to facilitate negotiations with a third party. This type of tool uses natural language processing (NLP) to identify the specific terms and clauses of a contract. It also identifies the elements missing from a specific type of contract. For example, in a confidentiality agreement, the tool will notify the user if the concept of confidential information is not defined. Moreover, it provides comments regarding the various elements identified in order to provide guidance on negotiating the terms of the contract. These comments and guidelines can be modified based on the attorney’s preferred practices. These solutions are particularly useful when a legal professional is called on to advise a client on whether or not to comply with the terms of a contract put forward by a third party. The Contract Companion6 tool drew our attention because of its ease of use, even though it merely assists a human drafting a contract: rather than identifying problematic clauses and their content, it detects inconsistencies such as a missing definition for a capitalized term. LegalSifter and LawGeex7 are presented as assistants to the negotiation process, proposing solutions that identify discrepancies between a submitted contract and the best practices favoured by the firm or company, thereby helping to outline and resolve any missing or problematic clauses. Legal research tools: InnovationQ, NLPatent, etc. Recently, solutions making it possible to conduct legal research and predict the outcome of court decisions have appeared on the market. Some companies propose simulating a ruling based on factual elements outlined in the context of a given legal system to help with the decision-making process. Accordingly, they make use of NLP to understand the questions asked by attorneys and to research the legislation, case law and doctrinal sources. Some of the solutions even give lawyers an assessment of their chances of winning or losing based on given elements, such as the opposing party’s lawyer, the judge and the level of the court. To do so, the tool uses machine learning. It asks questions about the client’s situation and then analyzes thousands of similar cases upon which the courts have already passed judgment. Lastly, the artificial intelligence system formulates a prediction based on all of the cases analyzed, together with a personalized explanation and a list of relevant case law. With the advent of these tools, authors are anticipating significant changes in the types of lawsuits that will be brought before the courts. They predict that technology will enable the settlement of many disputes and that judges will only have to rule on matters that raise the most complex legal questions and that require concrete legal developments.8 In patent law, the search for existing inventions (“prior art” in the intellectual property lexicon) is facilitated by tools that rely on NLP. Patent applications are usually drafted using a specialized vocabulary. 
These solutions make it possible to identify the target technology, determine the relevant prior art and analyze the related documents so as to identify the disclosed elements. In this regard, the InnovationQ and NLPatent9 tools seem to show interesting potential. Legal drafting tools: Specif.io, etc. Some of the solutions available on the market call on the “creative” potential of artificial intelligence applied to the legal field. Among these, we were interested in a solution capable of drafting a specification in the context of a patent application. The Specif.io10 tool makes it possible to draft a description of the invention using vocabulary suited to the form required for patent applications, starting from claims that briefly outline the scope of the invention. For the time being, this solution is restricted to the field of software. Even if, given the current stage of the product, the lawyer usually has to rework the text significantly, he or she can save a considerable amount of time when composing a first draft. Recommendations In conclusion, artificial intelligence tools are not all progressing in the same manner in every area of the law. A number of tools can already assist attorneys with various repetitive tasks or help them identify errors or potential risks in different documents. However, it is important to bear in mind that such tools are still far from having the human faculty of contextualizing their operations. In those cases where the information is organized and structured, such as in patent applications, a domain in which databases are organized and accessible online for most Western nations, automated tools make it possible not only to assist users in completing their tasks, but even to produce a first draft of a specification based on simple draft claims. However, research and development are still needed before we can truly rely on such solutions. We therefore offer certain key recommendations to attorneys seeking to integrate AI tools into their everyday practice: Be aware of the possibilities and limits of an AI tool: when selecting an AI tool, it is important to run tests on it so as to assess its operation and results. One must set a specific objective and ensure that the tool being tested can help achieve it. Human supervision: to date, any AI tool should still be used under human supervision. This is not only an ethical obligation to ensure the quality of the services rendered, but also a simple rule of caution when using tools that do not have the capacity to contextualize the information submitted to them. Processing of ambiguities: several AI tools allow their operational settings to be varied, so that the processing of ambiguous situations is entrusted to the humans operating them. Data confidentiality: remember that we are bound to uphold the confidentiality of the data being processed! The processing of confidential information by solution providers is a critical challenge to consider. We should not be afraid to ask questions on this subject. Informed employees: artificial intelligence too often frightens employees. As with any technological change, internal training is needed to ensure that the use of such tools complies with the company’s requirements. 
Thus, not only must the proper AI tools be selected, but the proper training must be provided in order to benefit from them.   Remus, D., & Levy, F. (2017). Can Robots Be Lawyers: Computers, Lawyers, and the Practice of Law. Geo. J. Legal Ethics, 30, 501. Susskind, R.E. (1986). Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning. The Modern Law Review, 49(2), 168-194. Supra, note 1. Id. kirasystems.com; diligen.com; luminance.com. https://www.litera.com/products/legal/contract-companion. legalsifter.com; lawgeex.com. Luis Millan, Artificial Intelligence, Canadian Lawyer (April 7, 2017), online: http://www.canadianlawyermag.com/author/sandra-shutt/artificial-intelligence-3585. http://ip.com/solutions/innovationq/; nlpatent.com. specif.io/index.
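    The "missing element" detection described above, such as flagging a confidentiality agreement that never defines "confidential information", can be approximated with very simple pattern matching before any machine learning is involved. The sketch below is a toy illustration of the idea only; it is not how Kira, LawGeex or any other named product works, and the clause patterns are assumptions chosen for the example.

```python
# Toy sketch of "missing element" detection in a confidentiality agreement.
# Pattern-based and deliberately naive; not how any named product works.
import re

REQUIRED_ELEMENTS = {
    "definition of confidential information":
        r"\"?confidential information\"?\s+(means|shall mean|includes)",
    "term of the obligation":
        r"(for a period of|shall survive|term of this agreement)",
    "governing law":
        r"governed by the laws of",
}

def flag_missing(contract_text: str) -> list[str]:
    """Return the required elements that no clause appears to cover."""
    text = contract_text.lower()
    return [name for name, pattern in REQUIRED_ELEMENTS.items()
            if not re.search(pattern, text)]

nda = "The parties shall keep all information secret. Governed by the laws of Quebec."
print(flag_missing(nda))
# ['definition of confidential information', 'term of the obligation']
```

    A production tool would pair this kind of rule base with trained classifiers and, as recommended above, keep a human reviewer in the loop for anything the patterns cannot contextualize.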

    Read more
  • Dr. Robot at your service: artificial intelligence in healthcare

    Artificial intelligence technologies are extremely promising in healthcare.1 By examining, cross-referencing and comparing a phenomenal amount of data,2 AI lets researchers work more quickly at a lower cost3 and facilitates doctors’ decision-making with regard to diagnosis, treatment and choice of prescription. The integration of AI into the healthcare field can take various forms:4 management of electronic medical records (e.g., Omnimed); direct patient care to improve decision-making with regard to diagnosis, prognosis and choice of treatment method; integration in the area of monitoring and medication (e.g., Dispill); the performance of robotic exams and surgeries; indirect patient care functions, such as optimization of workflow and better management of hospital inventory; and home care applications, where portable devices and sensors would be used to assess and predict patient needs. Working to protect innovators, their clients and the public No matter what form AI takes when it is implemented in the healthcare field in Quebec, as with any innovation, we must adapt and work to protect the public, innovators and their clients. What is an innovator? An innovator is a developer, provider or distributor who is involved in the development and marketing of products that use artificial intelligence. 1 - Innovator protection As the future of healthcare lies in an increased integration of AI, innovators must be properly supported and protected, which means that they must be equipped with all of the appropriate tools for protecting their rights, especially intellectual property rights. At the time of product development: they must make sure that they obtain the necessary guarantees and commitments from their partners in order to be able to assert their rights in the event that their technology is appropriated by a third party. At the time of product marketing: having taken care to properly protect their rights, they will avoid prosecution or claims, whether for patent infringement or otherwise. In addition, if the proposed technological solution implies that the data collected, transmitted or analyzed is stored and pooled or shared with other stakeholders, innovators must ensure in particular that patients’ personal information is protected in accordance with the applicable laws and regulations5 and that this data is not used for commercial purposes. Otherwise, an innovator could be the target of a claim by professional organizations or by patient groups and, where certification is required, that certification could be withdrawn by the Ministère de la Santé et des Services sociaux [health and human services ministry]. To learn more about innovator protection, we invite you to read the following article: Artificial intelligence: contractual obligations beyond the buzzwords. 2 - Protection of clients (buyers of artificial intelligence solutions) Artificial intelligence operations have several intrinsic limits, including the prioritization of quantity over quality in the data collected; systematic errors that are reproduced or amplified;6 and human error in the entry of the data relied on by professionals and researchers. Accordingly, innovators must ensure that they properly warn their clients of the limits and risks tied to the use of their products in order to protect themselves against potential claims. They must therefore be objective in the way they represent their products. 
For example, terms like “intelligent database” should be used rather than “diagnostic system.” This word choice avoids both potential civil liability claims and the possibility of being reprimanded for violating the Medical Act by performing functions reserved for doctors.7 The innovator will also be required to enter into a contract with the client that is clear and detailed with regard to the use, access and sharing of data collected in electronic medical records (EMR).

    3 - Protection of the public (Collège des médecins du Québec [“Quebec college of physicians”] regulation)

    All products using AI technology must allow doctors to meet their obligations with regard to creating and maintaining EMRs. These obligations are set out in Section 9 of the Collège des médecins draft regulation, which is expected to come into force in the near future and will make the use of EMRs mandatory. The Collège also intends to specify in this regulation that collected data may not be used [TRANSLATION] “for any purpose other than to monitor and treat patients.”8 The Inquiries Division of the Collège has also recently cautioned its members that the technological tools they use [TRANSLATION] “must be used exclusively within the context of their duties, meaning the administration of care.”9 The current position of the Collège des médecins and the Ministère de la Santé is that the marketing of data contained in EMRs is prohibited even if the data is anonymous. Furthermore, according to Dr. Yves Robert, Secretary of the Collège, even anonymized shared data may not be used to promote a product, such as a less expensive medication in the case of an insurance company, or to influence a doctor’s choice when making a decision.10 The Inquiries Division has also reminded members of their ethical obligation to “disregard any intervention by a third party which could influence the performance of their professional duties to the detriment of their patient, a group of individuals or a population.”11

    The use of Big Data would create more than $300 billion USD in value, with two-thirds of that amount coming from reduced expenditures. Big Data Analytics in Healthcare, BioMed Research International, vol. 2015, Article ID 370194; see also Top health industry issues of 2018, PwC Health Research Institute, p. 29. The American consortium Kaiser Permanente holds around 30 petabytes of data, or 30 million gigabytes, and collects 2 terabytes daily. Mining Electronic Records for Revealing Health Data, New York Times, January 14, 2013. For examples of the integration of AI in healthcare in Canada, see Challenge Ahead: Integrating Robotics, Artificial Intelligence and 3D Printing Technologies into Canada’s Healthcare Systems, October 2017. See in particular s. 20 of the Code of ethics of physicians, CQLR c. M-9, r. 17 and the Act respecting the protection of personal information in the private sector, CQLR c. P-39.1. See When artificial intelligence is discriminatory. Medical Act, CQLR c. M-9, s. 31. Id., s. 9, par. 9. L’accès au dossier médical électronique : exclusivement pour un usage professionnel [“Access to electronic medical records: exclusively for professional use”], Inquiries Division of the Collège des médecins du Québec, February 13, 2018. Marie-Claude Malboeuf, “Dossiers médicaux à vendre” [“Medical records for sale”], La Presse.ca, March 2, 2018.
Accès au dossier médical électronique par les fournisseurs [“Access to electronic medical records by providers”], Inquiries Division of the Collège des médecins du Québec, May 29, 2017, citing section 64 of the Code of ethics of physicians, supra, note 5.

  • Ars Ex Machina: Artificial Intelligence, the artist

    Like human beings, machines are now capable of creating. They can write poetry, compose symphonies and even paint canvasses. They can also take photographs without any human assistance and perform musical pieces with flexibility and expression. On the technical front, such works and performances are convincing to the point of confusing numerous aficionados, who are unable to tell the difference between a work created by humans and one generated by their artificial counterparts. When it comes to artistic merit, however, the quality of artificially generated works is often criticized. For legal experts, the question arises as to whether these works meet all of the criteria for recognition of copyright.

    The matter of copyright in Canada

    Copyright is the exclusive right to produce, reproduce, sell, licence, publish or perform a work or a substantial part thereof, whether it be literary, artistic, dramatic or musical.1 In Canadian law, to be protected by copyright, a work must qualify as an original creation; it must be the product of an author’s exercise of skill and judgment.2 Even though it is difficult to confirm whether a computer can demonstrate skill and judgment, the definition proposed by the Supreme Court identifies two aptitudes that aptly describe the task performed by a computer when it creates works of art. Incidentally, creativity is not part of the concept of originality: a work need be neither novel nor unique.

    Process of artistic creation of an intelligent system

    Any creation made by an artificial intelligence system originates in one or more algorithms, that is, a series of mathematical operations performed in order to obtain a result. Such a work may qualify as new, provided that it does not reproduce an existing work. However, it often has a mechanical quality that hinders its acceptance as a true work of art. Works generated autonomously by a computer are usually less eclectic than those generated by their human counterparts.3 A system can, for instance, after having been exposed to a vast quantity of Mozart’s symphonies and having acquired the necessary musical theory, generate musical works similar to those of Mozart. Even if they may be criticized from the standpoint of artistic innovation, such works meet the originality criteria in the legal sense, since they call on a certain acquired aptitude (skill) and on the evaluation of various possible options (judgment). Composing a poem in the style of Verlaine or a Beethoven-like symphony may therefore ultimately lead, according to these criteria, to the recognition of copyright.

    The performer robot

    The Copyright Act4 also protects performers’ rights in their performances of a given work.5 For a number of years now, computer programs have been able to “play” musical pieces autonomously. Recently, the quality of these programs’ performances has improved considerably, and they demonstrate a subtlety and flexibility that was previously lacking. For example, the Swiss firm ABB developed YuMi, a robot conductor capable of leading an orchestra of human musicians and following the vocalises of a solo tenor.6 Closer to home, the interactive virtual singer Maya Kodes was created by Neweb.tv, a Montréal-based firm. On stage, Maya sings and interacts with a group of back-up musicians and dancers.7
This presents a plethora of advantages for film producers, impresarios, video game creators and advertisers who, thanks to such technological innovations, may henceforth generate original scores after selecting certain parameters, such as genre, ambience and duration, without having to pay to license the rights held by the various copyright holders in such music, such as the composer, the creator and the performer.

    Who holds the copyrights? Elsewhere in the world

    The U.S. Copyright Office has issued a specific set of regulations requiring that copyright holders be human beings.8 Works produced by a machine or another mechanical process that operates in a random or automatic manner are not, according to these regulations, eligible for copyright protection absent creative involvement from a human being.9 Thus, these provisions appear to give rise to a grey zone, since the law has not been adjusted accordingly. Some jurisdictions, such as Australia,10 have established that copyright is closely tied to a human being. Others have created a legal fiction whereby the creator of the computer program is considered the copyright holder. This is true in the United Kingdom, Ireland and New Zealand.11 The latter solution has been criticized on the ground that the proposed legal fiction makes light of the legal complexities of creating a computer program. In fact, the distance between the author of the program and the work ultimately created may prove significant:12 an artificial intelligence program may create something completely unexpected and undesired by the person who developed it.13 The humans behind the artificial intelligence system are not themselves the authors of the underlying message of the literary work or of the melody resulting from the music composed.

    In Canada

    In the United States, one author has proposed that the work produced by a machine be considered a work produced by an employee hired to create or perform works that fall within the scope of the United States Copyright Act.14 The concept of the work made for hire also exists in the Copyright Act in Canada, with certain technical nuances.15 Under this approach, the programmer, or the person who commissions the work of the programmer he or she employs, becomes the holder of the economic rights tied to the work, that is, the rights related to marketing the work. This solution, however, leaves aside moral rights, namely the author’s right to preserve the integrity of the work, the right to claim authorship of the work, even under a pseudonym, and the right to remain anonymous.16 Since these rights cannot be assigned, it is difficult to see how the solution proposed by this author would be viable under Canadian law. In conclusion, many view the introduction of a new legal regime adapted to artistic creations produced by artificial intelligence systems, and to the copyrights therein, as necessary. For the time being, since the matter has yet to come before the courts, the foreseeable solutions fall into two camps. On the one hand, we can recognize the copyright of the person who created the artificial intelligence that produced the work. On the other hand, if the copyright can be tied to neither the programmer nor the machine, there is a risk that the work will fall into the public domain and thereby lose its economic value.
One thing is certain: the desired legal regime must consider the rights of the programmers behind the system with respect to the work ultimately produced and the level of control that such individuals may have over the content subsequently generated. Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries.

    The Copyright Act, R.S.C. 1985, c. C-42, ss. 3, 15, 18. The Supreme Court defines skill as “the use of one’s knowledge, developed aptitude or practised ability in producing the work” and judgment as “one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work”: CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13. Bridy, A. (2012). Coding creativity: copyright and the artificially intelligent author. Stan. Tech. L. Rev., 1. R.S.C. 1985, c. C-42. Id., s. 15. YuMi the robot conducts Verdi with Italian orchestra, Reuters, September 13, 2017, https://www.reuters.com/article/us-italy-concert-robot/yumi-the-robot-conducts-verdi-with-italian-orchestra-idUSKCN1BO0V2. Kirstin Falcao, Montreal developers create 1st interactive holographic pop star, CBC News, November 2, 2016, http://www.cbc.ca/news/canada/montreal/maya-kodes-virtual-singer-1.3833750. U.S. Copyright Office, Compendium of U.S. Copyright Office Practices, s. 306 (3d ed. 2017). Id., s. 313.2. Acohs Pty Ltd v Ucorp Pty Ltd (2012) FCAFC 16. Copyright, Designs and Patents Act, 1988, c. 48, s. 9(3) (U.K.); Copyright Act 1994, s. 5 (N.Z.); Copyright and Related Rights Act, 2000, Part I, s. 2 (Act No. 28/2000) (Ireland). Supra, note 3. Wagner, J. (2017). Rise of the Artificial Intelligence Author, The Advocate, 75, 527. Supra, note 3. Section 13(3) of the Copyright Act establishes this specific legal regime and distinguishes between an employment contract and a contract related to a journalistic contribution. Supra, note 4, s. 14.1(1).

  • Artificial Intelligence and blockchains are vulnerable to cyberattacks

    Technologies based on blockchains and AI imply a considerable change for our society. Since the security of the data exchanged is vital, companies must begin adopting a long-term approach right now. Many businesses develop services based on blockchains, in particular in the financial services sector. Cryptocurrencies, one example of blockchain use, are transforming the way some monetary transactions are made, far from the oversight of financial institutions and governments. As for AI, businesses sometimes choose technological platforms involving data sharing in order to accelerate the development of their AI tools.

    The quantum revolution’s impact on cybersecurity

    In 2016, IBM made a computer for testing several quantum algorithms available to researchers.1 Quantum computers work in a radically different way from traditional computers. Within a decade or so, they are expected to perform calculations that exceed the capacity of today’s most powerful computers, because they use the quantum properties of matter, in particular the superposition of states, to process linked data sets simultaneously. Shor’s algorithm exploits these quantum properties to let a quantum computer factor a whole number very quickly, much more quickly than any traditional computer. This mathematical operation is the key to deciphering information encrypted by several commonplace computing methods. The technology, which physicists have long been studying, now constitutes a major risk for the security of encrypted data: data meant to remain safe and confidential is vulnerable to being misappropriated for unauthorized uses.

    Are blockchain encryption methods sufficiently secure?

    Several of the encryption methods available today will need to be strengthened to preserve data security. The following are but a few examples of vulnerability to quantum computers.

    SHA-2 and SHA-3 methods: The US National Institute of Standards and Technology (NIST) has issued recommendations on the security of various encryption methods.2 The SHA-2 and SHA-3 methods, namely the algorithms that ensure the integrity of blockchains by producing a “hash” of previous blocks, need to be strengthened to maintain current security levels.

    Signature methods used by Bitcoin and other cryptocurrencies: Elliptic curve cryptography is a set of techniques that use properties of the mathematical functions describing elliptic curves to encrypt data. According to the NIST, elliptic curve cryptography will become ineffective. Worryingly, this is the method used to sign cryptocurrency transactions, including those of the famous Bitcoin. Recent studies indicate that this method is highly vulnerable to attack by quantum computers, which, in a few years’ time, could crack these codes in under 10 minutes.3

    RSA-type cryptographic algorithms: RSA-type cryptographic algorithms,4 which are widely used to forward data over the Internet, are particularly vulnerable to quantum computers. This could have an impact in particular when large quantities of data need to be exchanged among several computers, for example to feed AI systems.

    More secure cryptographic algorithms: The NIST has identified some more secure approaches. An algorithm developed by Robert McEliece, mathematician and professor at Caltech, seems for now to be able to resist such attacks.5 The two sketches below illustrate, first, how hashing links the blocks of a blockchain and, second, why fast factoring threatens RSA.
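    To make the hashing mechanism concrete, here is a minimal Python sketch of a chain of blocks linked by SHA-256 hashes. The block structure and payloads are simplified placeholders for illustration, not the format of any real blockchain:

```python
import hashlib
import json

def block_hash(block):
    # Serialize the block deterministically, then hash it with SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev_hash = "0" * 64  # placeholder hash for the first (genesis) block
for payload in ["tx1", "tx2", "tx3"]:
    block = {"prev_hash": prev_hash, "payload": payload}
    chain.append(block)
    prev_hash = block_hash(block)

# Altering any earlier block changes its hash and breaks every later link:
# this is how hashing protects the integrity of the chain.
chain[0]["payload"] = "tampered"
assert block_hash(chain[0]) != chain[1]["prev_hash"]
print("Tampering detected: block 0 no longer matches block 1's prev_hash")
```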
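    As for RSA, the toy example below, a deliberately insecure sketch with tiny textbook primes, shows why the ability to factor the public modulus quickly, which Shor’s algorithm would provide, amounts to recovering the private key:

```python
# Toy RSA with tiny primes -- for illustration only, never for real use.
p, q = 61, 53            # secret primes; real keys use primes of 1024+ bits
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120; computable only by whoever can factor n
d = pow(e, -1, phi)      # private exponent (2753); requires Python 3.8+

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (n, e)
assert pow(ciphertext, d, n) == message  # only the holder of d can decrypt

# An attacker who factors n = p * q recovers phi, then d, by the same
# computation as above. Shor's algorithm would make that factoring step
# fast on a sufficiently large quantum computer, which is the threat
# the NIST recommendations address.
```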
For the longer term, we can hope that quantum technology itself will make it possible to generate secure keys.

    Legal and business implications of data protection

    Companies are required by law to protect the personal and confidential data entrusted to them by their customers, and must therefore take suitable measures to protect this valuable data. Companies choosing an AI or blockchain technology should accordingly take into account that, once adopted, the technology will be used for several years and may need to survive the arrival of quantum computers. What is more, security flaws will have to be fixed in technologies that are not under the control of government authorities or of a single company. Unlike with more traditional technologies, one cannot simply install an update on a single server. In some cases, it will be necessary to reconsider the very structure of a decentralized technology such as a blockchain.

    Choosing a technology that can evolve

    The key will therefore be to choose a technology that enables businesses to meet their security obligations in a post-quantum world, or at least an architecture whose encryption algorithms can be updated in a timely manner. It will also be necessary to establish a dialogue among computer scientists, mathematicians, physicists and… lawyers! Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal specifics, particularly the various branches and applications of artificial intelligence which will rapidly be appearing in all companies and industries.

    Press Release: IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation: https://www-03.ibm.com/press/us/en/pressrelease/49661.wss; see also: Linke, Norbert M., et al. “Experimental comparison of two quantum computing architectures.” Proceedings of the National Academy of Sciences (2017): 201618020. Chen, Lily, et al. Report on post-quantum cryptography. US Department of Commerce, National Institute of Standards and Technology, 2016. Aggarwal, Divesh, et al. “Quantum attacks on Bitcoin, and how to protect against them.” arXiv preprint arXiv:1710.10377 (2017). This acronym comes from Rivest, Shamir and Adleman, the three developers of this kind of encryption. Supra, note 2; see also Dinh, Hang, Cristopher Moore, and Alexander Russell. “McEliece and Niederreiter cryptosystems that resist quantum Fourier sampling attacks.” Annual Cryptology Conference. Springer, Berlin, Heidelberg, 2011.

  • Canada’s Standing Senate Committee on Transport and Communications issues report on the driving of smart vehicles

    Introduction

    In January 2018, the Senate’s Standing Committee on Transport and Communications (hereinafter the “Committee”), chaired by the Hon. David Tkachuk, published a report on the impact of automated vehicles in the country, prepared at the behest of the Minister of Transport of Canada. The first generation of these vehicles is already travelling on our roads, and their increased use will probably have far-reaching social consequences, such as a reduction in the number of accidents1 and greater transport freedom for the elderly, but also, potentially, the loss of jobs in the country. The Committee issued sixteen (16) recommendations relating to smart vehicles,2 in particular on these vehicles’ cybersecurity and insurance coverage, urging the government to act now, since “technology will overtake regulations.” Automobile manufacturers seem to hold the same opinion. Shawn Stephens, Planning and Strategy Director at BMW Canada, says that “the technology is ready. The manufacturers are ready. It is the laws and the government that are slowing us down [our translation].”3

    Connected vehicles and automated vehicles

    Connected vehicles are described by the Committee as relying on two kinds of technologies: those designed for “infotainment” and those enabling communication between vehicles. Connected vehicles can therefore receive information on approaching vehicles, for example on their speed, on relevant routes, and on the services available along the selected route. For their part, automated vehicles make different degrees of autonomous driving possible by relying on various technologies. The automation of these vehicles is classified from level 0 to level 5, that is, from no automation at all to complete automation, the latter referring to a vehicle that is entirely self-driven, without any possibility of human input.4 The “smart vehicle” designation encompasses both categories.

    Cybersecurity

    The Committee recommends that a best practices guide be adopted with regard to cybersecurity. Indeed, the threat of cyberattacks targeting smart vehicles has been worrying the automobile industry for some years, to such an extent that the Automotive Information Sharing and Analysis Center was established in July 2015 to allow manufacturers to share their knowledge and cooperate on this topic. A cyberattack against a smart vehicle could target the integrity of its electronic data, and therefore the safety of its passengers, as well as the drivers’ personal information obtained from the vehicle. Accordingly, the Committee also recommended drafting a bill aimed at protecting the personal data of smart vehicle users.

    Insurance

    Considering the real threat of cyberattacks targeting smart vehicles, manufacturers must take out insurance policies covering cyberattacks. On another note, KPMG estimates that, as a result of the use of automated vehicles, accidents will drop by 35% to 40%, while repair costs will increase by 25% to 30%.5 One can therefore reasonably expect an impact on drivers’ insurance premiums. Moreover, liability in an accident involving an automated vehicle may be transferred from the vehicle’s driver to its manufacturer by means of amendments to the Automobile Insurance Act,6 or through new laws specifically governing the driving of automated vehicles. These changes could have significant consequences for the various laws regulating automobile insurance in the country.7
The Committee therefore recommended that Transport Canada monitor the impact of connected and automated vehicles on the automobile insurance industry.

    Some initiatives and challenges

    The Motor Vehicle Test Centre in Blainville is currently working on establishing whether or not smart vehicles comply with current Canadian safety standards. We have also learned from the Committee’s report that the Canadian Regulatory Cooperation Council is currently working with the United States on the various issues relating to connected and automated vehicles. Despite the numerous initiatives on record, so far only Ontario has introduced rules specifically regulating the use of automated vehicles on the province’s roads.8 Québec will have to go down this path in order to fill the current legal vacuum.9

    Conclusion

    As discussed in our bulletin of February 2017,10 the growing number of automated vehicles on the roads of Québec cannot be taken lightly. A legislative framework specifically providing for this kind of vehicle is essential when we consider that, by some projections, a quarter of all vehicles worldwide will qualify as smart by 2035.11 Connected vehicles are already travelling on the roads of Québec, as are automated vehicles of various levels. It is therefore vital for all levels of government to catch up with these technologies. The regulation of smart vehicle driving is a hot topic in the development of artificial intelligence and needs to be followed closely.

    It is estimated that up to 94% of road accidents are caused by human error; see Standing Senate Committee on Transport and Communications, “Driving Change: Technology and the Future of the Automated Vehicle,” Ottawa, January 2018, page 29. Standing Senate Committee on Transport and Communications, “Driving Change: Technology and the Future of the Automated Vehicle,” Ottawa, January 2018. MCKENNA, Alain, La Presse, « Véhicules autonomes : « Ce sont les lois et le gouvernement qui nous freinent » » [“Autonomous vehicles: ‘It is the laws and the government that are slowing us down’”], Montréal, February 1, 2018, online: http://auto.lapresse.ca/technologies/201802/01/01-5152247-vehicules-autonomes-ce-sont-les-lois-et-le-gouvernement-qui-nous-freinent.php. See GAGNÉ, Léonie, Need to Know, Bulletin Lavery, de Billy, “Autonomous vehicles in Québec: unanswered questions,” Montréal, February 2017. Standing Senate Committee on Transport and Communications, “Driving Change: Technology and the Future of the Automated Vehicle,” Ottawa, January 2018, page 65. Automobile Insurance Act of Québec, CQLR c. A-25. Automobile insurance falls under provincial jurisdiction. Pilot Project - Automated Vehicles, O. Reg. 306/15. The Government of Québec is currently studying Bill 165, which aims, among other things, to amend the Highway Safety Code and regulate the driving of autonomous vehicles. Supra, note 4. Boston Consulting Group (2016), Autonomous Vehicle Adoption Study.

  • Artificial Intelligence, Implementation and Human Resources

    In this era of a new industrial revolution, dubbed “Industry 4.0,” businesses are facing sizable technological challenges. Some refer to smart plants or the industry of the future. This revolution is characterized by the advent of new technology that allows for the “smart” automation of human activity. The aim of this technological revolution is to increase productivity, efficiency and flexibility; in some cases, it means a radical change to the corporate value chain. Artificial intelligence is an integral part of this new era. A field dating back to the mid-1950s, artificial intelligence is typically defined as the simulation of human intelligence by machines. It aims to substitute, supplement and amplify practically all tasks currently performed by humans,1 becoming in effect a serious competitor to human beings in the job market. Over the past few years, the advent of deep learning and other advanced machine learning techniques has given rise to several industrial applications that have the potential to revolutionize how businesses organize the workplace. It is believed that artificial intelligence could drive a 14% increase in global GDP by 2030, a potential contribution of $15.7 trillion to the global economy annually.2 The productivity gains created in the workplace by artificial intelligence alone could represent half that amount. It goes without saying that the job market will have to adjust. A study published a few years ago predicted that within twenty years, close to 50% of jobs in the United States could be completely or partially automated.3 In 2016, an OECD study concluded that, on average, 9% of jobs in 21 OECD countries were at high risk of automation,4 and some experts even go so far as to claim that 85% of the jobs that workers will be doing in 2030 have not been invented yet!5 At the very least, this data shows that while human beings are still indispensable, the job market will be strongly influenced by artificial intelligence. Whether through the transformation of tasks, the disappearance of jobs or the creation of new trades, disruptions in the workplace are to be expected, and businesses will have to deal with them. The arrival of artificial intelligence thus appears to be inevitable. In some cases, this technology will confer a significant competitive advantage; innovative businesses will stand out and thrive. However, in addition to the major investments that will be required, the implementation of this new technological tool will require time, effort and changes to work methods.

    Implementation

    As an entrepreneur, you have no choice but to adapt to this new reality. Not only will your employees be affected by the organizational change, they will also have to be involved to ensure its success. During the implementation phase, you may discover that new skills are required to adjust to your new technology. It is also very likely that some of your employees and managers will be averse to the change. This is a normal reaction, since as humans we tend to respond negatively to any sort of change. A change in the work environment can create a sense of insecurity, requiring employees to adopt new behaviours or work methods6 and dragging them out of their comfort zone. An employee’s fears can also be the result of misperceptions. Potential impacts must therefore be carefully considered before your new technology arrives. The failure rate for organizational change is over 70%.
It is believed that the high failure rate for the adoption of new technology stems from the fact that the human aspect is often overlooked in favour of the technological or operational benefits of the implementation.7 Failure can lead to higher costs for introducing the new tool, productivity losses or the abandonment of the initiative. Advance planning is especially important when implementing artificial intelligence, in order to identify the challenges related to its integration in your business. It is important that smart technology be implemented by skilled employees who share the business’s values, to ensure the new system does not perpetuate unwanted behaviours. To help with your planning, here are a few questions to stimulate discussion:

    Implementation
    - What is the objective of the new technology, and what are its advantages and disadvantages?
    - Who will be in charge of the project?
    - What skills will be needed to implement the technology in the organization?
    - Which employees will be responsible for implementing the technology? What information and training should they be given?

    Work organization
    - What duties will be replaced or affected by the new technology, and how will they be affected?
    - What new tasks will be created once the new technology is set up?
    - Will positions be abolished, staff transferred or jobs lost?
    - What terms of the collective agreement will have to be considered with respect to transfers, layoffs and technological change?
    - What notice and severance should be anticipated if there are job losses?
    - What positions will have to be created once the technology is set up? What new skills will they require?
    - How and when will new positions be filled?
    - How will the users of the technology be trained?

    Communication
    - Who will be in charge of communication?
    - Should communication tools and a communication plan be set up? In what form will such communication be made, and how often?
    - When and how will employees and managers be informed of the arrival of the new technology, its purpose, its advantages and its impacts on the organization?
    - When and how will job losses, labour transfers and new positions be announced?
    - What tools will be used to reassure employees and eliminate misperceptions?

    Mobilization
    - What actions can be taken to engage employees and managers in the project?
    - What are the likely reactions to the change, and how can they be lessened or eliminated?
    - What tools can managers be given to help them oversee the change?

    This list is not meant to be exhaustive, but it can be a starting point for considering the potential impacts of new smart technology on your employees. Bear in mind that good communication with your employees and their commitment could make the difference between the success and failure of the technological change.

    Lavery Legal Lab on Artificial Intelligence (L3IA)

    Lavery has set up the Lavery Legal Lab on Artificial Intelligence (L3IA) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries.

    Spyros Makridakis, The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms, School of Economic Sciences and Business, Neapolis University Paphos, 2017. Sizing the prize, PwC, 2017. Carl Benedikt Frey and Michael A.
Osborne, The future of employment: How susceptible are jobs to computerisation?, Oxford University, 2013. Melanie Arntz, Terry Gregory and Ulrich Zierahn, The Risk of Automation for Jobs in OECD Countries, OECD Social, Employment and Migration Working Papers, 2016. Emerging Technologies’ Impact on Society & Work in 2030, Institute for the Future and Dell Technologies, 2017. Simon L. Dolan, Éric Gosselin and Jules Carrière, Psychologie du travail et comportement organisationnel [Work psychology and organizational behaviour], 4th ed., Gaétan Morin Éditeur, 2012. Yves-Chantal Gagnon, Les trois leviers stratégiques de la réussite du changement technologique [The three strategic levers of successful technological change], Télescope - Revue d’analyse comparée en administration publique, École nationale d’administration publique du Québec, fall 2008.

  • Intellectual Property and Artificial Intelligence

    Although artificial intelligence has been evolving constantly in the past few years, the law sometimes has difficulty keeping pace with such developments. Intellectual property issues are especially important: businesses investing in these technologies must be sure that they can take full advantage of the commercial benefits they provide. This newsletter provides an overview of the various forms of intellectual property applicable to artificial intelligence. The initial instinct of many entrepreneurs would be to patent their artificial intelligence processes. However, although in some instances such a course of action would be an effective method of protection, obtaining a patent is not necessarily the most appropriate form of protection for artificial intelligence, or for software technologies generally. Since the major Supreme Court of the United States decision in Alice Corp. v. CLS Bank International,1 it is now acknowledged that merely applying abstract concepts in the IT environment will not suffice to transform such concepts into patentable items. For instance, in light of that decision, a patent that had been issued for an expert system (a form of artificial intelligence) was subsequently invalidated by a U.S. court.2 In Canada, case law has yet to deal specifically with artificial intelligence systems. However, the main principles laid down by the Federal Court of Appeal in Schlumberger Canada Ltd. v. Canada (Commissioner of Patents)3 remain relevant to the topic. In that case, it was decided that a method of collecting, recording and analyzing data using a computer programmed on the basis of a mathematical formula was not patentable. In a more recent ruling, however, the same Court held that a data-processing technique may be patentable if it “[…] is not the whole invention but only one of a number of essential elements in a novel combination.”4 The unpatentability of an artificial intelligence algorithm in isolation is therefore to be expected. In Europe, under Article 52 of the 1973 European Patent Convention, computer programs as such are not patentable. The underlying programming of an artificial intelligence system would thus not be patentable under this legal system. Copyright is perhaps the most obvious form of intellectual property protection for artificial intelligence. Source code has long been recognized as a “work” within the meaning of the Canadian Copyright Act and of similar legislation in most other countries. Some jurisdictions have even enacted laws specifically aimed at software protection.5 On this issue, an earlier Supreme Court of Canada ruling, Apple Computer, Inc. v. Mackintosh Computers Ltd.,6 is of some interest: in that case, the Court held that computer programs embedded in ROM (read-only memory) chips are works protected by copyright. A similar conclusion had been reached earlier by a U.S. court.7 These decisions are meaningful with respect to artificial intelligence systems because they extend copyright protection not only to code programmed in complex languages or on advanced artificial intelligence platforms, but also to the resulting object code, even on electronic media such as ROM chips. Copyright, however, does not protect ideas or the general principles of a particular piece of code; it only protects the expression of those ideas or principles. In addition to copyright, the protection afforded by trade secrets should not be underestimated.
More specifically, in the field of computer science, it is rare for customers to have access to the full source code. Furthermore, in artificial intelligence, source code is usually quite complex, and it is precisely this technological complexity that contributes to its protection.8 This approach is particularly appealing for businesses providing software as a remote service: users only have access to an interface, never to the source code or the compiled code, making it almost impossible to reverse engineer the technology. However, when an artificial intelligence system is protected only as a trade secret, there is always the risk that a leak originating with one or more employees will allow competitors to learn the source code, its structure or its particularities, and it would be nearly impossible to prevent source code from circulating online after such a leak. Companies may attempt to bolster the protection of their trade secrets with confidentiality agreements, but unfortunately this is insufficient where employees act in bad faith or in cases of industrial espionage. It would therefore be wise to implement knowledge-splitting measures within a company, so that only a restricted number of employees have access to all the critical information. Incidentally, it would be strategic for an artificial intelligence provider to make sure that its customers highlight its trademark, in the manner of the “Intel Inside” cooperative marketing strategy, to promote its system with potential customers. In the case of artificial intelligence systems sold commercially, it is also important to consider intellectual property in the learning outcomes that result from the systems’ use. This raises the issue of ownership. Does a database generated by an artificial intelligence system developed by a software supplier, while being used by one of its customers, belong to the supplier or to the customer? Often, the contract between the parties will govern the situation. However, a business may legitimately wish to retain the intellectual property in the databases generated by its internal use of the software, specifically where it provides its operational data or “trains” the artificial intelligence system through interaction with its employees. The desire to maintain the confidentiality of databases resulting from the use of artificial intelligence suggests that they are assimilable to trade secrets. Whether such databases are considered works under copyright law, however, would be determined on a case-by-case basis. A court would have to determine whether the databases are the product of the exercise of the skill and judgment of one or more authors, as required by Canadian jurisprudence in order to constitute “works.”9 Although situations where employees “train” an artificial intelligence system are more readily assimilable to an exercise of skill and judgment, databases constituted autonomously by a system could escape copyright protection: “No copyright can subsist in […] data. The copyright must exist in the compilations analysis thereof.”10 In addition to the issues raised above, there is the more prospective issue of inventions created by artificial intelligence systems. So far, such systems have been used to identify research areas with opportunities for innovation.
For example, data mining systems are already used to analyze patent texts, ascertain emerging fields of research, and even find “available” conceptual areas for potential patents.11 Artificial intelligence systems may be used in coming years to mechanically draft patent applications, including patent claims covering potentially novel inventions.12 Can artificial intelligence hold intellectual property rights, for instance with respect to patents or copyrights? This is highly doubtful, given that current legislation attributes rights to inventors and creators who must be natural persons, at least in Canada and the United States.13 The question then arises: would the intellectual property in the invention be granted to the designers of the artificial intelligence system? Our view is that the law as it stands is ill-adapted in this regard, because historically, in the area of patents, intellectual property was granted to the inventive person and, in the area of copyright, to the person who exercised skill and judgment. We also wonder whether a patent would be invalidated, or a work enter the public domain, on the ground that a substantial portion of it was generated by artificial intelligence (which is not the case in this newsletter!). Until these questions are settled, lawyers should familiarize themselves with the underlying concepts of artificial intelligence and, conversely, IT professionals should familiarize themselves with the concepts of intellectual property. For entrepreneurs who design or use artificial intelligence systems, constant consideration of intellectual property issues is essential to protect their achievements. Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal particularities, particularly the various branches and applications of artificial intelligence that will rapidly appear in all businesses and industries.

    573 U.S. _, 134 S. Ct. 2347 (2014). Vehicle Intelligence and Safety v. Mercedes-Benz, 78 F. Supp. 3d 884 (2015), upheld on appeal, Federal Circuit, No. 2015-1411 (U.S.). [1982] 1 F.C. 845 (F.C.A.). Canada (Attorney General) v. Amazon.com, Inc., [2012] 2 F.C.R. 459, 2011 FCA 328. For example, in Brazil: Lei do Software No. 9.609 of February 19, 1998; in Europe: Directive 2009/24/EC on the legal protection of computer programs. [1990] 2 S.C.R. 209, 1990 CanLII 119 (SCC). Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983) (U.S.). Keisner, A., Raffo, J., & Wunsch-Vincent, S. (2015). Breakthrough technologies - Robotics, innovation and intellectual property (No. 30). World Intellectual Property Organization, Economics and Statistics Division. CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13, [2004] 1 S.C.R. 339. See, for example: Geophysical Service Incorporated v. Canada-Nova-Scotia Offshore Petroleum Board, 2014 FC 450. See, for example: Lee, S., Yoon, B., & Park, Y. (2009). An approach to discovering new technology opportunities: Keyword-based patent map approach. Technovation, 29(6), 481-497; Abbas, A., Zhang, L., & Khan, S. U. (2014). A literature review on the state-of-the-art in patent analysis. World Patent Information, 37, 3-13. Hattenbach, B., & Glucoft, J. (2015). Patents in an Era of Infinite Monkeys and Artificial Intelligence. Stan. Tech. L. Rev., 19, 32. Supra, note 7.

  • When artificial intelligence is discriminatory

    Artificial intelligence has undergone significant developments in the last few years, particularly in respect of what is now known as deep learning.1 This method is an extension of the neural networks that have been used for several years in machine learning. Deep learning, like any other form of machine learning, requires that the artificial intelligence system be exposed to various situations so that it can learn to react to situations resembling its previous experiences. In business, artificial intelligence systems are used, among other things, to serve the needs of customers, either directly or by supporting employees’ interventions. The quality of the services that a business provides is therefore increasingly dependent on the quality of these artificial intelligence systems. One must not, however, make the mistake of assuming that such a computer system will automatically perform its tasks flawlessly and in compliance with the values of the business or its customers. For instance, researchers at Carnegie Mellon University recently demonstrated that a system for presenting targeted advertising to Internet users systematically offered less well-paid positions to women than to men.2 In other words, this system behaved in what could be called a sexist way. Although the researchers could not pinpoint the origin of the problem, they were of the view that it was probably a case of the advertising placement supplier losing control over its automated system, and they noted the inherent risks of large-scale artificial intelligence systems. Various artificial intelligence systems have had similar failures in the past, demonstrating racist behaviour, even to the point of forcing an operator to suspend access to its system.3 In this respect, the European Union adopted, in April 2016, a regulation on the processing of personal information which, except in some specific cases, prohibits automated decisions based on certain personal data, including “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation […].”4 Some researchers question how this regulation will be applied, particularly where discrimination arises incidentally, without the operator of the artificial intelligence system intending it.5 In Québec, it is reasonable to believe that a business using an artificial intelligence system that acts in a discriminatory manner within the meaning of the Charter of Human Rights and Freedoms would be exposed to legal action, even in the absence of a specific regulation such as the European Union’s. Indeed, the person responsible for an item of property such as an artificial intelligence system could incur liability for the harm or damage caused by the autonomous action of that property. Furthermore, the failure to put in place reasonable measures to avoid discrimination would most probably be taken into account in the legal analysis of such a situation.
Accordingly, special vigilance is required when the operation of an artificial intelligence system relies on data already accumulated within the business, on data from third parties (particularly what is often referred to as big data), or on data that will be fed to the artificial intelligence system by employees of the business or by its users during a “learning” period. All these data sources, which incidentally are subject to obligations under privacy laws, may be biased to various degrees. The effects of biased sampling are neither new nor restricted to human rights; the phenomenon is well known to statisticians. During World War II, the U.S. Navy asked the mathematician Abraham Wald to provide statistics on the parts of bomber planes most often hit, in order to determine which areas of the planes should be reinforced. Wald demonstrated that the data on the planes returning from missions was biased, as it did not take into account the planes shot down during those missions. The areas damaged on the returning planes did not need to be reinforced; rather, the places that had not been hit were the ones that did. In the context of business operations, an artificial intelligence system fed biased data may thus make erroneous decisions, with disastrous consequences for the business from a human, economic and operational point of view. For instance, if an artificial intelligence system undergoes learning sessions conducted by employees of the business, their behaviour will undoubtedly be reflected in the system’s own subsequent behaviour. This may show up in the judgments the artificial intelligence system makes in respect of customer requests, but also directly in its capacity to adequately solve the technical problems submitted to it. There is therefore a risk of perpetuating the problematic behaviour of some employees. Researchers at the Machine Intelligence Research Institute have proposed various approaches to minimize the risks and make the machine learning of artificial intelligence systems consistent with their operators’ interests.6 According to these researchers, it would certainly be appropriate to adopt a prudent approach to the objectives imposed on such systems, in order to avoid their producing extreme or undesirable solutions. Moreover, it would be important to establish informed supervision procedures through which the operator may ascertain that the artificial intelligence system performs, as a whole, in a manner consistent with expectations; a minimal sketch of one such supervision measure follows below. From the foregoing, a business wishing to integrate an artificial intelligence system into its operations must take the implementation phase, during which the system will “learn” what is expected of it, very seriously. It will be important to have in-depth discussions with the supplier on the operation and performance of its technology and to express as clearly as possible, in a contract, the business’s expectations for the system to be implemented. The implementation of the artificial intelligence system in the business must be carefully planned and assigned to trustworthy employees and consultants who possess a high level of competence with respect to the relevant tasks.
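    By way of illustration, one very simple supervision measure is to audit the system’s outputs for disparities across a protected attribute. The minimal Python sketch below uses hypothetical decision records and an arbitrary disparity threshold; it is a starting point under those assumptions, not a complete fairness methodology or a legal standard:

```python
from collections import defaultdict

# Hypothetical audit records: (protected_group, decision), 1 = favourable.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

# Favourable-outcome rate per group.
rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)  # here: {'A': 0.75, 'B': 0.25}

# A wide gap between groups is a warning sign calling for human review;
# the 0.2 threshold is an arbitrary placeholder, not a legal standard.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity detected: escalate for human review")
```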
    As for the supplier of the artificial intelligence system, it must be ensured that the data provided to it is not biased, inaccurate or otherwise defective, so that the objectives set out in the contract as to the expected performance of the system may reasonably be reached, thus minimizing the risk of litigation arising from discriminatory or otherwise objectionable behaviour of the artificial intelligence system. Not only can such litigation be expensive, it could also harm the reputation of both the supplier and its customer.

    LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on (pp. 598-617). IEEE; see also: Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112. Reese, H. (2016). Top 10 AI failures of 2016. The case of Tay, Microsoft’s system, has been much discussed in the media. Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Goodman, B., & Flaxman, S. (2016, June). EU regulations on algorithmic decision-making and a “right to explanation.” In ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). Taylor, J., Yudkowsky, E., LaVictoire, P., & Critch, A. (2016). Alignment for advanced machine learning systems. Technical Report 2016-1, MIRI.

  • Artificial intelligence and its legal challenges

    Is there a greater challenge than to write a legal article on an emerging technology that does not yet exist in its absolute form? Artificial intelligence, through a broad spectrum of branches and applications, will impact corporate and business integrity, corporate governance, the distribution of financial products and services, intellectual property rights, privacy and data protection, employment, civil and contractual liability, and a significant number of other legal fields.

    What is artificial intelligence?

    Artificial intelligence is “the science and engineering of making intelligent machines, especially intelligent computer programs.”1 Essentially, artificial intelligence technologies aim to allow machines to mimic “cognitive” functions of humans, such as learning and problem solving, so that they can conduct tasks normally performed by humans. In practice, the functions of artificial intelligence are achieved by accessing and analyzing massive data (also known as “big data”) via certain algorithms. As set forth in a report published by McKinsey & Company in 2013 on disruptive technologies, “[i]mportant technologies can come in any field or emerge from any scientific discipline, but they share four characteristics: high rate of technological change, broad potential scope of impact, large economic value that could be affected, and substantial potential for disruptive economic impact.”2 Despite the interesting debate over the impact of artificial intelligence on humanity,3 the development of artificial intelligence has been on an accelerated path in recent years, and we have witnessed some major breakthroughs. In March 2016, Google’s computer program AlphaGo beat a world champion Go player, Lee Sedol, by 4 to 1 in the ancient Chinese board game. This breakthrough reignited the world’s interest in artificial intelligence, and technology giants like Google and Microsoft, to name a few, have since increased their investments in the research and development of artificial intelligence. This article discusses some of the applications of artificial intelligence from a legal perspective and certain areas of law that will need to adapt - or be adapted - to the complex challenges brought by current and new developments in artificial intelligence.

    Legal challenges

    Artificial intelligence and its potential impacts have been compared to those of the Industrial Revolution, a form of transition to new manufacturing processes using new systems and innovative applications and machines.

    Health care

    Artificial intelligence certainly has a great future in the health care industry. The ability of artificial intelligence applications to analyze massive amounts of data makes them a powerful tool to predict drug performance and to help patients find the right drug or dosage for their situation. For example, IBM’s Watson Health program “is able to understand and extract key information by looking through millions of pages of scientific medical literature and then visualize relationships between drugs and other potential diseases.”4 Some artificial intelligence features can also help verify whether a patient has taken his or her pills, through a smartphone application that captures and analyzes evidence of medication ingestion.
In addition to privacy and data protection concerns, the potential legal challenges faced by artificial intelligence applications in the health care industry will include civil and contractual liability. If a patient follows a recommendation made by an artificial intelligence system and it turns out to be the wrong one, who will be held responsible? This also raises complex legal questions, combined with technological concerns, as to the reliability of artificial intelligence programs and software, and as to how employees will deal with such applications in their day-to-day tasks.

    Customer services

    A number of computer programs have been created to converse with people via audio or text messages. Companies use such programs for customer service or for entertainment purposes, for example in messaging platforms like Facebook Messenger and Snapchat. Although such programs are not necessarily pure applications of artificial intelligence, some of their features, actual or in development, could be considered artificial intelligence. When such computer programs are used to enter into formal contracts (e.g., placing orders, confirming consent, etc.), it is important to make sure the applicable terms and conditions are communicated to the individual at the end of the line or that a proper disclaimer is duly disclosed. Contract enforcement questions will inevitably be raised as a result of the use of such programs and systems.

    Financial industry and fintech

    In recent years, many research and development activities have been carried out in the robotics, computer and tech fields in relation to financial services and the fintech industry. The applications of artificial intelligence in the financial industry will span a broad spectrum of branches and programs, including analyzing customers’ investing behaviours and analyzing big data to improve investment strategies and the use of derivatives. Legal challenges associated with artificial intelligence applications in the financial industry could relate, for example, to the consequences of malfunctioning algorithms. The constant relationship between human interventions and artificial intelligence systems, for example in a stock trading platform, will have to be carefully set up to avoid, or at least confine, certain legal risks.

    Autonomous vehicles

    Autonomous vehicles are also known as “self-driving cars,” although the vehicles currently permitted on public roads are not completely autonomous. In June 2011, the state of Nevada became the first jurisdiction in the world to allow autonomous vehicles to operate on public roads. Under Nevada law, an autonomous vehicle is a motor vehicle that is “enabled with artificial intelligence and technology that allows the vehicle to carry out all the mechanical operations of driving without the active control or continuous monitoring of a natural person.”5 Canada has not yet adopted any law to legalize autonomous cars. Among the significant legal challenges facing autonomous cars are the issues of liability and insurance. When a car drives itself and an accident happens, who should be responsible? (For additional discussion of this subject under Québec law, refer to the Need to Know newsletter, “Autonomous vehicles in Québec: unanswered questions,” by Léonie Gagné and Élizabeth Martin-Chartrand.)
We also note that interesting arguments will be raised respecting autonomous cars carrying on commercial activities in the transportation industry, such as the shipping and delivery of commercial goods.

Liability regimes

The fundamental nature of artificial intelligence technology is itself a challenge to contractual and extra-contractual liability. When a machine makes, or purports to make, autonomous decisions based on data provided by its users and on additional data autonomously acquired from its own environment and applications, its performance and the end results can be unpredictable. In this context, Book Five of the Civil Code of Québec (CCQ) on obligations raises highly interesting and challenging legal questions in view of anticipated artificial intelligence developments.

Article 1457 of the CCQ states that:

Every person has a duty to abide by the rules of conduct incumbent on him, according to the circumstances, usage or law, so as not to cause injury to another. Where he is endowed with reason and fails in this duty, he is liable for any injury he causes to another by such fault and is bound to make reparation for the injury, whether it be bodily, moral or material in nature. He is also bound, in certain cases, to make reparation for injury caused to another by the act, omission or fault of another person or by the act of things in his custody.

Article 1458 of the CCQ further provides that:

Every person has a duty to honour his contractual undertakings. Where he fails in this duty, he is liable for any bodily, moral or material injury he causes to the other contracting party and is bound to make reparation for the injury; neither he nor the other party may in such a case avoid the rules governing contractual liability by opting for rules that would be more favourable to them.

Article 1465 of the CCQ states that:

The custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault.

The issues of foreseeable or direct damages, depending on the liability regime, and of the "autonomous act of the thing" will inescapably raise interesting debates in the context of artificial intelligence applications in the near future. In which circumstances could the makers or suppliers of artificial intelligence applications, the end users and the other parties benefiting from such applications be held liable - or not - in connection with the results produced by artificial intelligence applications and the use of such results? Here again, the link between human intervention - or the absence thereof - and artificial intelligence systems in the global chain of services, products and outcomes provided to a person will play an important role in determining such liability.

Among the questions that remain unanswered: could autonomous systems using artificial intelligence applications be "personally" held liable at some point? And how are we going to deal with potential legal loopholes endangering the rights and obligations of all parties interacting with artificial intelligence?

In January 2017, the Committee on Legal Affairs of the European Parliament ("EU Committee") submitted a motion to the European Parliament calling for legislation on issues relating to the rise of robotics. In the EU Committee's recommendations, liability law reform is identified as one of the crucial issues.
It recommends that "the future legislative instrument should provide for the application of strict liability as a rule, thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behavior of a robot and the damage suffered by an injured party".6 The EU Committee also suggests that the European Parliament consider implementing a mandatory insurance scheme and/or a compensation fund to ensure that victims are compensated.

What is next on the artificial intelligence front?

While scientists are developing artificial intelligence faster than ever in many different fields and sciences, some areas of the law may need to be adapted to deal with the associated challenges. It is crucial to be aware of the legal risks and to make informed decisions when considering the development and use of artificial intelligence. Artificial intelligence will have to learn to listen, to appreciate and understand concepts and ideas, sometimes without any predefined opinions or beacons, and be trained to anticipate, just like human beings (even if some could argue that listening and understanding remain difficult tasks for humans themselves). And at some point in time, artificial intelligence developments will gain momentum when two or more artificial intelligence applications are combined to create a superior or ultimate artificial intelligence system. The big question is: who will initiate such a clever combination first, humans or the artificial intelligence applications themselves?

1. John McCarthy, What is artificial intelligence?, Stanford University.
2. Disruptive technologies: Advances that will transform life, business, and the global economy, McKinsey Global Institute, May 2013.
3. Alex Hern, Stephen Hawking: AI will be "either best or worst thing" for humanity, The Guardian.
4. Eugene Borukhovich, How will artificial intelligence change healthcare?, World Economic Forum.
5. Nevada Administrative Code, Chapter 482A - Autonomous Vehicles, NAC 482A.010.
6. Committee on Legal Affairs, Draft report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), article 27.

  • Artificial intelligence: contractual obligations beyond the buzzwords

Can computers learn and reason? If so, what are the limits of the tasks they can be given? These questions have been the subject of countless debates as far back as 1937, when Alan Turing published his work on computable numbers.1 Many researchers have devoted themselves to developing methods that allow computers to interact more easily with human beings and to learn from the situations they encounter. Generally speaking, the aim was to have computers think and react as a human being would. In the early 1960s, Marvin Minsky, a noted MIT researcher, outlined what he regarded as the steps along the path to artificial intelligence.2 The power of today's computers and the capacity to store phenomenal amounts of information now allow artificial intelligence to be integrated into business and daily life, using processes known as "machine learning", "data mining" or "deep learning", the last of which has undergone rapid development in recent years.3

The use of artificial intelligence in business raises many legal issues that are of crucial importance when companies enter into contracts for the sale or purchase of artificial intelligence products and services. From a contractual perspective, it is important to properly frame the obligations and expectations of each party.

For suppliers of artificial intelligence products, a major issue is their liability in the event of product malfunction. For example, could the designers of an artificial intelligence system used as an aid in making medical decisions be held liable, directly or indirectly, for a medical mistake resulting from erroneous information or suggestions given by the system? It may be appropriate to ensure that such contracts expressly require that the professionals using these systems maintain control over the results, regardless of the context in which the system is operating, be it medical, engineering or business management.

In return, companies wishing to use such products must clearly frame their targeted objectives. This includes not only a stated performance objective for the artificial intelligence system, but also a definition of what would constitute product failure and the legal consequences thereof. For example, in a contract for the use of artificial intelligence in production management, is the objective to improve performance or to reduce specific problems? And what happens if the desired results are not achieved?

Another major issue is the intellectual property in the data integrated and generated by a particular artificial intelligence product. Many artificial intelligence systems require the use of a large volume of the company's data in order to acquire the necessary learning "experience". But who owns that data, and who owns the results of what the artificial intelligence system has learned? For example, for an artificial intelligence system to become effective, a company may have to supply an enormous quantity of data and invest considerable human and financial resources to guide its learning. Does the supplier of the artificial intelligence system acquire any rights to such data? Can it use what its artificial intelligence system learned at one firm to benefit its other clients? In extreme cases, this would mean that the experience acquired by a system within a particular company could end up benefiting its competitors.
Where the artificial intelligence system is used in applications targeting consumers or company employees, the issues related to the confidentiality of the data used by the system and the protection of the privacy of such persons should not be overlooked. The above are some of the contractual issues that must be considered and addressed to prevent problems from arising.

Lavery Legal Lab on Artificial Intelligence (L3AI)

We anticipate that within a few years, all companies, businesses and organizations, in every sector and industry, will use some form of artificial intelligence in their day-to-day operations, whether to improve productivity or efficiency, ensure better quality control, conquer new markets and customers, implement new marketing strategies, or improve processes, automation and the profitability of operations. For this reason, Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence that will rapidly appear in companies and industries. The development of artificial intelligence, through a broad spectrum of branches and applications, will also have an impact on many legal sectors and practices, from intellectual property to protection of personal information, including corporate and business integrity and all fields of business law. In our upcoming publications, the members of the Lavery Legal Lab on Artificial Intelligence (L3AI) will analyze specific applications of artificial intelligence in various sectors and industries.

1. Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1), 230-265.
2. Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49(1), 8-30.
3. See LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

  • Artificial Intelligence and the 2017 Canadian Budget: is your business ready?

The March 22, 2017 Budget of the Government of Canada, through its "Innovation and Skills Plan" (http://www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf), states that Canadian academic and research leadership in artificial intelligence will be translated into a more innovative economy and increased economic growth.

The 2017 Budget proposes to provide renewed and enhanced funding of $35 million over five years, beginning in 2017-2018, to the Canadian Institute for Advanced Research (CIFAR), which connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics including artificial intelligence and deep learning.

These measures are in addition to a number of tax measures that support the artificial intelligence sector at both the federal and provincial levels. In Canada and in Québec, the Scientific Research and Experimental Development (SR&ED) Program provides a twofold benefit: SR&ED expenses are deductible from income for tax purposes, and an SR&ED investment tax credit (ITC) is available to reduce income tax. In some cases, the remaining ITC can be refunded. In Québec, a refundable tax credit is also available for the development of e-business, where a corporation mainly operates in the field of computer system design or software publishing and its activities are carried out in an establishment located in Québec.

The 2017 Budget aims to improve Canada's competitive and strategic advantage in the field of artificial intelligence and, by extension, that of Montréal, a city already enjoying an international reputation in this field. It recognizes that artificial intelligence, despite the debates over ethical issues that currently stir up passions within the international community, could help generate strong economic growth by improving the way in which we produce goods, deliver services and tackle all kinds of social challenges. The Budget also adds that artificial intelligence "opens up possibilities across many sectors, from agriculture to financial services, creating opportunities for companies of all sizes, whether technology start-ups or Canada's largest financial institutions". This influence of Canada on the international scene cannot be achieved without government support for research programs and the expertise contributed by our universities. The Budget is therefore a step in the right direction to ensure that all activities related to artificial intelligence, from R&D to marketing, as well as design and distribution, remain here in Canada.

The 2017 Budget also provides $125 million to launch a Pan-Canadian Artificial Intelligence Strategy for research and talent, to promote collaboration between Canada's main centres of expertise and reinforce Canada's position as a leading destination for companies seeking to invest in artificial intelligence and innovation.
