Technology

Overview

As an important component of the Canadian economy, the booming technology industry is changing the way we do business. The major impact of technological innovations and their use by companies creates new challenges for both the companies that design and develop them and those that benefit from their implementation. The legal issues surrounding technological innovations and the business models and processes they give rise to are sometimes difficult to identify in the short term, but it is important for companies to consider them carefully to protect their rights and achieve their objectives. 

  1. Can artificial intelligence be designated as an inventor in a patent application?

    Artificial intelligence (“AI”) is becoming increasingly sophisticated, and the fact that this human invention can now generate its own inventions opens the door to new ways of conceptualizing the notion of “inventor” in patent law. In a recent ruling, however, the Supreme Court of the United Kingdom (“UK Supreme Court”) found that an artificial intelligence system cannot be the author of an invention within the meaning of the applicable regulations under which patents are granted. This position is consistent with that of several courts around the world that have already ruled on the issue. But what of Canada, where the courts have yet to address the matter? In this bulletin, we will take a look at the decisions handed down by the UK Supreme Court and its counterparts in other countries before considering Canada’s position on the issue.

In Thaler (Appellant) v Comptroller-General of Patents, Designs and Trade Marks,1 the UK Supreme Court ruled that “an inventor must be a person”.

Summary of the decision

In 2018, Dr. Stephen Thaler filed patent applications for two inventions described as having been generated by an autonomous AI system. The machine in question, DABUS, was therefore designated as the inventor in the applications. Dr. Thaler claimed that, as the owner of DABUS, he was entitled to file patent applications for inventions generated by his machine. That being so, he alleged that he was not required to name a natural person as the inventor. Both the High Court of Justice and the Court of Appeal dismissed Dr. Thaler’s appeal from the decision of the Intellectual Property Office of the United Kingdom not to proceed with the patent applications, in particular because the designated inventor was not valid under the Patents Act 1977. The UK Supreme Court, the country’s final court of appeal, also dismissed Dr. Thaler’s appeal.
In a unanimous decision, it concluded that the law is clear in that “an inventor within the meaning of the 1977 Act must be a natural person, and DABUS is not a person at all, let alone a natural person: it is a machine”.2 Although there was no doubt that DABUS had created the inventions in question, that did not mean that the courts could extend the notion of inventor, as defined by law, to include machines.

An ongoing trend

The UK Supreme Court is not the first to reject Dr. Thaler’s arguments. The United States,3 the European Union4 and Australia5 have adopted similar positions, concluding that only a natural person can qualify as an inventor within the meaning of the legislation applicable in their respective jurisdictions. The UK ruling is part of the Artificial Inventor Project’s cross-border attempt to ensure that the DABUS machine—and AI in general—is recognized as a generative tool capable of generating patent rights for the benefit of AI system owners.

To date, only South Africa has issued a patent to Dr. Thaler naming DABUS as the inventor.6 This country is the exception that proves the rule. It should however be noted that the Companies and Intellectual Property Commission of South Africa does not review applications on their merits. As such, no reason was given for considering AI as the inventor.

More recently, in February of this year, the United States Patent and Trademark Office issued guidance on AI-assisted inventions. The guidance confirms the judicial position and states in particular that “a natural person must have significantly contributed to each claim in a patent application or patent”.7

What about Canada?

In 2020, Dr. Thaler also filed a Canadian patent application for inventions generated by DABUS.8 The Canadian Intellectual Property Office (“CIPO”) issued a notice of non-compliance in 2021, establishing its initial position as follows:

Because for this application the inventor is a machine and it does not appear possible for a machine to have rights under Canadian law or to transfer those rights to a human, it does not appear this application is compliant with the Patent Act and Rules.9

However, CIPO specified that it was open to receiving the applicant’s arguments on the issue, as follows:

Responsive to the compliance notice, the applicant may attempt to comply by submitting a statement on behalf of the Artificial Intelligence (AI) machine and identify, in said statement, himself as the legal representative of the machine.10

To date, CIPO has issued no notice of abandonment and the application remains active. Its status in Canada is therefore unclear. It will be interesting to see whether Dr. Thaler will try to sway the Canadian courts to rule in his favour after many failed attempts in other jurisdictions around the world, and most recently in the UK Supreme Court.

At first glance, the Patent Act11 (the “Act”) does not prevent an AI system from being recognized as the inventor of a patentable invention. In fact, the term “inventor” is not defined in the Act. Furthermore, nowhere is it stated that an applicant must be a “person,” nor is there any indication to that effect in the provisions governing the granting of patents. The Patent Rules12 offer no clarification in that regard either. This contrasts with the UK legislation, where the clear use of the term “person” in the wording of the relevant sections was a key consideration in the UK Supreme Court’s analysis in Thaler. Case law on the subject is still ambiguous.
According to the Supreme Court of Canada, given that the inventor is the person who took part in conceiving the invention, the question to ask is “[W]ho is responsible for the inventive concept?”13 That said, however, we note that the conclusion reached was that a legal person—as opposed to a natural person—cannot be considered an inventor.14 The fact is that the Canadian courts have never had to rule on the specific issue of recognizing AI as an inventor, and until such time as the courts render a decision or the government takes a stance on the matter, the issue will remain unresolved.

Conclusion

Given that Canadian law is not clear on whether AI can be recognized as an inventor, now would be a good time for Canadian authorities to clarify the issue. As the UK Supreme Court has suggested, the place of AI in patent law is a current societal issue, one that the legislator will ultimately have to settle.15 As such, it is only a matter of time before the Act is amended or CIPO issues a directive.

Moreover, in addition to having to decide whether AI legally qualifies as an inventor, Canadian authorities will have to determine whether a person can be granted rights to an invention that was actually created by AI. The question as to whether an AI system owner can hold a patent on an invention generated by their machine was raised in Thaler. Once again, unlike the UK’s patent act,16 our Patent Act does not close the door to such a possibility. Canadian legislation contains no comprehensive list of the categories of persons to whom a patent may be granted, for instance.

If we were to rewrite the laws governing intellectual property, given that the main purpose of such laws is to encourage innovation and creativity, perhaps a better approach would be to allow AI system owners to hold patent rights rather than recognizing the AI as an inventor.
Patent rights are granted on the basis of an implicit understanding: a high level of protection is provided in exchange for sufficient disclosure to enable a person skilled in the art to reproduce an invention. This ensures that society benefits from such inventions and that inventors are rewarded. Needless to say, arguing that machines need such an incentive is difficult. Designating AI as an inventor and granting it rights in that respect is therefore at odds with the very purpose of patent protection.

That said, an AI system owner who has invested time and energy in designing their system could be justified in claiming such protection for the inventions it generates. In such a case, and given the current state of the law, the legislator would likely have to intervene. Would this proposed change spur innovation in the field of generative AI? We are collectively investing a huge amount of “human” resources in developing increasingly powerful AI systems. Will there come a time when we can no longer consider that human resources were involved in making AI-generated technologies? Should it come to that, giving preference to AI system owners could become counterproductive.

In any event, for the time being, a sensible approach would be to emphasize the role that humans play in AI-assisted inventions, making persons the inventors rather than AI. As concerns inventions conceived entirely by an AI system, trade secret protection may be a more suitable solution.

The professionals on our intellectual property team are at your disposal to assist you with patent registration and provide you with a clearer understanding of the issues involved.

1. [2023] UKSC 49 [Thaler].
2. Ibid., para. 56.
3. See the decision of the United States Court of Appeals for the Federal Circuit in Thaler v Vidal, 43 F. 4th 1207 (2022), application for appeal to the Supreme Court of the United States dismissed.
4. See the decision of the Boards of Appeal of the European Patent Office in J 0008/20 (Designation of inventor/DABUS) (2021), request to refer questions to the Enlarged Board of Appeal denied.
5. See the decision of the Full Court of the Federal Court of Australia in Commissioner of Patents v Thaler, [2022] FCAFC 62, application for special leave to appeal to the High Court of Australia denied.
6. ZA 2021/03242.
7. Federal Register: Inventorship Guidance for AI-Assisted Inventions.
8. CA 3137161.
9. Notice from CIPO dated February 11, 2022, in Canadian patent application 3137161.
10. Ibid.
11. R.S.C., 1985, c. P-4.
12. SOR/2019-251.
13. Apotex Inc. v. Wellcome Foundation Ltd., 2002 SCC 77, paras. 96–97.
14. Sarnoff Corp. v. Canada (Attorney General), 2008 FC 712, para. 9.
15. Thaler, paras. 48–49, 79.
16. Ibid., para. 79.

  2. The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada1 and elsewhere2 are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies are also classified within the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT: the Act to establish a legal framework for information technology (the “Act”). All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

The very broad notion of “technology-based documents”

The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible”.3 The Act lists a few examples of technology-based documents contemplated by applicable laws, including online forms, reports, photos and diagrams—even electrocardiograms! It is therefore understandable that this notion easily applies to the user interface forms used on various technological platforms.4 Moreover, technology-based documents are not limited to personal information. They may also pertain to company or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website.5 A less recent decision also recognized that the contents of electronic agendas were admissible as evidence.6

Because of the size of their algorithms, various AI technologies are offered as software as a service (SaaS) or platform as a service (PaaS). In most cases, the information entered by user companies is transmitted to supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced customer relationship management (CRM) systems and electronic file analysis. It is also the case for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees.
In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

Reciprocal obligations

The Act sets out specific obligations when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

    26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.

    During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

This section of the Act therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the platform must inform the supplier of the privacy protection required for the information stored on it. On the other hand, the supplier is required to put in place “technological means” to ensure security, integrity and confidentiality, in line with the privacy protection requested by the user.

The Act does not specify what technological means must be put in place. However, they must be reasonable, in line with the sensitivity of the technology-based documents involved, as seen from the perspective of someone with expertise in the field.
Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? This question must be addressed in light of the information transmitted by the user of the platform concerning the privacy protection required for its technology-based documents. The supplier, however, must not conceal the security risks of its IT platform from the user, since doing so would violate the parties’ disclosure and good faith obligations.

Are any individuals involved?

These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.7 Under no circumstances should an AI algorithm suggest that a merchant should not enter into a contract with any individual on any such discriminatory basis.8 In addition, anyone who gathers personal information by technological means making it possible to profile certain individuals must notify them beforehand.9

To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions on how these laws apply to your AI systems, please feel free to contact our professionals.
1. Bill C-27, Digital Charter Implementation Act, 2022.
2. In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
3. Act to establish a legal framework for information technology, CQLR, c. C-1.1, sec. 3.
4. Ibid., sec. 71.
5. Tessier v. Charland, 2023 QCCS 3355.
6. Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
7. Charter of Human Rights and Freedoms, sec. 10.
8. Ibid., sec. 12.
9. Act respecting the protection of personal information in the private sector, CQLR, c. P-39.1, sec. 8.1.

  3. Smart product liability: issues and challenges

    Introduction

In 2023, where do we stand in terms of liability where smart products are concerned? The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

These “smart products,” whether they are intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being. For the purposes of our analysis, the term “smart products” refers to products that have:

- Self-learning capabilities, meaning that they can perform specific tasks without being under a human being’s immediate control;
- Interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings;
- Autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (optional criterion).2

These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter referred to as “AI”).

Applying general law rules of liability to smart products

Although Canada prides itself on being a “world leader in the field of artificial intelligence,”3 it has yet to enact its first AI law. The regulation of smart products in Quebec is still in its infancy. To this day, apart from the regulatory framework that applies to autonomous vehicles, there is no legislation in force that provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products.
There are two factors that have a major impact on the liability that applies to smart products, namely transparency and the apportionment of liability, and both should be considered in developing a regulatory framework for AI.4 But where does human accountability come in?

Lack of transparency in AI and product liability

When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the “lack of transparency” or “black box” problem associated with AI.5

The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed, as manufacturers are presumed to have knowledge of any defects.8 Manufacturers have two means to absolve themselves from liability:9

- A manufacturer may claim that a given defect is the result of superior force or a fault on the part of the consumer or a third party; or
- A manufacturer may argue that, at the time the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge.

This last means is specifically aimed at the risks inherent in technological innovation.10 That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, and how it applies depends on the evolution of knowledge about the risks associated with the product.11 As such, the lack of transparency in AI can make it difficult to assign liability.
Challenges in apportioning liability and human accountability

There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor. In this context, it would be reasonable to expect that products’ smart components would give rise to many similar calls in warranty, resulting in highly complex litigation, which could further complicate the apportionment of liability.

Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.” There are two types of custodians of smart products:

- The person who has the power of control, direction and supervision over a product at the time of its use (frontend custody);
- The person who holds these powers over the algorithm that gives the product its autonomy (backend custody).13

Either one of these custodians could be held liable should it contribute to the harm through its own fault. As such, apportioning liability between the human user and the custodians of the AI algorithm could be difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may prove complex.

C-27: Canadian bill on artificial intelligence

Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act.
If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade.15 Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now, the Canadian government refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.”16 We have reason to believe that this type of AI is what poses a high risk to users’ fundamental rights.

In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.”17 Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can provide manufacturers of AI technology with insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill’s six main principles are set out in the list below.18

- Transparency: providing the public with information about mitigation measures, the intended use of the Systems and the “content that it is intended to generate”;
- Oversight: providing Systems over which human oversight can be exercised;
- Fairness and equity: bringing to market Systems that can limit the potential for discriminatory outcomes;
- Safety: proactively assessing Systems to prevent “reasonably foreseeable” harm;
- Accountability: putting governance measures in place to ensure compliance with legal obligations applicable to Systems;
- Robustness: ensuring that Systems operate as intended.
To this, we add the principle of risk mitigation, considering the legal obligation to “mitigate” the risks associated with the use of Systems.19

Conclusion

Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries. That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country. Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they will be applied when civil liability issues arise.

1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th edition, 2020, 1-931.
2. Tara Qian Sun and Rony Medaglia, “Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare”, Government Information Quarterly, 2019, 36(2), pp. 368–383, online; European Parliament, Civil Law Rules on Robotics, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at europa.eu.
3. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
4. European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020), p. 3.
5. Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account”, Public Administration Review, 2020, online.
6. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1726 et seq.
7. Consumer Protection Act, CQLR, c. P-40.1, s. 38.
8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801. See also Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
9. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
10. Brousseau, para. 100.
11. Brousseau, para. 102.
12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
13. Céline Mangematin, Droit de la responsabilité civile et l’intelligence artificielle, https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l’épreuve de l’intelligence artificielle, p. 4.
14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
15. Bill C-27, summary and s. 5(1).
16. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
17. Bill C-27, s. 39(a).
18. AIDA Companion document.
19. Bill C-27, s. 8.
20. Tortoise Media, The Global AI Index 2023, available at tortoisemedia.com.

  1. Lexpert Recognizes Three Partners as Leading Technology and Health Lawyers in Canada

    On June 17, 2024, Lexpert recognized the expertise of three of our partners in its 2024 Lexpert Special Edition: Technology and Health. Chantal Desjardins, Isabelle Jomphe, Béatrice T. Ngatcha, Selena Lu and André Vautour now rank among Canada’s leaders in the area of technology and health.

Chantal Desjardins is a partner, lawyer and trade-mark agent in Lavery’s intellectual property group. She contributes actively to the development of her clients’ rights in this field, which includes the protection of trademarks, industrial designs, copyright, trade secrets, domain names and other related forms of intellectual property, in order to promote her clients’ business goals.

Isabelle Jomphe is a partner, lawyer and trade-mark agent in Lavery’s intellectual property group. Ms. Jomphe’s expertise includes trademarks, industrial designs, copyright, domain names, trade secrets and technology transfers, as well as advertising law, labelling and Charter of the French Language regulations. She is known for providing strategic and practical advice in all aspects of IP law, with an emphasis on trademarks. She advises clients on trade-mark clearance searches, filing strategies, opposition proceedings and litigation in Canada and abroad.

Béatrice T. Ngatcha is a lawyer and patent agent in Lavery’s intellectual property group. She is a patent agent registered to practise in Canada and the United States, a lawyer called to the Ontario Bar and a member of the Quebec Bar (c.j.c.). Béatrice holds a doctoral degree in chemistry from Université Laval and completed a post-doctoral fellowship at the National Research Council in Ottawa. In addition to a busy patent prosecution practice serving Canadian and foreign clients, Béatrice’s expertise is sought in the areas of intellectual property litigation, trade secrets, due diligence, strategy, portfolio value building, licensing and arbitration.
Selena Lu is a partner in the Business Law group and focuses her practice on mergers and acquisitions. She frequently advises clients abroad on commercial law matters relating to investment and expansion in Canada. Over the years, Selena has developed an interest in and acquired significant experience supporting clients through technological change. On a day-to-day basis, she advises clients on the legal impact of introducing new technologies. She also oversees the structuring and negotiation of mergers and acquisitions and of complex business relationships for developing, marketing and acquiring technologies.

André Vautour practises in the fields of corporate and commercial law and is particularly interested in corporate governance, strategic alliances, joint ventures, investment funds and mergers and acquisitions of private corporations. He also practises in the field of technology law, drafting technology development and transfer agreements, licensing agreements, distribution agreements, outsourcing agreements and e-commerce agreements.

About Lavery

Lavery is the leading independent law firm in Quebec. Its more than 200 professionals, based in Montréal, Québec City, Sherbrooke and Trois-Rivières, work every day to offer a full range of legal services to organizations doing business in Quebec. Recognized by the most prestigious legal directories, Lavery professionals are at the heart of what is happening in the business world and are actively involved in their communities. The firm’s expertise is frequently sought after by numerous national and international partners to provide support in cases under Quebec jurisdiction.
