Artificial Intelligence

Overview


Lavery Legal Lab on Artificial Intelligence (L3IA)


We anticipate that within a few years, companies, businesses and organizations in every sector and industry will use some form of artificial intelligence in their day-to-day operations, whether to improve productivity or efficiency, ensure better quality control, conquer new markets and customers, implement new marketing strategies, or improve processes, automation and the profitability of operations.

For this reason, Lavery created the Lavery Legal Lab on Artificial Intelligence (L3IA) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries.

"As soon as a company knows what it wants, the tools exist; it must make the best use of them, and our Lab is there to advise it in this regard."


The development of artificial intelligence, through a broad spectrum of branches and applications, will also have an impact on many legal sectors and practices, from intellectual property to protection of personal information, including corporate and business integrity and all fields of business law.

Discover our lexicon which demystifies the most commonly used terms in artificial intelligence:

Lexicon on Artificial Intelligence: Click here to learn more


  1. Can artificial intelligence be designated as an inventor in a patent application?

    Artificial intelligence (“AI”) is becoming increasingly sophisticated, and the fact that this human invention can now generate its own inventions opens the door to new ways of conceptualizing the notion of “inventor” in patent law. In a recent ruling, however, the Supreme Court of the United Kingdom (“UK Supreme Court”) found that an artificial intelligence system cannot be the author of an invention within the meaning of the applicable regulations under which patents are granted. This position is consistent with that of several courts around the world that have already ruled on the issue. But what of Canada, where the courts have yet to address the matter? In this bulletin, we will take a look at the decisions handed down by the UK Supreme Court and its counterparts in other countries before considering Canada’s position on the issue.

    In Thaler (Appellant) v Comptroller-General of Patents, Designs and Trade Marks,1 the UK Supreme Court ruled that “an inventor must be a person”.

    Summary of the decision

    In 2018, Dr. Stephen Thaler filed patent applications for two inventions described as having been generated by an autonomous AI system. The machine in question, DABUS, was therefore designated as the inventor in the applications. Dr. Thaler claimed that, as the owner of DABUS, he was entitled to file patent applications for inventions generated by his machine and that, as a result, he was not required to name a natural person as the inventor. Both the High Court of Justice and the Court of Appeal dismissed Dr. Thaler’s appeal from the decision of the Intellectual Property Office of the United Kingdom not to proceed with the patent applications, in particular because the designated inventor was not valid under the Patents Act 1977. The UK Supreme Court, the country’s final court of appeal, also dismissed Dr. Thaler’s appeal. In a unanimous decision, it concluded that the law is clear in that “an inventor within the meaning of the 1977 Act must be a natural person, and DABUS is not a person at all, let alone a natural person: it is a machine”.2 Although there was no doubt that DABUS had created the inventions in question, that did not mean that the courts could extend the notion of inventor, as defined by law, to include machines.

    An ongoing trend

    The UK Supreme Court is not the first to reject Dr. Thaler’s arguments. The United States,3 the European Union4 and Australia5 have adopted similar positions, concluding that only a natural person can qualify as an inventor within the meaning of the legislation applicable in their respective jurisdictions. The UK ruling is part of the Artificial Inventor Project’s cross-border attempt to have the DABUS machine—and AI in general—recognized as a generative tool capable of generating patent rights for the benefit of AI system owners. To date, only South Africa has issued a patent to Dr. Thaler naming DABUS as the inventor.6 This country is the exception that proves the rule. It should however be noted that the Companies and Intellectual Property Commission of South Africa does not review applications on their merits. As such, no reason was given for considering AI as the inventor. More recently, in February of this year, the United States Patent and Trademark Office issued guidance on AI-assisted inventions. The guidance confirms the judicial position and states in particular that “a natural person must have significantly contributed to each claim in a patent application or patent”.7

    What about Canada?

    In 2020, Dr. Thaler also filed a Canadian patent application for inventions generated by DABUS.8 The Canadian Intellectual Property Office (“CIPO”) issued a notice of non-compliance in 2021, establishing its initial position as follows:

    Because for this application the inventor is a machine and it does not appear possible for a machine to have rights under Canadian law or to transfer those rights to a human, it does not appear this application is compliant with the Patent Act and Rules.9

    However, CIPO specified that it was open to receiving the applicant’s arguments on the issue, as follows:

    Responsive to the compliance notice, the applicant may attempt to comply by submitting a statement on behalf of the Artificial Intelligence (AI) machine and identify, in said statement, himself as the legal representative of the machine.10

    To date, CIPO has issued no notice of abandonment and the application remains active. Its status in Canada is therefore unclear. It will be interesting to see whether Dr. Thaler will try to sway the Canadian courts to rule in his favour after his many failed attempts in other jurisdictions, most recently before the UK Supreme Court.

    At first glance, the Patent Act11 (the “Act”) does not prevent an AI system from being recognized as the inventor of a patentable invention. In fact, the term “inventor” is not defined in the Act. Furthermore, nowhere is it stated that an applicant must be a “person,” nor is there any indication to that effect in the provisions governing the granting of patents. The Patent Rules12 offer no clarification in that regard either. This distinction matters: the requirement implied by the clear use of the term “person” in the wording of the relevant sections of the UK legislation was a key consideration in the UK Supreme Court’s analysis in Thaler. Case law on the subject is still ambiguous. According to the Supreme Court of Canada, given that the inventor is the person who took part in conceiving the invention, the question to ask is “[W]ho is responsible for the inventive concept?”13 We note, however, that Canadian courts have concluded that a legal person—as opposed to a natural person—cannot be considered an inventor.14 The fact is that the Canadian courts have never had to rule on the specific issue of recognizing AI as an inventor, and until the courts render a decision or the government takes a stance on the matter, the issue will remain unresolved.

    Conclusion

    Given that Canadian law is not clear on whether AI can be recognized as an inventor, now would be a good time for Canadian authorities to clarify the issue. As the UK Supreme Court has suggested, the place of AI in patent law is a current societal issue, one that the legislator will ultimately have to settle.15 As such, it is only a matter of time before the Act is amended or CIPO issues a directive. Moreover, in addition to having to decide whether AI legally qualifies as an inventor, Canadian authorities will have to determine whether a person can be granted rights to an invention that was actually created by AI. The question of whether an AI system owner can hold a patent on an invention generated by their machine was raised in Thaler. Once again, unlike the UK’s patent act,16 our Patent Act does not close the door to such a possibility. For instance, Canadian legislation contains no comprehensive list of the categories of persons to whom a patent may be granted. If we were to rewrite the laws governing intellectual property, given that the main purpose of such laws is to encourage innovation and creativity, perhaps a better approach would be to allow AI system owners to hold patent rights rather than recognizing the AI as an inventor.

    Patent rights are granted on the basis of an implicit understanding: a high level of protection is provided in exchange for disclosure sufficient to enable a person skilled in the art to reproduce an invention. This ensures that society benefits from such inventions and that inventors are rewarded. Needless to say, it is difficult to argue that machines need such an incentive. Designating AI as an inventor and granting it rights in that respect is therefore at odds with the very purpose of patent protection. That said, an AI system owner who has invested time and energy in designing their system could be justified in claiming such protection for the inventions it generates. In such a case, given the current state of the law, the legislator would likely have to intervene. Would this proposed change spur innovation in the field of generative AI? We are collectively investing a huge amount of “human” resources in developing increasingly powerful AI systems. Will there come a time when we can no longer consider that human resources were involved in making AI-generated technologies? Should it come to that, giving preference to AI system owners could become counterproductive. In any event, for the time being, a sensible approach would be to emphasize the role that humans play in AI-assisted inventions, making persons the inventors rather than AI. As for inventions conceived entirely by an AI system, trade secret protection may be a more suitable solution.

    The professionals on our intellectual property team are at your disposal to assist you with patent registration and to provide you with a clearer understanding of the issues involved.

    1. [2023] UKSC 49 [Thaler].
    2. Ibid., para. 56.
    3. See the decision of the United States Court of Appeals for the Federal Circuit in Thaler v Vidal, 43 F. 4th 1207 (2022), application for appeal to the Supreme Court of the United States dismissed.
    4. See the decision of the Boards of Appeal of the European Patent Office in J 0008/20 (Designation of inventor/DABUS) (2021), request to refer questions to the Enlarged Board of Appeal denied.
    5. See the decision of the Full Court of the Federal Court of Australia in Commissioner of Patents v Thaler, [2022] FCAFC 62, application for special leave to appeal to the High Court of Australia denied.
    6. ZA 2021/03242.
    7. Federal Register: Inventorship Guidance for AI-Assisted Inventions.
    8. CA 3137161.
    9. Notice from CIPO dated February 11, 2022, in Canadian patent application 3137161.
    10. Ibid.
    11. R.S.C., 1985, c. P-4.
    12. SOR/2019-251.
    13. Apotex Inc. v. Wellcome Foundation Ltd., 2002 SCC 77 at paras. 96–97.
    14. Sarnoff Corp. v. Canada (Attorney General), 2008 FC 712, para. 9.
    15. Thaler, paras. 48–49, 79.
    16. Ibid., para. 79.

  2. The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada1 and elsewhere2 are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies are also classified within the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT. All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

    The very broad notion of “technology-based documents”

    The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible”.3 The Act lists a few examples of technology-based documents contemplated by applicable laws, including online forms, reports, photos and diagrams—even electrocardiograms! It is therefore understandable that this notion easily applies to the user interface forms used on various technological platforms.4 Moreover, technology-based documents are not limited to personal information. They may also pertain to company or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website.5 A less recent decision also recognized that the contents of electronic agendas were admissible as evidence.6

    Because of the sheer size of their algorithms, various AI technologies are offered as software as a service (SaaS) or platform as a service (PaaS). In most cases, the information entered by user companies is transmitted to supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced client relationship management (CRM) systems and electronic file analysis. It is also the case for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees. In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

    Reciprocal obligations

    The Act sets out specific obligations when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

    26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.

    During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

    This section of the Act therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the platform must inform the supplier of the privacy protection required for the information stored on it. On the other hand, the supplier is required to put in place “technological means” capable of ensuring security, integrity and confidentiality, in line with the privacy protection requested by the user. The Act does not specify what technological means must be put in place, but they must be reasonable, in line with the sensitivity of the technology-based documents involved, as seen from the perspective of someone with expertise in the field.

    Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? This question must be addressed by considering the information transmitted by the user of the platform concerning the privacy protection required for the technology-based documents. The supplier, however, must not conceal the security risks of its IT platform from the user, since doing so would violate the parties’ disclosure and good faith requirements.

    Are any individuals involved?

    These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.7 Under no circumstances should an AI algorithm suggest that a merchant refuse to enter into a contract with an individual on any such discriminatory basis.8 In addition, anyone who gathers personal information by technological means making it possible to profile certain individuals must notify them beforehand.9

    To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions on how these laws apply to your AI systems, please feel free to contact our professionals.

    1. Bill C-27, Digital Charter Implementation Act, 2022.
    2. In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
    3. Act to establish a legal framework for information technology, CQLR c C-1.1, sec. 3.
    4. Ibid., sec. 71.
    5. Tessier v. Charland, 2023 QCCS 3355.
    6. Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
    7. Charter of Human Rights and Freedoms, sec. 10.
    8. Ibid., sec. 12.
    9. Act respecting the protection of personal information in the private sector, CQLR c P-39.1, sec. 8.1.

  3. Smart product liability: issues and challenges

    Introduction

    In 2023, where do we stand in terms of liability where smart products are concerned? The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

    These “smart products,” whether intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being. For the purposes of our analysis, the term “smart products” refers to products that have:

    Self-learning capabilities, meaning that they can perform specific tasks without being under a human being’s immediate control;
    Interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings;
    Autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (optional criterion).2

    These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter “AI”).

    Applying general law rules of liability to smart products

    Although Canada prides itself on being a “world leader in the field of artificial intelligence,”3 it has yet to enact its first AI law. The regulation of smart products in Quebec is still in its infancy. To this day, apart from the regulatory framework that applies to autonomous vehicles, no legislation in force provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products. Two factors have a major impact on the liability that applies to smart products, namely transparency and apportionment of liability, and both should be considered in developing a regulatory framework for AI.4 But where does human accountability come in?

    Lack of transparency in AI and product liability

    When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the “lack of transparency” or “black box” problem associated with AI.5 The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed: manufacturers are presumed to have knowledge of any defects.8

    Manufacturers have two means of absolving themselves from liability:9

    A manufacturer may claim that a given defect is the result of superior force or of a fault on the part of the consumer or a third party; or
    A manufacturer may argue that, at the time the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge.

    This last means is specifically aimed at the risks inherent in technological innovation.10 That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, the application of which depends on the evolution of knowledge about the risks associated with the product.11 As such, the lack of transparency in AI can make it difficult to assign liability.

    Challenges in apportioning liability and human accountability

    There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor. In this context, it is reasonable to expect that a product’s smart component will give rise to many similar calls in warranty, resulting in highly complex litigation that could further complicate the apportionment of liability.

    Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.” There are two types of custodians of smart products:

    The person who has the power of control, direction and supervision over the product at the time of its use (frontend custody);
    The person who holds these powers over the algorithm that gives the product its autonomy (backend custody).13

    Either custodian could be held liable should it contribute to the harm through its own fault. As such, apportioning liability between the human user and the custodians of the AI algorithm could prove difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may be complex.

    Bill C-27: Canada’s bill on artificial intelligence

    Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act. If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade.15 Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now the Canadian government refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.”16 We have reason to believe that this type of AI is the kind that poses a high risk to users’ fundamental rights. In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.”17

    Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can provide manufacturers of AI technology with insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill’s six main principles are as follows:18

    Transparency: providing the public with information about mitigation measures, the intended use of the Systems and the “content that it is intended to generate”;
    Oversight: providing Systems over which human oversight can be exercised;
    Fairness and equity: bringing to market Systems that can limit the potential for discriminatory outcomes;
    Safety: proactively assessing Systems to prevent “reasonably foreseeable” harm;
    Accountability: putting governance measures in place to ensure compliance with legal obligations applicable to Systems;
    Robustness: ensuring that Systems operate as intended.

    To these we add the principle of risk mitigation, in view of the legal obligation to “mitigate” the risks associated with the use of Systems.19

    Conclusion

    Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries. That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country. Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they are applied when civil liability issues arise.

    1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th edition, 2020, 1-931.
    2. Tara Qian Sun and Rony Medaglia, “Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare”, Government Information Quarterly, 2019, 36(2), pp. 368–383, online; European Parliament, Civil Law Rules on Robotics, resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at europa.eu.
    3. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    4. European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020), p. 3.
    5. Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account”, Public Administration Review, 2020, online.
    6. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1726 et seq.
    7. Consumer Protection Act, CQLR, c. P-40.1, s. 38.
    8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801; see also Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
    9. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
    10. Brousseau, para. 100.
    11. Brousseau, para. 102.
    12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
    13. Céline Mangematin, Droit de la responsabilité civile et l’intelligence artificielle, https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l’épreuve de l’intelligence artificielle, p. 4.
    14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
    15. Bill C-27, summary and s. 5(1).
    16. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    17. Bill C-27, s. 39(a).
    18. AIDA – Companion document.
    19. Bill C-27, s. 8.
    20. Tortoise Media, The Global AI Index 2023, available at tortoisemedia.com.

  4. Artificial intelligence in business: managing the risks and reaping the benefits?

    At a time when some are demanding that artificial intelligence (AI) research and advanced systems development be temporarily suspended and others want to close Pandora’s box, it is appropriate to ask what effect chat technology (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support its use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence. Should a company decide to allow its use, it must be able to provide a clear framework for it, and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir—consider, for example, how quickly chatbots can provide information that is both surprising and interesting—and the undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools. Potential for error and liability The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of “hallucinations” in certain cases where the chatbot invents a reality that doesn’t exist. This comes as no surprise. The technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to “create” new content. They lack, for the time being at least, the necessary parameters to utilize this “creativity” appropriately. It is easy to imagine scenarios in which an employee would use such technology to create content that their employer would then use for commercial purposes. 
This poses a clear risk for the company if appropriate control measures are not implemented. Such content could be inaccurate in a way that misleads the company’s clients. The risk would be particularly significant if the content generated in this way were disseminated by being posted on the company’s website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology that is known to be faulty. The reliability of these tools, especially when used without proper guidance, is still one of the most troubling issues. Defamation Suppose that such misinformation concerns a well-known individual or rival company. From a legal standpoint, a company disseminating such content without putting parameters in place to ensure that proper verifications are made could be sued for defamation or misleading advertising. Thus, adopting measures to ensure that any content derived from this technology is thoroughly validated before any commercial use is a must. Many authors have suggested that the results generated by such AI tools should be used as aids to facilitate analysis and decision-making rather than to produce final results or output. Companies will likely adopt these tools and benefit from them—for competitive purposes, in particular—faster than good practices and regulations are implemented to govern them. Intellectual property issues The new chatbots have been developed as extensions to web search engines such as Google and Bing. Content generated by chatbots may be based on existing copyrighted web content, and may even reproduce substantial portions of it. This could lead to copyright infringement. Where users limit their use to internal research, the risk is limited as the law provides for a fair dealing exception in such cases. Infringement of copyright may occur if the intention is to distribute the content for commercial purposes. 
The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if those answers and results are adapted or modified in various ways before they are ultimately used.

Confidentiality and privacy issues

The terms and conditions of use for most chatbots do not appear to provide for confidential use. Trade secrets and confidential information should therefore never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with the laws and regulations applicable in the jurisdictions where they may be used. Typically, the owners of these products assume no liability in this regard.

Other issues

A few other important issues among those that can now be foreseen are worth considering. Firstly, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Secondly, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help to resolve or mitigate such conflicts, or instead exacerbate them.

Conclusion

Chat technologies have great potential, but they also raise serious legal issues.
In the short term, it seems unlikely that these tools could actually replace human judgment, which is itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company’s plans and the objectives it seeks to achieve.

  1. Lavery and the Fondation Montréal inc. launch a $15,000 grant for artificial intelligence

    Lavery and Fondation Montréal inc. are pleased to announce the creation of the Lavery AI Grant for start-ups in the field of artificial intelligence (AI). Valued at $15,000, the grant also gives winners access to the full range of services provided by Fondation Montréal inc., as well as legal coaching by Lavery tailored to the needs of young businesses in the artificial intelligence industry. The Lavery AI Grant is an annual grant and will be awarded each spring by Fondation Montréal inc. and Lavery to the start-up that has made the biggest impact in the area of artificial intelligence and that demonstrates great potential for growth.

“With each passing day, Montréal is becoming the world city for artificial intelligence, and six months ago, Lavery created an AI legal laboratory to analyze and predict the impact of AI in specific areas of the law, from intellectual property to the protection of personal information, including corporate governance and every aspect of business law. Our intention in creating this grant was to resolutely propel start-ups working in this sector and offer them legal guidance using the knowledge we developed in our laboratory,” stated Guillaume Lavoie, a partner and head of the Lavery CAPITAL group.

“Young entrepreneurs are increasingly incorporating artificial intelligence into the core of their business model. We are happy to offer, in addition to the grant, services specific to this industry, thereby strengthening the role of Fondation Montréal inc. as a super connector with the business community,” remarked Liette Lamonde, Executive Director of Fondation Montréal inc.

Applicants can submit an application starting today through the Fondation Montréal inc. website (http://www.montrealinc.ca/en/lavery-ai-grant).
