Artificial Intelligence

Overview

AI is driving transformation across every sector. Understanding and anticipating the legal implications of AI in business is crucial.

We firmly believe that AI is not just a technology, but a strategic tool that can greatly enhance business efficiency and innovation when used properly. We support our clients in implementing responsible AI solutions that comply with legal requirements.

Data protection and privacy are core areas of our expertise. As AI gains ground, the amount of data being processed is growing exponentially, and governing its use is becoming increasingly complex. We are here to help you abide by current regulations and anticipate future regulatory trends.

When it comes to intellectual property, our experts can help you protect your AI-based innovations, manage the risks associated with AI-generated inventions and develop patenting and copyright strategies suited to the digital age.

With this in mind, the Lavery Legal Lab on Artificial Intelligence (L3IA) has positioned itself as a leader in the field, bringing its expertise and forward-thinking vision to the business community.

Through its multidisciplinary approach and vigilant monitoring, L3IA analyzes emerging trends and assesses the legal issues associated with AI. Our lab takes the lead in exploring the legal challenges that AI poses with respect to privacy, intellectual property, civil liability and corporate governance. We help businesses navigate the ever-changing regulatory framework and understand how AI can transform business models.

Our commitment to legal innovation and our in-depth understanding of AI enable us to support technology companies as they evolve, ensuring that their technological advances are both revolutionary and in perfect harmony with current and future legal frameworks.

Related expertise

  • Privacy
  • Intellectual property
  • Civil liability
  • Corporate governance
  1. AI: Where Do We Go From Here?

    In March 2017 – more than 3,000 days ago – Lavery established its Artificial Intelligence Legal Lab to study and, above all, anticipate developments in artificial intelligence. Quite innovative at the time, the Lab set out to stay ahead of the legal complexities that artificial intelligence would bring for our clients. The number of developments in the field of AI since that date is astonishing. On May 19, 2025, Alexandre Sirois wondered in an article in La Presse[1] whether Montreal was still a leading hub for AI. He raised the question in light of the major AI investments made in recent years in other jurisdictions, citing, for instance, France, Germany, and Singapore.

    This timely question prompts reflection: have the massive research and development efforts and investments made in Quebec and Canada effectively translated into commercial advancements for the benefit of Canadian businesses, institutions, and customers? In other words, are we successfully transitioning from R&D in the field of AI to the production, commercialization, and industrialization of products and services in Canada that are highly distinctive, innovative, or competitive on the international scene? Does the legislative framework in Quebec and Canada sufficiently support technological advancements resulting from our AI investments, while also showcasing and maximizing the outcomes derived from the exceptional human talent present in our universities, research groups, institutions, and companies?

    As important as it is to protect privacy, personal information, data, and the public in general in the context of AI use, it is equally important to enable our entrepreneurs, start-ups, businesses, and institutions to position themselves strategically in this field – potentially the deciding factor between a prosperous society and one lagging behind others.
At the other end of the spectrum, in The Technological Republic: Hard Power, Soft Belief, and the Future of the West, Alexander C. Karp and Nicholas W. Zamiska reflect on various topics involving technology, governance, and global power dynamics. They highlight concerns about the geopolitical consequences of technological complacency, notably criticizing major technology companies (mostly based in Silicon Valley) for developing AI technology with a focus on short-term gains rather than long-term innovation. They argue that these companies prioritize trivial applications, such as social media algorithms and e-commerce platforms, which distract from critical societal challenges instead of serving national or global human interests.

From a Canadian legal perspective, this is both fascinating and thought-provoking. Amid the swift evolution of international commercial relations, what pivotal role will Canada – notably its innovative entrepreneurs, businesses, institutions, cutting-edge universities, and renowned research groups – play in shaping our future? Can they seize their rightful place and lead the charge in future developments? In this context, is regulating AI from a national perspective the strategic and logical road to follow, or could an excess of regulation stifle Canadian businesses and entrepreneurs, hindering our chances in the high-stakes AI race? The head of Google DeepMind, Demis Hassabis, recently stated that greater international cooperation on AI regulation is needed, although it would be difficult to achieve in today’s geopolitical context[2].
Obviously, there is fierce competition on the global stage to come out on top in AI, and as in all industrial revolutions where the potential for economic and social development is extraordinary, the degree of regulation and oversight can cause some countries and companies to take the lead (sometimes at the expense of the environment or human rights). Reflection on the subject, however necessary, must not lead to inaction, and proactivity with regard to artificial intelligence must not lead to negligence or carelessness. We operate in a competitive world where the rules of engagement are far from universal. Even with the best intentions, we can unintentionally embrace technological solutions that conflict with our core values and long-term interests, and once such solutions gain a foothold, they become hard to remove. Recently, various applications have drawn attention for their data-collection practices and potential links to external entities, illustrating how swiftly popular platforms can spark national debates over values, governance, and security. Even when these platforms have demonstrated links to foreign or hostile entities, they are hard to dislodge.

In May 2025, after months spent pursuing a plan to convert itself into a for-profit business, OpenAI, Inc. decided to remain under the control of a non-profit organization[3]. Headquartered in California, OpenAI, Inc. aims to develop safe and beneficial artificial general intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work[4].” This decision followed a series of criticisms and legal challenges accusing OpenAI of drifting from its original mission of developing AI for the benefit of humanity.

Bill C-27, known as the Digital Charter Implementation Act, 2022, was a legislative proposal in Canada aiming to overhaul federal privacy laws and introduce regulations for artificial intelligence (AI).
It encompassed three primary components, including the Artificial Intelligence and Data Act (AIDA), intended to regulate the development and deployment of high-impact AI systems. This Act[5] would have required organizations to implement measures to identify, assess, and mitigate risks associated with AI, including potential harms and biases. It also proposed the establishment of an AI and Data Commissioner to support enforcement and outlined criminal penalties for the misuse of AI technologies. In addition, the Act would have established prohibitions related to the possession or use of personal information obtained illegally for designing, developing, using, or making available an AI system, as well as prohibitions against making available an AI system whose use causes serious harm to individuals.

The failure to enact Bill C-27 left Canada’s federal privacy laws and AI regulations unchanged, maintaining the status quo established under PIPEDA and other general rules of civil and common law, as well as the Canadian Charter of Rights and Freedoms. This outcome has implications for Canada’s alignment with international privacy standards and its approach to AI governance. Stakeholders have expressed concerns about the adequacy of existing laws in addressing contemporary digital challenges and the potential impact on Canada’s global standing in data protection and AI innovation.

In the current international context, advancements in artificial intelligence are set to be widespread in fields such as the military, healthcare, finance, aerospace, resource utilization, and, of course, law and justice. So, with AI, what direction do we take from here? We have the choice between deciding for ourselves – by strategically aligning our investments, R&D, and the efforts of our entrepreneurs – or allowing technological advancements, largely driven abroad, to determine our path forward.

[1] On a posé la question pour vous | Montréal est-il encore une plaque tournante en IA ? | La Presse
[2] Google Deepmind CEO Says Global AI Cooperation 'Difficult' - Barron's
[3] OpenAI reverses course and says its nonprofit will continue to control its business | Financial Post
[4] The OpenAI Drama: What Is AGI And Why Should You Care?
[5] The Artificial Intelligence and Data Act (AIDA) – Companion document

  2. Provincial Budget 2025: Major Changes to the Tax Credit for the Development of E-Business (TCEB)

    In this bulletin, part of our series on the 2025 Quebec budget and corporate taxation, we discuss the TCEB. This tax credit aims to boost innovation and competitiveness in the digital marketplace by providing strategic tax assistance to businesses specializing in information and communication technologies. It was introduced to spur the growth of Quebec’s technology sectors through tax incentives granted to companies developing or integrating e-business solutions.

    Before the 2025 Quebec budget reform, the TCEB comprised a 24% refundable tax credit coupled with a 6% non-refundable tax credit. In 2024, the government began adjusting TCEB rates as part of its updated economic priorities, gradually reducing the refundable credit to 20% by 2028 and increasing the non-refundable credit to 10%. New adjustments were announced in the 2025 provincial budget to ensure that the incentives align more closely with the changing technological landscape, in particular by shifting the focus to the integration of emerging technologies such as artificial intelligence (AI) and data processing and hosting.

    The new rules provide that only activities that incorporate artificial intelligence functionalities in a significant way will be eligible for the TCEB going forward. In addition, data processing and hosting services (NAICS 51821) have been added to the list of eligible activities, reflecting the increasingly important role they play in today’s technological landscape. However, activities aimed at maintaining or upgrading information systems and technological infrastructure have been removed from the list, refocusing the program on cutting-edge technologies. Businesses engaged in inter-company outsourcing, mainly with subsidiaries of foreign companies, are particularly affected by the changes: credit rates will be reduced by half if the proportion of such outsourcing reaches 50% or more. The idea is to encourage those businesses to contribute more directly to the local economy and to technological innovation in Quebec.

    The changes will apply to tax years beginning after December 31, 2025, but companies may elect to apply them to tax years beginning after the budget presentation, provided the election is made before the end of the ninth month following the deadline for filing their tax returns. Read our first bulletin on the 2025 provincial budget, titled “Provincial Budget 2025: New Refundable Tax Credit for Research, Innovation and Commercialization (CRIC)”.
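    As a rough numerical illustration of the rate mechanics described above (a simplified sketch only, not tax advice: actual eligibility criteria, wage caps and thresholds are set out in the budget rules, and the function name and parameters here are our own), the interaction between the two credit rates and the 50% outsourcing reduction can be modelled as follows:

```python
def tceb_credit(eligible_wages: float,
                refundable_rate: float = 0.24,
                non_refundable_rate: float = 0.06,
                outsourcing_share: float = 0.0) -> dict:
    """Simplified TCEB estimate. Both rates are halved when the proportion
    of inter-company outsourcing reaches 50% or more. Default rates are the
    pre-reform 24% refundable / 6% non-refundable rates."""
    factor = 0.5 if outsourcing_share >= 0.5 else 1.0
    return {
        "refundable": eligible_wages * refundable_rate * factor,
        "non_refundable": eligible_wages * non_refundable_rate * factor,
    }

# Pre-reform rates on $100,000 of eligible wages
print(tceb_credit(100_000))

# The same business with 60% inter-company outsourcing: both rates halved
print(tceb_credit(100_000, outsourcing_share=0.6))
```

    The rate parameters can be adjusted to reflect the gradual transition (e.g., a 20% refundable and 10% non-refundable rate by 2028).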

  3. Businesses: Four tips to avoid dependency or vulnerability in your use of AI

    While the world is focused on how the tariff war is affecting various products, it may be overlooking the risks that war poses to information technology. Yet many businesses rely on artificial intelligence to provide their services, and many of these technologies are powered by large language models, such as the widely used ChatGPT. It is fair to ask whether businesses should rely on purely US-based technology service providers. There is talk of using Chinese alternatives, such as DeepSeek, but their use raises questions about data security and control over information.

    Back in 2023, Professor Teresa Scassa wrote that, when it comes to artificial intelligence, sovereignty can take many forms, such as state sovereignty, community sovereignty over data and individual sovereignty.[1] Others have even suggested that AI will force a recalibration of international interests.[2] In the current context, how can businesses protect themselves from the volatility caused by the actions of foreign governments? We believe that it is precisely by exercising a certain degree of sovereignty over their own affairs that businesses can guard against such volatility. A few tips:

    Understand intellectual property issues: The large language models underlying most artificial intelligence technologies are sometimes offered under open-source licenses, but some are distributed under restrictive commercial licenses. It is important to understand the limits imposed by the licenses under which these technologies are offered. Some language model owners reserve the right to alter or restrict the technology’s functionality without notice, whereas permissive open-source licenses allow a language model to be used without time restrictions. From a strategic standpoint, businesses should keep intellectual property rights over the data compilations they may integrate into artificial intelligence solutions.

    Consider other options: Whenever technology is used to process personal information, a privacy impact assessment is required by law before such technology is acquired, developed or redesigned.[3] Even where an assessment is not legally required, it is prudent to evaluate the risks associated with technological choices. If you are dealing with a technology that your service provider integrates, check whether there are alternatives: would you be able to migrate to one of them quickly if you faced issues? If you are dealing with a custom solution, check whether it is tied to a single large language model.

    Adopt a modular approach: When a business chooses an external service provider to supply a large language model, it is often because the provider offers a solution integrated with applications the business already uses, or because it provides an application programming interface developed specifically for the business. In making such a choice, determine whether the service provider can replace the language model or application if problems arise. If the technology is a fully integrated solution from a service provider, find out whether the provider offers sufficient guarantees that it could replace a language model that is no longer available. If it is a custom solution, find out whether the service provider can, right from the design stage, provide for the possibility of replacing one language model with another.

    Make a proportionate choice: Not all applications require the most powerful language models. If your technological needs are middle-of-the-road, you can consider more possibilities, including solutions hosted on local servers that use open-source language models. As a bonus, choosing a language model proportionate to your needs helps reduce the environmental footprint of these technologies in terms of energy consumption.
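    The modular approach above can be sketched in code. The following is an illustrative sketch only: the class and vendor names are hypothetical, and a real integration would call the actual provider APIs where the placeholder strings appear. The point is that applications depend on a small abstract interface, so swapping one language-model backend for another is a one-line change rather than a rewrite.

```python
from abc import ABC, abstractmethod


class LanguageModelBackend(ABC):
    """Minimal interface the business's applications code against,
    instead of depending directly on one vendor's API."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class HostedVendorBackend(LanguageModelBackend):
    """Hypothetical wrapper around an external provider's hosted model."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's API here.
        return f"[hosted vendor] {prompt}"


class LocalOpenSourceBackend(LanguageModelBackend):
    """Hypothetical wrapper around an open-source model on local servers."""

    def complete(self, prompt: str) -> str:
        # A real implementation would query the locally hosted model here.
        return f"[local model] {prompt}"


class Assistant:
    """Business application: the backend is injected, so it can be
    swapped without touching the application logic."""

    def __init__(self, backend: LanguageModelBackend):
        self.backend = backend

    def answer(self, question: str) -> str:
        return self.backend.complete(question)


# Migrating providers is a one-line change:
assistant = Assistant(HostedVendorBackend())
assistant = Assistant(LocalOpenSourceBackend())
```

    Designing this seam in from the start is what preserves the option, discussed above, of replacing a model that becomes unavailable or whose license terms change.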
These tips each require different steps to be put into practice. Remember to take legal considerations, in addition to technological constraints, into account. Licenses, intellectual property, privacy impact assessments and limited liability clauses imposed by certain service providers all need to be considered before making any changes. This isn’t just about being prudent: it’s about taking advantage of the opportunity our businesses have to show that they are technologically innovative and to exercise greater control over their futures.

[1] Scassa, T. (2023). “Sovereignty and the governance of artificial intelligence.” 71 UCLA L. Rev. Disc. 214.
[2] Xu, W., Wang, S., & Zuo, X. (2025). “Whose victory? A perspective on shifts in US-China cross-border data flow rules in the AI era.” The Pacific Review, 1–27.
[3] See in particular the Act respecting the protection of personal information in the private sector, CQLR c. P-39.1, s. 3.3.

  4. Can artificial intelligence be designated as an inventor in a patent application?

    Artificial intelligence (“AI”) is becoming increasingly sophisticated, and the fact that this human invention can now generate its own inventions opens the door to new ways of conceptualizing the notion of “inventor” in patent law. In a recent ruling, however, the Supreme Court of the United Kingdom (the “UK Supreme Court”) found that an artificial intelligence system cannot be the author of an invention within the meaning of the applicable regulations under which patents are granted. This position is consistent with that of several courts around the world that have already ruled on the issue. But what of Canada, where the courts have yet to address the matter? In this bulletin, we will look at the decisions handed down by the UK Supreme Court and its counterparts in other countries before considering Canada’s position on the issue. In Thaler (Appellant) v Comptroller-General of Patents, Designs and Trade Marks,[1] the UK Supreme Court ruled that “an inventor must be a person”.

    Summary of the decision

    In 2018, Dr. Stephen Thaler filed patent applications for two inventions described as having been generated by an autonomous AI system. The machine in question, DABUS, was therefore designated as the inventor in the applications. Dr. Thaler claimed that, as the owner of DABUS, he was entitled to file patent applications for inventions generated by his machine and that, as a result, he was not required to name a natural person as the inventor. Both the High Court of Justice and the Court of Appeal dismissed Dr. Thaler’s appeal from the decision of the Intellectual Property Office of the United Kingdom not to proceed with the patent applications, in particular because the designated inventor was not valid under the Patents Act 1977. The UK Supreme Court, the country’s final court of appeal, also dismissed Dr. Thaler’s appeal.
In a unanimous decision, it concluded that the law is clear in that “an inventor within the meaning of the 1977 Act must be a natural person, and DABUS is not a person at all, let alone a natural person: it is a machine”.[2] Although there was no doubt that DABUS had created the inventions in question, that did not mean that the courts could extend the notion of inventor, as defined by law, to include machines.

An ongoing trend

The UK Supreme Court is not the first to reject Dr. Thaler’s arguments. The United States,[3] the European Union[4] and Australia[5] have adopted similar positions, concluding that only a natural person can qualify as an inventor within the meaning of the legislation applicable in their respective jurisdictions. The UK ruling is part of the Artificial Inventor Project’s cross-border attempt to have the DABUS machine – and AI in general – recognized as a generative tool capable of generating patent rights for the benefit of AI system owners. To date, only South Africa has issued a patent to Dr. Thaler naming DABUS as the inventor.[6] This country is the exception that proves the rule. It should be noted, however, that the Companies and Intellectual Property Commission of South Africa does not review applications on their merits; as such, no reason was given for considering AI to be the inventor. More recently, in February 2024, the United States Patent and Trademark Office issued guidance on AI-assisted inventions. The guidance confirms the judicial position and states in particular that “a natural person must have significantly contributed to each claim in a patent application or patent”.[7]

What about Canada?

In 2020, Dr. Thaler also filed a Canadian patent application for inventions generated by DABUS.[8] The Canadian Intellectual Property Office (“CIPO”) issued a notice of non-compliance in 2021, establishing its initial position as follows:

“Because for this application the inventor is a machine and it does not appear possible for a machine to have rights under Canadian law or to transfer those rights to a human, it does not appear this application is compliant with the Patent Act and Rules.”[9]

However, CIPO specified that it was open to receiving the applicant’s arguments on the issue, as follows:

“Responsive to the compliance notice, the applicant may attempt to comply by submitting a statement on behalf of the Artificial Intelligence (AI) machine and identify, in said statement, himself as the legal representative of the machine.”[10]

To date, CIPO has issued no notice of abandonment and the application remains active. Its status in Canada is therefore unclear. It will be interesting to see whether Dr. Thaler will try to sway the Canadian courts to rule in his favour after his many failed attempts in other jurisdictions, most recently before the UK Supreme Court. At first glance, the Patent Act[11] (the “Act”) does not prevent an AI system from being recognized as the inventor of a patentable invention. In fact, the term “inventor” is not defined in the Act. Furthermore, nowhere is it stated that an applicant must be a “person,” nor is there any indication to that effect in the provisions governing the granting of patents. The Patent Rules[12] offer no clarification in that regard either. By contrast, the clear use of the term “person” in the wording of the relevant sections of the UK legislation was a key consideration in the UK Supreme Court’s analysis in Thaler. Case law on the subject is still ambiguous.
According to the Supreme Court of Canada, given that the inventor is the person who took part in conceiving the invention, the question to ask is “[W]ho is responsible for the inventive concept?”[13] That said, we note that the courts have concluded that a legal person – as opposed to a natural person – cannot be considered an inventor.[14] The fact is that the Canadian courts have never had to rule on the specific issue of recognizing AI as an inventor, and until the courts render a decision or the government takes a stance on the matter, the issue will remain unresolved.

Conclusion

Given that Canadian law is not clear on whether AI can be recognized as an inventor, now would be a good time for Canadian authorities to clarify the issue. As the UK Supreme Court has suggested, the place of AI in patent law is a pressing societal issue, one that the legislator will ultimately have to settle.[15] As such, it is only a matter of time before the Act is amended or CIPO issues a directive. Moreover, in addition to deciding whether AI legally qualifies as an inventor, Canadian authorities will have to determine whether a person can be granted rights to an invention that was actually created by AI. The question of whether an AI system owner can hold a patent on an invention generated by their machine was raised in Thaler. Once again, unlike the UK’s patent act,[16] our Patent Act does not close the door to such a possibility: Canadian legislation contains no comprehensive list of the categories of persons to whom a patent may be granted, for instance. If we were to rewrite the laws governing intellectual property, given that the main purpose of such laws is to encourage innovation and creativity, perhaps a better approach would be to allow AI system owners to hold patent rights rather than recognizing the AI as an inventor.
Patent rights are granted on the basis of an implicit bargain: a high level of protection is provided in exchange for disclosure sufficient to enable a person skilled in the art to reproduce the invention. This ensures that society benefits from such inventions and that inventors are rewarded. Needless to say, it is difficult to argue that machines need such an incentive. Designating AI as an inventor and granting it rights in that respect is therefore at odds with the very purpose of patent protection. That said, an AI system owner who has invested time and energy in designing their system could be justified in claiming such protection for the inventions it generates. In such a case, given the current state of the law, the legislator would likely have to intervene.

Would this proposed change spur innovation in the field of generative AI? We are collectively investing a huge amount of “human” resources in developing increasingly powerful AI systems. Will there come a time when we can no longer say that human resources were involved in creating AI-generated technologies? Should it come to that, giving preference to AI system owners could become counterproductive. In any event, for the time being, a sensible approach would be to emphasize the role that humans play in AI-assisted inventions, naming persons as the inventors rather than AI. As for inventions conceived entirely by an AI system, trade secret protection may be a more suitable solution. The professionals on our intellectual property team are at your disposal to assist you with patent registration and to provide you with a clearer understanding of the issues involved.

[1] [2023] UKSC 49 [Thaler].
[2] Ibid., para. 56.
[3] See the decision of the United States Court of Appeals for the Federal Circuit in Thaler v Vidal, 43 F. 4th 1207 (2022), application for appeal to the Supreme Court of the United States dismissed.
[4] See the decision of the Boards of Appeal of the European Patent Office in J 0008/20 (Designation of inventor/DABUS) (2021), request to refer questions to the Enlarged Board of Appeal denied.
[5] See the decision of the Full Court of the Federal Court of Australia in Commissioner of Patents v Thaler, [2022] FCAFC 62, application for special leave to appeal to the High Court of Australia denied.
[6] ZA 2021/03242.
[7] Federal Register: Inventorship Guidance for AI-Assisted Inventions.
[8] CA 3137161.
[9] Notice from CIPO dated February 11, 2022, in Canadian patent application 3137161.
[10] Ibid.
[11] R.S.C., 1985, c. P-4.
[12] SOR/2019-251.
[13] Apotex Inc. v. Wellcome Foundation Ltd., 2002 SCC 77, paras. 96–97.
[14] Sarnoff Corp. v. Canada (Attorney General), 2008 FC 712, para. 9.
[15] Thaler, paras. 48–49, 79.
[16] Ibid., para. 79.
