Artificial Intelligence

Overview

The rapid evolution of artificial intelligence (AI) is fundamentally transforming all sectors of the economy, and the legal field is no exception. As organizations increasingly integrate AI into their operations, the need for sound legal frameworks, ethical guidelines and strategic risk management has never been greater.

A strategic, multidisciplinary approach

Lavery’s AI expertise rests on a multidisciplinary approach that bridges law, technology and business. We recognize that AI is not simply a technological innovation, but a strategic tool that can significantly enhance business efficiency and innovation when properly governed. The Lavery team helps businesses implement responsible AI solutions that comply with current legal requirements while anticipating future regulatory trends.

Our key areas of expertise:

  • Drafting of licence agreements and commercial agreements
  • Data protection and privacy
  • Intellectual property (IP) management
  • Corporate governance in an AI-driven world
  • Regulatory compliance and strategic advice

Lavery at the forefront of data protection and privacy practices

With AI being used to process increasing amounts of data, Lavery has made data protection and privacy a pillar of its AI legal services. The firm guides clients as they navigate the complexities of local and international data protection laws—such as Quebec’s Act respecting the protection of personal information in the private sector—and helps them anticipate future regulatory developments.

Lavery’s approach includes:

  • Conducting privacy impact assessments for AI projects
  • Providing advice on cross-border data flows and associated risks
  • Drafting and negotiating data processing agreements
  • Ensuring compliance with evolving privacy frameworks, including the proposed federal Artificial Intelligence and Data Act (AIDA)

Managing intellectual property in the age of AI

Your intellectual property protected

AI-powered innovation raises new questions about the ownership, protection and commercialization of intellectual property. Our experts can assist you in:

  • Protecting AI-generated innovations through patents, copyright and trade secrets
  • Managing the risks associated with AI-generated content
  • Developing IP strategies adapted to digital and data-driven business models

Governance and risk management

Our professionals can support you in the ethical and secure integration of AI tools into business environments with a focus on governance best practices. We can help you:

  • Implement internal policies and governance frameworks
  • Closely review licenses and terms of use for AI tools
  • Conduct ongoing risk assessments and adapt to regulatory changes
  • Address AI dependency and mitigate risk with your service providers

We advise businesses on how to avoid over-reliance on a single AI provider, especially when that provider is based outside Canada. It is important to assess alternative solutions, understand data sovereignty issues and maintain strategic control over technological assets.

The Lavery Legal Lab on Artificial Intelligence (L3IA)

L3IA, one of the first initiatives of its kind in Canada, was set up in March 2017 to anticipate and address the legal complexities arising from the integration of AI into business practices. Our lab’s mission is to stay on top of developments by continually monitoring emerging trends, assessing legal challenges and providing forward-thinking advice to clients.

What L3IA does:

  • Anticipate AI-related legal issues and develop proactive strategies for our clients
  • Monitor and ensure compliance with changing provincial, national and international laws and regulations
  • Develop and test new technological legal tools, including AI-based solutions to drive organizational efficiency for the firm and our clients

Concrete achievements and innovations

Lavery has developed an in-house AI solution inspired by OpenAI’s ChatGPT technology, marking a major step forward in the firm’s AI strategy. Unlike generic AI tools, Lavery’s solution is tailored to the specific needs of the firm’s legal practice and to the regulatory requirements that apply to it. The tool is trained using relevant legal content and operates within a framework governed by Lavery’s internal policies, guaranteeing both security and compliance.

Key features:

  • Secure, controlled environment for legal queries
  • Proprietary legal knowledge base
  • Compliance with data privacy and IP regulations
  • Support for internal decision-making and services to clients

This innovative tool enhances the efficiency of legal professionals working at Lavery, while demonstrating the firm’s commitment to responsible and ethical AI integration.


At Lavery, we combine foresight, innovation and a deep understanding of legal and technological dynamics to create a strong foundation of AI expertise. Our Legal Lab on Artificial Intelligence (L3IA) serves as a catalyst for AI research and development, driving practical applications of AI in the legal field. We provide our clients with robust data protection strategies, guidance on the ethical integration of AI, and a comprehensive range of services designed to leverage the benefits of AI while managing its risks.

  1. AI in the Courtroom: A Call to Order in Specter Aviation

    Eight quotes hallucinated by AI cost a litigant $5,000 for a substantial breach (art. 342 C.C.P.) in the Specter Aviation case.1 While AI can improve access to justice, unverified AI use can lead to sanctions, adding to the risks unrepresented parties face. Quebec courts advocate openness to AI, but with proper controls: AI is only useful when verified, traceable and supported by official sources.

    The cost of hallucinations

    On October 1, 2025, the Superior Court rendered judgment on a contestation of an application for homologation of an arbitral award rendered by the Paris International Arbitration Chamber (PIAC) on December 9, 2021. Under articles 645 and 646 C.C.P., the Court’s role in such a situation is limited to verifying whether one of the limited grounds for refusal set out in article 646 has been demonstrated. The applicant’s grounds (ultra vires, procedural irregularities, infringement of fundamental rights, public order, abuse of power) were deemed inappropriate and unconvincing. Although the decision is interesting in this respect, it is even more interesting in another respect altogether.

    In his contestation, the unrepresented defendant relied on all the support he could get from artificial intelligence. In response, the plaintiffs filed a table listing eight occurrences of non-existent citations, decisions that were never rendered, irrelevant references and inconsistent conclusions. Questioned at the hearing, the defendant did not deny that some references might have been hallucinated.2

    In his judgment, Justice Morin approached the issue as a matter of principle. On the one hand, access to justice requires a level playing field and the orderly, proportionate management of proceedings. On the other, even though unrepresented parties are given some flexibility, forgery is never allowed: “Fabrication or shams cannot be tolerated to facilitate access to justice.”3 The Court therefore characterized the presentation of fictitious case law or fictitious quotes from authorities, whether intentional or merely negligent, as a serious breach of the solemnity attached to the filing of proceedings. It invoked article 342 C.C.P. to order the defendant to pay $5,000, both to deter such conduct and to protect the integrity of the process.4

    Art. 342 C.C.P.: The power to punish substantial breaches

    Article 342 C.C.P. stems from the reform adopted in 2014, in force since 2016. Because this provision authorizes the court to impose a fair and reasonable sanction5 for significant breaches in the conduct of proceedings, it can be said to be punitive and dissuasive in nature. This power is distinct from that granted by articles 51 to 54 C.C.P., which govern abuse of procedure, and is an exception to the general regime of legal costs,6 under which extrajudicial fees can be awarded when warranted.7

    A “substantial breach” is more than a trivial issue: it must reach a certain degree of seriousness, although it need not involve bad faith. It entails additional time and expense and contravenes the guiding principles of articles 18 to 20 C.C.P. (proportionality, control and cooperation).8 Nearly ten years on, the case law illustrates a range of uses:

    • $100,000 for the late filing of applications or amendments resulting in postponements and unnecessary work;9
    • $91,770.10 for a continuance on the morning of trial for failure to ensure the presence of a key witness;10
    • $10,000 for repeated delays, the tardy amendment of proceedings and non-compliance with case management orders;11
    • $3,500 for failing or delaying to disclose evidence;12
    • $1,000 for filing an undisclosed statement in the middle of a hearing to take the opposing party by surprise.13

    Sanctions and uses of AI in Canada and elsewhere

    Although the use of article 342 to sanction the unverified use of technological tools appears to be a first in Quebec, a number of Canadian judgments have already imposed penalties for similar issues. In particular, courts and tribunals have awarded:

    • $200 in costs against an unrepresented party who filed pleadings containing partially non-existent quotes, to compensate for the time spent making verifications;14
    • $100 in Federal Court, payable personally by the lawyer, for quoting non-existent AI-generated decisions without disclosing the use of AI, further to the Kuehne + Nagel test;15
    • $1,000 before the Civil Resolution Tribunal of British Columbia to compensate for time needlessly spent dealing with clearly irrelevant, AI-generated arguments and documents in a case between two unrepresented parties;16
    • $500, along with the expungement of a filing containing AI-hallucinated authorities, for non-compliance with the Federal Court’s AI policy.17

    The $5,000 sanction ordered in this case was a deterrent; it is distinct from these other, essentially compensatory, amounts, but in line with an international trend observed in the following cases:

    • On June 22, 2023, in the United States (S.D.N.Y.), a Rule 11 penalty of USD 5,000 was imposed, along with non-pecuniary measures (notice to the client and to the judges falsely cited), in Mata v. Avianca, Inc.18
    • On September 23, 2025, in Italy, the Tribunale di Latina awarded €2,000 ex art. 96, co. 3 c.p.c. (€1,000 to the opposing party and €1,000 to the Cassa delle ammende), plus €5,391 in legal costs (spese di lite).19
    • On August 15, 2025, in Australia, personal costs of AUD 8,371.30 were ordered against the plaintiff’s lawyer, with a referral to the Legal Practice Board of Western Australia, following fictitious citations generated by AI (Claude, Copilot).20
    • On October 22, 2025, in the United States (E.D. Oklahoma), monetary penalties totalling $6,000 were imposed on attorneys personally; they were also required to repay fees of $23,495.90, and some of their pleadings were stricken from the record with a requirement to refile verified pleadings.21

    In addition to monetary penalties, Quebec courts have already identified a number of problematic situations related to the use of AI, such as:

    • The Régie du bâtiment du Québec had to examine a 191-page brief containing numerous non-existent references. The author ultimately admitted to having used ChatGPT to formulate them. The commissioner underscored the resulting work overload and the need to regulate the use of AI before the RBQ.22
    • In a commercial case, the Court suspected hallucinated references, set them aside and ruled on the credible evidence.23
    • At the Administrative Housing Tribunal (AHT), a lessor who had relied on ChatGPT translations of the C.C.Q. that distorted its meaning saw his application dismissed. His conduct was nonetheless not found to be abusive, as his good faith was recognized.24
    • Two related AHT decisions noted that an agreement (a “Lease Transfer and Co-Tenancy Agreement”) had been drafted with the help of ChatGPT, but the AHT simply analyzed it as it usually does (text, context, C.C.Q. rules) and concluded that there had been a deferred lease assignment, drawing no particular consequence from the use of AI.25
    • At the Court of Québec, a litigant attributed a self-incriminating formulation in his application to ChatGPT; the Court dismissed his explanation.26
    • In an application to have evidence set aside, the applicant claimed that, after researching his duty to cooperate with his employer on Google and ChatGPT just before the interview, he believed he was obliged to respond to investigators. The Court noted that he had been clearly informed of his right to remain silent and of the fact that he could leave or consult a lawyer; it therefore concluded that there was no real constraint and allowed the statement.27

    Openness to AI with proper controls, certainly, but with a caveat

    These are just a few entries on a long and growing list of cases across Canada and around the world. Despite this trend, the decision in Specter Aviation avoids stigmatizing AI. The Court instead insisted on remaining open to AI, reminding us that a technology that facilitates access to justice must be welcomed and given proper controls, not proscribed.28

    Openness to AI nonetheless comes with clear requirements, such as those set out in the notice published by the Superior Court on October 24, 2023. In that notice, the Superior Court called for caution, the use of reliable sources (court websites, recognized commercial publishers, established public services) and “meaningful human control” of generated content.29

    The practice guides issued by various courts all point in the same direction: govern the use of AI without banning it. The Federal Court requires a declaration when a filed document contains AI-generated content and insists on “human in the loop” verification.30 The Court of Appeal of Quebec,31 the Court of Québec32 and the municipal courts33 have issued similar warnings: the need for caution, authoritative sources, hyperlinks to recognized databases and the full responsibility of the author. Nowhere is AI banned; every court makes its use conditional on verification and traceability.

    Some clues suggest that the judiciary itself is using artificial intelligence. In the Small Claims Division, on at least two occasions, a judge attached English translations generated by ChatGPT as a courtesy, specifying that they had no legal value and that the French version prevailed.34 In family law, a Superior Court decision clearly used a Statistics Canada link identified by an AI tool (the URL includes “utm_source=chatgpt.com”), but the reasoning remains rooted in primary sources and case law: the AI was used as a search tool, not to provide a legal basis.35

    A decision handed down on September 3, 2025, by the Commission d’accès à l’information is a particularly good illustration of openness with proper controls. In Breton c. MSSS,36 the tribunal allowed exhibits containing content generated by Gemini and Copilot because they were corroborated by relevant primary sources already filed (Journal des débats, newspaper excerpts, official websites). Despite art. 2857 C.C.Q. and the flexibility of administrative law, it reiterated that AI content is admissible if, and only if, it is verified, traceable and supported by official sources.

    AI that aims to please us and that we want to believe

    Two constants emerge from the sanctioned cases: excessive confidence in the AI’s reliability and an underestimated risk of hallucination. In the United States, in Mata v. Avianca,37 the lawyers claimed they believed the tool could not invent cases. In Canada, in Hussein v. Canada,38 the plaintiff’s lawyer claimed to have relied on an AI service in good faith, without fully realizing that references had to be checked. In Australia, in JNE24 v. Minister for Immigration and Citizenship,39 the court reported over-reliance on tools (Claude, Copilot) and insufficient verification. In Quebec, the AHT found that a lessor had been misled by his use of artificial intelligence,40 while at the Administrative Labour Tribunal (ALT), ChatGPT-generated answers deemed to be approximately 92% accurate were used.41

    These examples point to a generalized trust bias that is particularly risky for those who represent themselves: AI is perceived as a reliable way to gain speed, when in reality it requires greater human control. Large language models are optimized to produce plausible and engaging responses; without proper controls, they tend to confirm user expectations rather than point out their own limitations.42 A notice published last April by OpenAI concerning an update that made its model “overly supportive” testifies to the underlying difficulty of striking the right balance between engagement and accuracy.43 This makes it easier to understand how a quarrelsome litigant may have persuaded himself, based on an AI response, that he was entitled to personally sue a judge for judicial acts he perceived as biased.44 Models trained to “please” or to keep users engaged can generate responses that, in the absence of legal contextualization, amplify erroneous or imprudent interpretations.

    Although AI service providers generally seek to limit their liability for the consequences of incorrect answers, the scope of such clauses is necessarily limited. When ChatGPT, Claude and Gemini apply legal principles to facts reported by a user, does the entity offering the service not expose itself to the rules of public order that reserve such acts exclusively to lawyers, rules that cannot be waived by a simple disclaimer? In Standing Buffalo Dakota First Nation v. Maurice Law, the Saskatchewan Court of Appeal reiterated that the prohibition on the practice of law applies to any “person” (including a corporation) and expressly contemplated that technological mediation would not change the analysis of what constitutes a prohibited act.45 In Quebec, this principle is enshrined in section 128 of the Act respecting the Barreau du Québec and in the Professional Code: general legal information is permitted, but individualized advice can only be provided by a lawyer.

    While some aberrant situations have involved lawyers, unrepresented claimants or plaintiffs appear to be the most exposed to the effects of AI. Should we focus on educating users first, or restrict certain uses? The tension between access to justice and protecting the public is plain to see.

    Conclusion

    The Specter Aviation ruling confirms that artificial intelligence has its place in court, provided that rigorous controls are applied: AI is useful when verified, but sanctionable when not.
    While AI offers unprecedented possibilities in terms of access to justice, reconciling it with the protection of the public remains a major challenge. Despite this clear signal, containing over-reliance on tools designed to be engaging and supportive, and that claim to have an answer to everything, will remain a challenge for years to come.

    1. Specter Aviation Limited c. Laprade, 2025 QCCS 3521, online: https://canlii.ca/t/kfp2c
    2. Id., paras. [35] and [53].
    3. Id., para. [43].
    4. Id., para. [60].
    5. Chicoine c. Vessia, 2023 QCCA 582, https://canlii.ca/t/jx19q, para. [20]; Gagnon c. Audi Canada inc., 2018 QCCS 3128, https://canlii.ca/t/ht3cb, paras. [43]–[48]; Layla Jet Ltd. c. Acass Canada Ltd., 2020 QCCS 667, https://canlii.ca/t/j5nt8, paras. [19]–[26].
    6. Code of Civil Procedure, CQLR, c. C-25.01, arts. 339–341.
    7. Chicoine c. Vessia, supra note 5, paras. [20]–[21]; Constellation Brands US Operations c. Société de vin internationale ltée, 2019 QCCS 3610, https://canlii.ca/t/j251v, paras. [47]–[52]; Webb Electronics Inc. c. RRF Industries Inc., 2023 QCCS 3716, https://canlii.ca/t/k0fq8, paras. [39]–[48].
    8. 9401-0428 Québec inc. c. 9414-8442 Québec inc., 2025 QCCA 1030, https://canlii.ca/t/kdz4h, paras. [82]–[87]; Biron c. 150 Marchand Holdings inc., 2020 QCCA 1537, https://canlii.ca/t/jbnj2, para. [100]; Groupe manufacturier d’ascenseurs Global Tardif inc. c. Société de transport de Montréal, 2023 QCCS 1403, https://canlii.ca/t/jx042, para. [26].
    9. Groupe manufacturier d’ascenseurs Global Tardif inc. c. Société de transport de Montréal, supra note 8, paras. [58]–[61] ($100,000 to Global Tardif, $60,000 to Intact Assurance, $40,000 to Fujitec, all as legal costs awarded under art. 342 C.C.P.); see also $20,000 for an application for an amendment made on the 6th day of a trial, forcing a continuance: Paradis c. Dupras Ledoux inc., 2024 QCCS 3266, https://canlii.ca/t/k6q26, paras. [154]–[171]; Webb Electronics Inc. c. RRF Industries Inc., supra note 7.
    10. Layla Jet Ltd. c. Acass Canada Ltd., supra note 5, paras. [23]–[28].
    11. Électro-peintres du Québec inc. c. 2744-3563 Québec inc., 2023 QCCS 1819, https://canlii.ca/t/jxfn0, paras. [18]–[22], [35]–[38]; see also Constant c. Larouche, 2020 QCCS 2963, https://canlii.ca/t/j9rwt, paras. [37]–[40] (repeated delays in adhering to undertakings despite an order; sanction: $5,000).
    12. Constellation Brands US Operations c. Société de vin internationale ltée, supra note 7, paras. [39]–[43], [47]–[52]; see also AE Services et technologies inc. c. Foraction inc. (Ville de Sainte-Catherine), 2024 QCCS 242, https://canlii.ca/t/k2jvm (repeated delays in transmitting promised documentation and breach of an undertaking before the court; compensation of $3,000).
    13. Gagnon c. SkiBromont.com, 2024 QCCS 3246, https://canlii.ca/t/k6mzz, paras. [29]–[37], [41].
    14. J.R.V. v. N.L.V., 2025 BCSC 1137, https://canlii.ca/t/kcsnc, paras. [51]–[55].
    15. Hussein v. Canada (IRCC), 2025 FC 1138, https://canlii.ca/t/kctz0, paras. [15]–[17], applying Kuehne + Nagel Inc. v. Harman Inc., 2021 FC 26, https://canlii.ca/t/jd4j6, paras. [52]–[55] (reiterating the principles of Young v. Young and the two-step test: (1) conduct causing costs to be incurred; (2) a discretionary decision to impose costs personally).
    16. AQ v. BW, 2025 BCCRT 907, https://canlii.ca/t/kd08x, paras. [15]–[16], [38]–[40].
    17. Lloyd’s Register Canada Ltd. v. Choi, 2025 FC 1233, https://canlii.ca/t/kd4w2
    18. Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. June 22, 2023) (sanctions order), online: Justia https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/
    19. Tribunale di Latina (giud. Valentina Avarello), judgment of September 23, 2025, “Atto redatto con intelligenza artificiale a stampone, con scarsa qualità e mancanza di pertinenza: sì alla condanna ex art. 96 c.p.c.,” La Nuova Procedura Civile (September 29, 2025), online: https://www.lanuovaproceduracivile.com/atto-redatto-con-intelligenza-artificiale-a-stampone-con-scarsa-qualita-e-mancanza-di-pertinenza-si-alla-condanna-ex-art-96-c-p-c-dice-tribunale-di-latina/
    20. Australia, Federal Circuit and Family Court of Australia (Division 2), JNE24 v. Minister for Immigration and Citizenship, [2025] FedCFamC2G 1314 (August 15, 2025), Gerrard J, online: AustLII https://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/FedCFamC2G/2025/1314.html
    21. United States, District Court for the Eastern District of Oklahoma, Mattox v. Product Innovations Research, LLC d/b/a Sunevolutions; Cosway Company, Inc.; and John Does 1–3, No. 6:24-cv-235-JAR, Order (October 22, 2025), online: https://websitedc.s3.amazonaws.com/documents/Mattox_v._Product_Innovations_Research_USA_22_October_2025.pdf
    22. Régie du bâtiment du Québec c. 9308-2469 Québec inc. (Éco résidentiel), 2025 QCRBQ 86, online: https://canlii.ca/t/kfdfg, paras. [159]–[167].
    23. Blinds to Go Inc. c. Blachley, 2025 QCCS 3190, online: https://canlii.ca/t/kf963, para. [57] and n. 22.
    24. Lozano González c. Roberge, 2025 QCTAL 15786, online: https://canlii.ca/t/kc2w9, paras. [7], [17]–[19].
    25. Marna c. BKS Properties Ltd., 2025 QCTAL 34103, online: https://canlii.ca/t/kfq8n, paras. [18], [21]–[25]; Campbell c. Marna, 2025 QCTAL 34105, online: https://canlii.ca/t/kfq81, paras. [18], [21]–[25].
    26. Morrissette c. R., 2023 QCCQ 12018, online: https://canlii.ca/t/k3x5j, para. [43].
    27. Léonard c. Agence du revenu du Québec, 2025 QCCQ 2599, online: https://canlii.ca/t/kcxsb, paras. [58]–[64].
    28. Specter Aviation Limited c. Laprade, supra note 1, para. [46].
    29. Superior Court of Quebec, “Notice to Profession and Public – Integrity of Court Submissions When Using Large Language Models,” October 24, 2023, online: https://coursuperieureduquebec.ca/fileadmin/cour-superieure/Districts_judiciaires/Division_Montreal/Communiques/Avis_a_la_Communite_juridique-Utilisation_intelligence_artificielle_EN_October_24_2023.pdf
    30. Federal Court, “Notice to the Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings,” December 20, 2023, online: https://www.fct-cf.ca/Content/assets/pdf/base/2023-12-20-notice-use-of-ai-in-court-proceedings.pdf; Federal Court, “Update – The Use of Artificial Intelligence in Court Proceedings,” May 7, 2024, online: https://www.fct-cf.ca/Content/assets/pdf/base/FC-Updated-AI-Notice-EN.pdf
    31. Court of Appeal of Quebec, “Notice Respecting the Use of Artificial Intelligence Before the Court of Appeal,” August 8, 2024, online: https://courdappelduquebec.ca/fileadmin/dossiers_civils/avis_et_formulaires/eng/avis_utilisation_intelligence_articielle_ENG.pdf
    32. Court of Québec, “Notice to the legal community and the public – Maintaining the integrity of submissions before the Court when using large language models,” January 26, 2024, online: https://courduquebec.ca/fileadmin/cour-du-quebec/centre-de-documentation/toutes-les-chambres/en/NoticeIntegriteObservationsCQ_LLM_en.pdf
    33. Cours municipales du Québec, “Avis à la profession et au public – Maintenir l’intégrité des observations à la Cour lors de l’utilisation de grands modèles de langage,” December 18, 2023, online: https://coursmunicipales.ca/fileadmin/cours_municipales_du_quebec/pdf/Document_d_information/CoursMun_AvisIntegriteObservations.pdf
    34. Bricault c. Rize Bikes Inc., 2024 QCCQ 609, online: https://canlii.ca/t/k3lcd, n. 1; Brett c. 9187-7654 Québec inc., 2023 QCCQ 8520, online: https://canlii.ca/t/k1dpr, n. 1.
    35. Droit de la famille – 251297, 2025 QCCS 3187, online: https://canlii.ca/t/kf96f, paras. [138]–[141].
    36. Breton c. Ministère de la Santé et des Services sociaux, 2025 QCCAI 280, online: https://canlii.ca/t/kftlz, paras. [24]–[26], [31].
    37. Mata v. Avianca, Inc., supra note 18.
    38. Hussein v. Canada (IRCC), supra note 15, paras. [15]–[17].
    39. JNE24 v. Minister for Immigration and Citizenship, supra note 20.
    40. Lozano González c. Roberge, supra note 24, para. [17].
    41. Pâtisseries Jessica inc. et Chen, 2024 QCTAT 1519, online: https://canlii.ca/t/k4f96, paras. [34]–[36].
    42. See Emilio Ferrara, “Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models” (2023), SSRN 4627814, online: https://doi.org/10.2139/ssrn.4627814; Isabel O. Gallegos et al., “Bias and Fairness in Large Language Models: A Survey” (2024) 50:3 Computational Linguistics 1097, doi: 10.1162/coli_a_00524.
    43. See OpenAI, “Sycophancy in GPT-4o: what happened and what we’re doing about it,” April 29, 2025, online: https://openai.com/research/sycophancy-in-gpt-4o; see also “Expanding on what we missed with sycophancy,” May 2, 2025, online: https://openai.com/index/expanding-on-sycophancy/
    44. Verreault c. Gagnon, 2023 QCCS 4922, online: https://canlii.ca/t/k243v, paras. [16], [28].
    45. Standing Buffalo Dakota First Nation v. Maurice Law Barristers and Solicitors (Ron S. Maurice Professional Corporation), 2024 SKCA 14, online: https://canlii.ca/t/k2wn9, paras. [37]–[40], [88]–[103].

  2. Export controls: implications in a world of knowledge sharing

    Introduction

    When we hear the term “export controls,” we may think it only applies to weapons and other highly sensitive technologies, but that is not the case. Export controls apply in a multitude of circumstances, some of them unexpected, and it is important to know what they are. This is especially true if you are involved in research or in the design and development of seemingly innocuous solutions that are not necessarily tangible objects. Today, technological knowledge is shared not only through conventional partnerships between businesses or universities, but also through data sharing or access to databases that feed large language models. Artificial intelligence is, in itself, a means of sharing knowledge. Feeding such algorithms with sensitive data, or data that can become sensitive when combined, carries a risk of violating the applicable legal framework. Here are some key concepts.

    Overview of the federal export control framework

    The Export and Import Permits Act

    In Canada, the Export and Import Permits Act (the “EIPA”) establishes the primary framework governing the export of controlled goods and technologies. The EIPA gives the Minister of Foreign Affairs the power to issue, to any resident of Canada who applies for one, a permit authorizing the export or transfer of a wide range of items included on the Export Control List (the “ECL”) or destined for a country listed on the Area Control List. In other words, the EIPA regulates, and at times prohibits, the trade of critical goods and technologies outside Canada.

    The Export Control List

    To get the full picture of the ECL, we need to refer to the Guide to Canada’s Export Control List, as published by the Department with its successive amendments, the most recent of which date back to May 2025 (the “Guide”). In summary, the Guide covers military goods and technologies, strategic goods, and dual-use (civilian and military) goods and technologies that are controlled in accordance with Canada’s commitments under multilateral regimes, such as the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, under bilateral agreements, and under certain unilateral controls implemented by Canada as part of its defence policy. The Guide also includes forest products, agricultural and food products, apparel goods and vehicles.

    Other laws that affect exports

    Also to be taken into account are the sanctions that Canada imposes under laws that affect exports, such as:

    • the United Nations Act
    • the Special Economic Measures Act
    • the Justice for Victims of Corrupt Foreign Officials Act

    These sanctions against specific countries, organizations or persons include a range of measures, such as restricting or prohibiting trade, financial transactions or other economic activities with Canada, or the freezing of property located in Canada.1

    Finally, in order for an individual (or an organization) to transfer controlled goods outside Canada, they must register with the Controlled Goods Program (the “CGP”) to obtain an export permit, unless exempt.

    Key concepts

    Did you know? Certain goods and technologies are referred to as “dual-use.” This means that even though they were initially designed for civilian use or appear harmless, they may be subject to export controls if they can be used for military purposes or to produce military items.

    A “technology” is broadly defined to include technical data, technical assistance and information necessary for the development, production or use of an item listed on the ECL. Also included in this notion, albeit indirectly, are the technologies referred to in any of the regulations associated with the laws listed above, which make certain countries subject to specific technology transfer restrictions.

    A “transfer” of a technology means disposing of it (e.g., selling it) or disclosing its content in any manner from a place in Canada to a place outside Canada. This definition stems from legislative amendments to the EIPA that expanded the scope of the law to cover the mere transfer of intangible technologies by various means, thereby broadening the circumstances in which permits are required for transfers.2

    Regarding trade relations with the United States, Canadian exporters may face additional restrictions and considerable challenges, particularly where their employees or other stakeholders involved are foreign nationals. The International Traffic in Arms Regulations (“ITAR”) and the Export Administration Regulations (“EAR”) are two key sets of rules that govern exports from the United States.3 They protect both similar and distinct interests: while the ITAR aim to protect defence articles and defence services (including weapons and information), the EAR govern dual-use items.4 Both prevent exports5 in a broad sense, up to and including the transfer of information to so-called “foreign” persons, except with the permission of the authorities. Canadian exporters may thus well be required to comply with these American regulations, which, in addition to targeting territories, target the national origin of individuals. This is diametrically opposed to Canada’s export regime, which instead centres on prohibiting trade with a country or anyone located there. In this regard, note that Quebec’s Charter of Human Rights and Freedoms considers national origin to be a prohibited ground of discrimination.6 A Quebec business can thus find itself struggling to balance its contractual obligations to an American company against the requirements of the Quebec Charter.

    Artificial intelligence: novel challenges

    The development of large language models represents a new, and significant, challenge from an export control standpoint. For example, if a large language model is trained using restricted data, a state subject to the aforementioned sanctions might attempt to use the model to indirectly obtain information to which it would not otherwise have had direct access. As a result, training a large language model on plans, technical specifications or textual descriptions of technologies covered by transfer restrictions (which can include knowledge transfers) can create a risk of non-compliance with the law. The same applies to making such data accessible for retrieval-augmented generation, a widely used technique to expand and improve large language model responses.

    To limit the risk during research and development, a company that trains a large language model on such data, or that allows access to such data for retrieval-augmented generation, will need to consider where the data will be hosted and processed. Similarly, once the artificial intelligence application is developed, it will be important to restrict access to it in a manner consistent with the law, both in terms of where the servers hosting the large language model are located and in terms of user access.
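    These considerations lend themselves to technical guardrails. The following is a minimal sketch, in Python, of one way a development team might keep controlled material out of a shared corpus used for training or retrieval-augmented generation, releasing it only to users holding a matching authorization. The control tags, the policy and the data model are illustrative assumptions for this sketch, not a statement of what the EIPA, the ITAR or the EAR actually require.

    ```python
    # Illustrative sketch only: hypothetical control tags and policy.
    # Real export-control classification is a legal determination.
    from dataclasses import dataclass, field

    @dataclass
    class Document:
        doc_id: str
        content: str
        control_tags: set[str] = field(default_factory=set)  # e.g. {"ECL-controlled"}

    @dataclass
    class User:
        user_id: str
        # Authorizations granted through a compliance process (e.g. permits,
        # CGP registration), not inferred from personal attributes.
        authorizations: set[str] = field(default_factory=set)

    def admissible_for_shared_index(doc: Document) -> bool:
        """Only untagged (uncontrolled) documents enter the shared RAG index."""
        return not doc.control_tags

    def may_retrieve(user: User, doc: Document) -> bool:
        """Controlled documents are retrievable only with matching authorizations."""
        return admissible_for_shared_index(doc) or doc.control_tags <= user.authorizations

    def build_shared_index(docs: list[Document]) -> list[Document]:
        """Split the corpus: uncontrolled documents are indexed, the rest flagged."""
        index = []
        for doc in docs:
            if admissible_for_shared_index(doc):
                index.append(doc)
            else:
                print(f"Held back {doc.doc_id}: tags {sorted(doc.control_tags)} need review")
        return index

    if __name__ == "__main__":
        corpus = [
            Document("D1", "public marketing text"),
            Document("D2", "specs for a dual-use item", {"ECL-controlled"}),
        ]
        index = build_shared_index(corpus)        # D2 is held back for review
        analyst = User("U1", {"ECL-controlled"})  # authorized through compliance
        print(may_retrieve(analyst, corpus[1]))   # True: tags are covered
    ```

    In the same spirit, where the index is hosted and processed would be pinned to jurisdictions consistent with the applicable permits, echoing the hosting and user-access considerations above.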
    Sanctions

    Any person or organization that contravenes any provision of the EIPA or its regulations commits an offence punishable by fine and/or imprisonment, as applicable. Failure to register with the CGP may also constitute an offence under federal laws that can lead to prosecution and substantial sanctions against the offender(s).7

    Conclusion

    Canada’s export controls are quite complex, not only in how they are structured, but also in how they must be implemented. With the changing geopolitical and commercial landscape, it is advisable to periodically consult the resources made available by the relevant authorities and put in place appropriate policies and measures, or to seek professional advice in this regard.

    1. Government of Canada, “Types of sanctions” (date modified: 2024-09-10), online: Types of sanctions
    2. Martha L. Harrison & Tonya Hughes, “Understanding Exports: A Primer on Canada’s Export Control Regime” (2010) 8:2 Canadian International Lawyer 97.
    3. The ITAR and the EAR are included in the Code of Federal Regulations (“CFR”).
    4. Austin D. Michel, “Hiring in the Export-Control Context: A Framework to Explain How Some Institutions of Higher Education Are Discriminating against Job Applicants” (2021) 106:4 Iowa L Rev 1993.
    5. The ITAR and the EAR also provide for restrictions on re-exportation.
    6. See Maroine Bendaoud, “Quand la sécurité nationale américaine fait fléchir le principe de non-discrimination en droit canadien : le cas de l’International Traffic in Arms Regulations (ITAR)” (2013) 54:2–3 Les Cahiers de droit 549.
    7. Government of Canada, “Guideline on Controlled Goods Program registration” (date modified: 2025-05-08), online: Guideline on Controlled Goods Program registration – Canada.ca

  3. Data Anonymization: Not as Simple as It Seems

    Blind spots to watch for when anonymizing data

    Anonymization has become a crucial step in unlocking the value of data for innovation, particularly in artificial intelligence. But without a properly executed anonymization process, organizations risk financial penalties, legal action and serious reputational harm, with potentially significant consequences for their operations.

    Understanding the anonymization process

    What the law says

    Under Quebec’s Act respecting the protection of personal information in the private sector (the “Private Sector Act”) and the Act respecting Access to documents held by public bodies and the Protection of personal information (the “Access Act”), information concerning a natural person is considered anonymized if it irreversibly no longer allows the person to be identified directly or indirectly. Since anonymized information no longer qualifies as personal information, this distinction is of crucial importance. However, beyond this definition, neither Act provides details on how anonymization should actually be performed. To fill this gap, the government adopted the Regulation respecting the anonymization of personal information (the “Regulation”), which sets out the criteria and framework for anonymization, grounded in high standards of privacy protection.

    What organizations need to know before starting

    Under the Regulation, before beginning any anonymization process, organizations must clearly define the “serious and legitimate purposes” for which the data will be used. These purposes must comply with either the Private Sector Act or the Access Act, as applicable, and any new purpose must meet the same requirement. The process must also be supervised by a qualified professional with the expertise to select and apply appropriate anonymization techniques. This supervision ensures both the proper implementation of the chosen methods and the ongoing validation of technological choices and security measures.

    The four key steps of data anonymization

    1. Depersonalization: The first step is to remove or replace all personal identifiers, such as names, addresses and phone numbers, with pseudonyms. It is essential to anticipate how different data sets might interact, in order to minimize the risk of re-identifying individuals through cross-referencing.

    2. Preliminary risk assessment: Next comes a preliminary analysis of re-identification risks. This step relies on three main criteria: individualization (the inability to isolate a person within a dataset), correlation (the inability to connect datasets concerning the same person) and inference (the inability to deduce personal information from other available information). Common anonymization techniques include aggregation, deletion, generalization and data perturbation. Organizations should also apply strong protective measures, such as advanced encryption and restrictive access controls, to minimize the likelihood of re-identification (see the sketch following this list).

    3. In-depth risk analysis: After the preliminary phase, a deeper risk analysis must be conducted. While no anonymization process can eliminate all risk, that risk must be reduced to the lowest possible level, taking into account factors such as data sensitivity, the availability of public datasets and the effort required to attempt re-identification. To sustain this low level of risk, organizations should perform periodic reassessments that account for technological advances that could make re-identification easier over time.

    4. Documentation and record-keeping: Finally, organizations must keep a detailed record describing the anonymized information, its intended purposes, the techniques and security measures used, and the dates of any analyses or updates. This documentation strengthens transparency and demonstrates that the organization has fulfilled its legal obligations regarding anonymization.
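    To make the first steps more concrete, here is a minimal sketch, in Python, of depersonalization (replacing direct identifiers with keyed pseudonyms), one generalization technique, and a simple individualization check in the spirit of k-anonymity. The field names, the k threshold and the sample records are assumptions chosen for illustration; an actual process under the Regulation must be supervised by a qualified professional and backed by a full re-identification risk analysis.

    ```python
    # Illustrative sketch only: not a complete anonymization pipeline.
    import hashlib
    from collections import Counter

    def pseudonymize(record: dict, identifiers: list[str], secret: str) -> dict:
        """Replace direct identifiers with keyed pseudonyms. This step alone is
        reversible by whoever holds the secret, so it is NOT anonymization."""
        out = dict(record)
        for name in identifiers:
            if name in out:
                digest = hashlib.sha256((secret + str(out[name])).encode()).hexdigest()
                out[name] = digest[:12]
        return out

    def generalize_age(record: dict) -> dict:
        """Generalization: replace an exact age with a ten-year band."""
        out = dict(record)
        if "age" in out:
            low = (out["age"] // 10) * 10
            out["age"] = f"{low}-{low + 9}"
        return out

    def individualization_risk(records: list[dict], quasi_ids: list[str], k: int = 5):
        """Flag quasi-identifier combinations shared by fewer than k records:
        such individuals could still be isolated within the dataset."""
        combos = Counter(tuple(r.get(q) for q in quasi_ids) for r in records)
        return [combo for combo, count in combos.items() if count < k]

    if __name__ == "__main__":
        raw = [
            {"name": "A. Tremblay", "age": 34, "postal": "H2X", "dx": "flu"},
            {"name": "B. Roy", "age": 36, "postal": "H2X", "dx": "asthma"},
        ]
        step1 = [generalize_age(pseudonymize(r, ["name"], "rotate-me")) for r in raw]
        print(step1)
        # Both records share ("30-39", "H2X"), a group of 2 < k=5: still risky.
        print(individualization_risk(step1, ["age", "postal"]))
    ```

    The check above addresses only the individualization criterion; correlation and inference would need their own analyses, and the results should be revisited periodically, as the Regulation contemplates.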

  4. AI: Where Do We Go From Here?

    In March 2017 – more than 3,000 days ago – Lavery established its Artificial Intelligence Legal Lab to study and, above all, anticipate developments in artificial intelligence. Quite innovative at the time, the Lab’s goal was to position the firm ahead of the legal complexities that artificial intelligence would bring for our clients. The number of developments in the field of AI since that date is astonishing.

    On May 19, 2025, Alexandre Sirois wondered in an article in La Presse[1] whether Montreal was still a leading hub for AI. He raised the question notably in light of the major AI investments made in recent years in other jurisdictions, citing, for instance, France, Germany and Singapore.

    This timely question prompts reflection: have the massive research and development efforts and investments made in Quebec and Canada effectively translated into commercial advancements for the benefit of Canadian businesses, institutions and customers? In other words, are we successfully transitioning from R&D in the field of AI to the production, commercialization and industrialization of products and services in Canada that are highly distinctive, innovative or competitive on the international scene? Does the legislative framework in Quebec and Canada sufficiently support the technological advancements resulting from our AI investments, while also showcasing and maximizing the outcomes derived from the exceptional human talent present in our universities, research groups, institutions and companies?

    As important as it is to protect privacy, personal information, data and the public in general in the context of AI use, it is equally important to enable our entrepreneurs, start-ups, businesses and institutions to position themselves advantageously in this field – potentially the deciding factor between a prosperous society and one lagging behind others.

    At the other end of the spectrum, in The Technological Republic: Hard Power, Soft Belief, and the Future of the West, Alexander C. Karp and Nicholas W. Zamiska reflect on various topics involving technology, governance and global power dynamics. They highlight concerns about the geopolitical consequences of technological complacency, notably criticizing major technology companies (mostly based in Silicon Valley) for developing AI technology with a focus on short-term gains rather than long-term innovation. They argue that these companies prioritize trivial applications, such as social media algorithms and e-commerce platforms, which distract from critical societal challenges instead of aligning with national or global human interests.

    From a Canadian legal perspective, this is both fascinating and thought-provoking. Amidst the swift evolution of international commercial relations, what pivotal role will Canada, and notably its innovative entrepreneurs, businesses, institutions, cutting-edge universities and renowned research groups, play in shaping our future? Can they seize their rightful place and lead the charge in the relentless march of future developments? In this context, is regulating AI from a national perspective the strategic and logical road to follow, or could an excess of regulation stifle Canadian businesses and entrepreneurs, hindering our chances in the high-stakes AI race? The head of Google DeepMind, Demis Hassabis, recently stated that greater international cooperation on AI regulation was needed, although it would be difficult to achieve in today’s geopolitical context.[2]

    Obviously, there is fierce competition on the global stage to come out on top in AI, and as in all areas or industrial revolutions where the potential for economic and social development is extraordinary, the degree of regulation and oversight can allow some countries and companies to take the lead (sometimes at the expense of the environment or human rights). Reflection on the subject, however necessary, must not lead to inaction. And proactivity with regard to artificial intelligence must not lead to negligence or carelessness.

    We operate in a competitive world where the rules of engagement are far from universal. Even with the best intentions, we can unintentionally embrace technological solutions that conflict with our core values and long-term interests. Once such solutions gain a foothold, they become hard to remove. Recently, various applications have drawn attention for their data-collection practices and potential links to external entities, illustrating how swiftly popular platforms can become the subject of national debates over values, governance and security. Even when these platforms have demonstrated links to foreign or hostile entities, they are hard to dislodge.

    In May 2025, after months spent pursuing a plan to convert itself into a for-profit business, OpenAI, Inc. decided to remain under the control of a non-profit organization.[3] Headquartered in California, OpenAI, Inc. aims to develop safe and beneficial artificial general intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work.”[4] This decision followed a series of criticisms and legal challenges accusing OpenAI of drifting from its original mission of developing AI for the benefit of humanity.

    Bill C-27, known as the Digital Charter Implementation Act, 2022, was a Canadian legislative proposal aiming to overhaul federal privacy laws and introduce regulations for artificial intelligence. It encompassed three primary components, including the Artificial Intelligence and Data Act (AIDA), intended to regulate the development and deployment of high-impact AI systems. This Act[5] would have required organizations to implement measures to identify, assess and mitigate the risks associated with AI, including potential harms and biases. It also proposed establishing an AI and Data Commissioner to support enforcement, and it outlined criminal penalties for the misuse of AI technologies. In addition, the Act would have prohibited the possession or use of personal information obtained illegally to design, develop, use or make available an AI system, as well as the making available of an AI system whose use causes serious harm to individuals.

    The failure to enact Bill C-27 left Canada’s federal privacy laws and AI regulations unchanged, maintaining the status quo established under PIPEDA and other general rules of civil and common law, as well as the Canadian Charter of Rights and Freedoms. This outcome has implications for Canada’s alignment with international privacy standards and for its approach to AI governance. Stakeholders have expressed concerns about the adequacy of existing laws in addressing contemporary digital challenges and about the potential impact on Canada’s global standing in data protection and AI innovation.

    In the current international context, advancements in artificial intelligence are set to be widespread in fields such as the military, healthcare, finance, aerospace, resource utilization and, of course, law and justice. So, with AI, what direction do we take from here? We have the choice between deciding for ourselves – by strategically aligning our investments, R&D and the efforts of our entrepreneurs – or allowing technological advancements, largely driven from abroad, to determine our path forward.

    [1] On a posé la question pour vous | Montréal est-il encore une plaque tournante en IA ? | La Presse
    [2] Google Deepmind CEO Says Global AI Cooperation 'Difficult' - Barron's
    [3] OpenAI reverses course and says its nonprofit will continue to control its business | Financial Post
    [4] The OpenAI Drama: What Is AGI And Why Should You Care?
    [5] The Artificial Intelligence and Data Act (AIDA) – Companion document
