AI in the Courtroom: A Call to Order in Specter Aviation
Eight AI-hallucinated quotations cost $5,000 for a substantial breach (art. 342 C.C.P.) in the Specter Aviation case.1 While AI can improve access to justice, its unverified use can lead to sanctions, adding to the risks unrepresented parties already face. Quebec courts advocate openness to AI, but with proper controls: AI is useful only when verified, traceable and supported by official sources.

The cost of hallucinations

On October 1, 2025, the Superior Court ruled on a contestation of an application for homologation of an arbitral award rendered by the Paris International Arbitration Chamber (PIAC) on December 9, 2021. Under articles 645 and 646 C.C.P., the role of the Court in such a situation is limited to verifying whether one of the limitative grounds for refusal set out in article 646 has been demonstrated. The grounds raised (ultra vires, procedural irregularities, infringement of fundamental rights, public order, abuse of power) were deemed inapposite and unconvincing.

Although the decision is interesting in this respect, it is even more so in another respect altogether. In his contestation, the unrepresented defendant relied on every bit of support he could get from artificial intelligence. In response, the plaintiffs filed a table listing eight occurrences of non-existent citations, decisions that were never rendered, irrelevant references and inconsistent conclusions. Questioned at the hearing, the defendant did not deny that some references might have been hallucinated.2

In his judgment, Justice Morin framed the issue as a matter of principle. On one hand, access to justice requires a level playing field and the orderly, proportionate management of proceedings. On the other, even though unrepresented litigants are afforded some flexibility, fabrication is never allowed: "Fabrication or shams cannot be tolerated to facilitate access to justice."3 The Court therefore characterized the presentation of fictitious case law or fictitious quotations from authorities, whether intentional or merely negligent, as a serious breach of the solemnity attached to the filing of proceedings. It invoked article 342 C.C.P. to order the defendant to pay $5,000, in order to deter such conduct and protect the integrity of the process.4

Art. 342 C.C.P.: The power to punish substantial breaches

Article 342 C.C.P. stems from the reform adopted in 2014 and in force since 2016. Because this provision authorizes the court to impose a fair and reasonable sanction5 for significant breaches in the conduct of proceedings, it is punitive and dissuasive in nature. This power is distinct from the one conferred by articles 51 to 54 C.C.P., which govern abuse of procedure, and is an exception to the general regime of legal costs,6 under which extrajudicial fees can be awarded when warranted.7

A "substantial breach" is more than a trivial lapse: it must reach a certain degree of seriousness, although it need not involve bad faith. It typically entails additional time and expense and contravenes the guiding principles of articles 18 to 20 C.C.P. (proportionality, control and cooperation).8 Nearly ten years on, the case law illustrates a range of uses:

- $100,000 for the late filing of applications or amendments resulting in postponements and unnecessary work;9
- $91,770.10 for a continuance on the morning of trial for failure to ensure the presence of a key witness;10
- $10,000 for repeated delays, tardy amendment of proceedings and non-compliance with case management orders;11
- $3,500 for failure or delay in disclosing evidence;12
- $1,000 for filing an undisclosed statement in the middle of a hearing to take the opposing party by surprise.13

Sanctions and uses of AI in Canada and elsewhere

Although the use of article 342 to sanction the unverified use of technological tools appears to be a first in Quebec, a number of Canadian judgments have already imposed penalties for similar conduct. In particular, they awarded:

- $200 in costs against an unrepresented party who had filed pleadings containing partially non-existent quotations, to compensate for the time spent on verification;14
- $100 in Federal Court, payable personally by the lawyer, for quoting non-existent AI-generated decisions without disclosing the use of AI, applying the Kuehne + Nagel test;15
- $1,000 before the Civil Resolution Tribunal of British Columbia, to compensate for time needlessly spent dealing with clearly irrelevant AI-generated arguments and documents in a dispute between two unrepresented parties;16
- $500, together with the removal from the file of material containing AI-hallucinated authorities, for non-compliance with the Federal Court's AI policy.17

The $5,000 sanction ordered in this case is a deterrent; it thus stands apart from these essentially compensatory amounts, while remaining in line with an international trend observable in the following cases:

- On June 22, 2023, in the United States (S.D.N.Y.), a Rule 11 penalty of USD 5,000 was imposed, along with non-pecuniary measures (notice to the client and to the judges falsely cited), in Mata v. Avianca, Inc.18
- On September 23, 2025, in Italy, the Tribunale di Latina awarded €2,000 under article 96, paragraph 3, of the Italian Code of Civil Procedure (€1,000 to the opposing party and €1,000 to the Cassa delle ammende), plus €5,391 in legal costs.19
- On August 15, 2025, in Australia, personal costs of AUD 8,371.30 were ordered against the plaintiff's lawyer, with a referral to the Legal Practice Board of Western Australia, following fictitious citations generated by AI (Claude, Copilot).20
- On October 22, 2025, in the United States (E.D. Oklahoma), monetary penalties totalling $6,000 were imposed on attorneys personally; they were also required to repay fees of $23,495.90, and some of their pleadings were stricken from the record with a requirement to refile verified pleadings.21

Beyond monetary penalties, Quebec courts have already identified a number of problematic situations related to the use of AI, such as:

- The Régie du bâtiment du Québec had to examine a 191-page brief containing numerous non-existent references; the author ultimately admitted to having used ChatGPT to formulate them. The commissioner underscored the resulting work overload and the need to regulate the use of AI before the RBQ.22
- In a commercial case, the Court suspected hallucinated references, set them aside and ruled on the credible evidence.23
- At the Administrative Housing Tribunal (AHT), a lessor who had relied on ChatGPT translations of the C.C.Q. that distorted its meaning saw his application dismissed. His conduct was not, however, found to be abusive, as his good faith was recognized.24
- Two related AHT decisions noted that an agreement (a "Lease Transfer and Co-Tenancy Agreement") had been drafted with the help of ChatGPT, but the AHT simply analyzed it as it usually does (text, context, C.C.Q. rules) and concluded that there had been a deferred lease assignment, drawing no particular consequence from the use of AI.25
- At the Court of Québec, a litigant attributed a self-incriminating formulation in his application to ChatGPT; the Court rejected his explanation.26
- In an application to exclude evidence, the applicant claimed that he had believed himself obliged to answer the investigators, having researched his duty to cooperate with his employer on Google and ChatGPT just before the interview. The Court noted that he had been clearly informed of his right to remain silent and that he could leave or consult a lawyer; it concluded that there was no real constraint and admitted the statement.27

Openness to AI with proper controls, certainly, but with a caveat

These are but a few examples from a long and growing list of cases across Canada and around the world. Despite this trend, the decision in Specter Aviation avoids stigmatizing AI. The Court instead insisted on remaining open to it, reminding us that a technology which facilitates access to justice must be welcomed and properly controlled, not proscribed.28

Openness to AI nevertheless comes with clear requirements, such as those set out in the notice published by the Superior Court on October 24, 2023. In that notice, the Superior Court called for caution, the use of reliable sources (court websites, recognized commercial publishers, established public services) and "meaningful human control" of generated content.29 The practice guides issued by various courts all point in the same direction: govern the use of AI without banning it. The Federal Court requires a declaration when a filed document contains AI-generated content and insists on "human in the loop" verification.30 The Court of Appeal of Quebec,31 the Court of Québec32 and the municipal courts33 have issued similar warnings: caution, authoritative sources, hyperlinks to recognized databases and full responsibility of the author. Nowhere is AI banned; all of these directives make its use conditional on verification and traceability.
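By way of illustration only, the first, purely mechanical step of that verification can be automated. The Python sketch below, which is not drawn from any of the decisions or notices discussed here, simply checks whether cited CanLII short links resolve; the second URL in the example list is deliberately fictitious, and the User-Agent string is an arbitrary label. A dead link strongly suggests a hallucinated citation, while a live one proves only that the decision exists. Automated queries should also respect the database's terms of use.

    # Illustrative sketch: flag cited CanLII links that do not resolve.
    # A 404 suggests a hallucinated citation; a 200 proves only that the
    # page exists, not that it supports the proposition being cited.
    import urllib.error
    import urllib.request

    def check_citation(url: str, timeout: float = 10.0) -> str:
        """Return a rough status for a cited URL."""
        req = urllib.request.Request(url, headers={"User-Agent": "citation-check-sketch"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return f"OK (HTTP {resp.status})"
        except urllib.error.HTTPError as exc:
            return f"SUSPECT (HTTP {exc.code})"   # e.g. 404: no such decision
        except urllib.error.URLError as exc:
            return f"UNREACHABLE ({exc.reason})"  # network problem, not a verdict

    if __name__ == "__main__":
        cited = [
            "https://canlii.ca/t/kfp2c",    # Specter Aviation, cited in this article
            "https://canlii.ca/t/zzzzzzz",  # deliberately fictitious example
        ]
        for url in cited:
            print(f"{url} -> {check_citation(url)}")

Even so trivial a check would have caught the non-existent references at issue in several of the cases above. What it cannot catch, namely a real decision cited for a proposition it does not support, is precisely why the courts insist on meaningful human control.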
Some clues suggest that the judiciary itself is using artificial intelligence. In the Small Claims Division, on at least two occasions, a judge attached English translations generated by ChatGPT as a courtesy, specifying that they had no legal value and that the French version prevailed.34 In a family matter, a Superior Court decision clearly used a Statistics Canada link identified by an AI tool (the URL includes "utm_source=chatgpt.com"), but the reasoning remains rooted in primary sources and case law: the AI was used as a search tool, not as a source of legal authority.35

A decision handed down on September 3, 2025, by the Commission d'accès à l'information is a particularly good illustration of openness with proper controls. In Breton c. MSSS,36 the tribunal admitted exhibits containing content generated by Gemini and Copilot because they were corroborated by relevant primary sources that had already been filed (Journal des débats, newspaper excerpts, official websites). Despite art. 2857 C.C.Q. and the flexibility of administrative law, the tribunal reiterated that AI-generated content is admissible if, and only if, it is verified, traceable and supported by official sources.

AI that aims to please us and that we want to believe

Two constants emerge from the sanctioned cases: excessive confidence in the AI's reliability and an underestimated risk of hallucination. In the United States, in Mata v. Avianca,37 the lawyers claimed to have believed that the tool could not invent cases. In Canada, in Hussein v. Canada,38 the plaintiff's lawyer claimed to have relied on an AI service in good faith, without fully realizing that the references needed to be checked. In Australia, in JNE24 v. Minister for Immigration and Citizenship,39 the court noted over-reliance on the tools (Claude, Copilot) and insufficient verification. In Quebec, the AHT found that a lessor had been misled by artificial intelligence,40 while at the Administrative Labour Tribunal (ALT), answers generated by ChatGPT and deemed approximately 92% accurate were used.41

These examples reveal a generalized trust bias that is particularly risky for those who represent themselves: AI is perceived as a reliable way to save time, when in reality it demands greater human control. Large language models are optimized to produce plausible and engaging responses; without proper controls, they tend to confirm user expectations rather than point out their own limitations.42 A notice published last April by OpenAI, concerning an update that had made its model "overly supportive," testifies to the underlying difficulty of striking the right balance between engagement and accuracy.43 This makes it easier to understand how a quarrelsome litigant could persuade himself, on the strength of an AI response, that he was entitled to personally sue a judge for judicial acts he perceived as biased.44 Models trained to "please" or to keep users engaged can generate responses that, absent legal contextualization, amplify erroneous or imprudent interpretations.

Although AI service providers generally seek to limit their liability for the consequences of incorrect answers, the scope of such clauses is necessarily limited. When ChatGPT, Claude or Gemini applies legal principles to facts reported by a user, does the entity offering the service not expose itself to the public-order rules that make such acts the exclusive prerogative of lawyers, rules that cannot be waived by a simple disclaimer? In Standing Buffalo Dakota First Nation v. Maurice Law, the Saskatchewan Court of Appeal reiterated that the prohibition on the practice of law applies to any "person" (including a corporation) and expressly contemplated that technological mediation would not change the analysis of which acts are prohibited.45 In Quebec, this principle is enshrined in section 128 of the Act respecting the Barreau du Québec and in the Professional Code: general legal information is permitted, but individualized advice may only be given by a lawyer.

While some of these aberrant situations have involved lawyers, unrepresented litigants appear to be the most exposed to the effects of AI. Should we focus first on educating users, or restrict certain uses? The tension between access to justice and protection of the public is obvious.

Conclusion

The Specter Aviation ruling confirms that artificial intelligence has its place in court, provided rigorous controls are applied: AI is useful when verified, and sanctionable when it is not.
While AI offers unprecedented possibilities in terms of access to justice, reconciling it with protection of the public remains a major challenge. Despite this clear signal, containing over-reliance on tools that are designed to be engaging and supportive, and that claim to have an answer for everything, will remain a challenge for years to come.

1. Specter Aviation Limited c. Laprade, 2025 QCCS 3521, online: https://canlii.ca/t/kfp2c.
2. Id., paras. [35], [53].
3. Id., para. [43].
4. Id., para. [60].
5. Chicoine c. Vessia, 2023 QCCA 582, online: https://canlii.ca/t/jx19q, para. [20]; Gagnon c. Audi Canada inc., 2018 QCCS 3128, online: https://canlii.ca/t/ht3cb, paras. [43]–[48]; Layla Jet Ltd. c. Acass Canada Ltd., 2020 QCCS 667, online: https://canlii.ca/t/j5nt8, paras. [19]–[26].
6. Code of Civil Procedure, CQLR, c. C-25.01, arts. 339–341.
7. Chicoine c. Vessia, supra note 5, paras. [20]–[21]; Constellation Brands US Operations c. Société de vin internationale ltée, 2019 QCCS 3610, online: https://canlii.ca/t/j251v, paras. [47]–[52]; Webb Electronics Inc. c. RRF Industries Inc., 2023 QCCS 3716, online: https://canlii.ca/t/k0fq8, paras. [39]–[48].
8. 9401-0428 Québec inc. c. 9414-8442 Québec inc., 2025 QCCA 1030, online: https://canlii.ca/t/kdz4h, paras. [82]–[87]; Biron c. 150 Marchand Holdings inc., 2020 QCCA 1537, online: https://canlii.ca/t/jbnj2, para. [100]; Groupe manufacturier d'ascenseurs Global Tardif inc. c. Société de transport de Montréal, 2023 QCCS 1403, online: https://canlii.ca/t/jx042, para. [26].
9. Groupe manufacturier d'ascenseurs Global Tardif inc. c. Société de transport de Montréal, supra note 8, paras. [58]–[61] ($100,000 to Global Tardif, $60,000 to Intact Assurance, $40,000 to Fujitec, all as legal costs awarded under art. 342 C.C.P.); see also $20,000 for an application for an amendment made on the sixth day of trial, forcing a continuance: Paradis c. Dupras Ledoux inc., 2024 QCCS 3266, online: https://canlii.ca/t/k6q26, paras. [154]–[171]; Webb Electronics Inc. c. RRF Industries Inc., supra note 7.
10. Layla Jet Ltd. c. Acass Canada Ltd., supra note 5, paras. [23]–[28].
11. Électro-peintres du Québec inc. c. 2744-3563 Québec inc., 2023 QCCS 1819, online: https://canlii.ca/t/jxfn0, paras. [18]–[22], [35]–[38]; see also Constant c. Larouche, 2020 QCCS 2963, online: https://canlii.ca/t/j9rwt, paras. [37]–[40] (repeated delays in adhering to undertakings despite an order; sanction: $5,000).
12. Constellation Brands US Operations c. Société de vin internationale ltée, supra note 7, paras. [39]–[43], [47]–[52]; see also AE Services et technologies inc. c. Foraction inc. (Ville de Sainte-Catherine), 2024 QCCS 242, online: https://canlii.ca/t/k2jvm (repeated delays in transmitting promised documentation and breach of an undertaking before the court; compensation of $3,000).
13. Gagnon c. SkiBromont.com, 2024 QCCS 3246, online: https://canlii.ca/t/k6mzz, paras. [29]–[37], [41].
14. J.R.V. v. N.L.V., 2025 BCSC 1137, online: https://canlii.ca/t/kcsnc, paras. [51]–[55].
15. Hussein v. Canada (IRCC), 2025 FC 1138, online: https://canlii.ca/t/kctz0, paras. [15]–[17], applying Kuehne + Nagel Inc. v. Harman Inc., 2021 FC 26, online: https://canlii.ca/t/jd4j6, paras. [52]–[55] (reiterating the principles of Young v. Young and the two-step test: (1) conduct causing costs to be incurred; (2) a discretionary decision to impose costs personally).
16. AQ v. BW, 2025 BCCRT 907, online: https://canlii.ca/t/kd08x, paras. [15]–[16], [38]–[40].
17. Lloyd's Register Canada Ltd. v. Choi, 2025 FC 1233, online: https://canlii.ca/t/kd4w2.
18. Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. June 22, 2023) (sanctions order), online: Justia https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/.
19. Tribunale di Latina (Judge Valentina Avarello), judgment of September 23, 2025; reported in "Atto redatto con intelligenza artificiale a stampone, con scarsa qualità e mancanza di pertinenza: sì alla condanna ex art. 96 c.p.c." [Boilerplate pleading drafted with artificial intelligence, of poor quality and lacking relevance: sanction under art. 96 c.p.c. upheld], La Nuova Procedura Civile (September 29, 2025), online: https://www.lanuovaproceduracivile.com/atto-redatto-con-intelligenza-artificiale-a-stampone-con-scarsa-qualita-e-mancanza-di-pertinenza-si-alla-condanna-ex-art-96-c-p-c-dice-tribunale-di-latina/.
20. JNE24 v. Minister for Immigration and Citizenship, [2025] FedCFamC2G 1314 (Federal Circuit and Family Court of Australia (Division 2), Gerrard J, August 15, 2025), online: AustLII https://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/FedCFamC2G/2025/1314.html.
21. Mattox v. Product Innovations Research, LLC d/b/a Sunevolutions; Cosway Company, Inc.; and John Does 1–3, No. 6:24-cv-235-JAR (E.D. Okla.), Order of October 22, 2025, online: https://websitedc.s3.amazonaws.com/documents/Mattox_v._Product_Innovations_Research_USA_22_October_2025.pdf.
22. Régie du bâtiment du Québec c. 9308-2469 Québec inc. (Éco résidentiel), 2025 QCRBQ 86, online: https://canlii.ca/t/kfdfg, paras. [159]–[167].
23. Blinds to Go Inc. c. Blachley, 2025 QCCS 3190, online: https://canlii.ca/t/kf963, para. [57] and n. 22.
24. Lozano González c. Roberge, 2025 QCTAL 15786, online: https://canlii.ca/t/kc2w9, paras. [7], [17]–[19].
25. Marna c. BKS Properties Ltd., 2025 QCTAL 34103, online: https://canlii.ca/t/kfq8n, paras. [18], [21]–[25]; Campbell c. Marna, 2025 QCTAL 34105, online: https://canlii.ca/t/kfq81, paras. [18], [21]–[25].
26. Morrissette c. R., 2023 QCCQ 12018, online: https://canlii.ca/t/k3x5j, para. [43].
27. Léonard c. Agence du revenu du Québec, 2025 QCCQ 2599, online: https://canlii.ca/t/kcxsb, paras. [58]–[64].
28. Specter Aviation Limited c. Laprade, supra note 1, para. [46].
29. Superior Court of Quebec, "Notice to Profession and Public – Integrity of Court Submissions When Using Large Language Models," October 24, 2023, online: https://coursuperieureduquebec.ca/fileadmin/cour-superieure/Districts_judiciaires/Division_Montreal/Communiques/Avis_a_la_Communite_juridique-Utilisation_intelligence_artificielle_EN_October_24_2023.pdf.
30. Federal Court, "Notice to the Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings," December 20, 2023, online: https://www.fct-cf.ca/Content/assets/pdf/base/2023-12-20-notice-use-of-ai-in-court-proceedings.pdf; Federal Court, "Update – The Use of Artificial Intelligence in Court Proceedings," May 7, 2024, online: https://www.fct-cf.ca/Content/assets/pdf/base/FC-Updated-AI-Notice-EN.pdf.
31. Court of Appeal of Quebec, "Notice Respecting the Use of Artificial Intelligence Before the Court of Appeal," August 8, 2024, online: https://courdappelduquebec.ca/fileadmin/dossiers_civils/avis_et_formulaires/eng/avis_utilisation_intelligence_articielle_ENG.pdf.
32. Court of Québec, "Notice to the legal community and the public – Maintaining the integrity of submissions before the Court when using large language models," January 26, 2024, online: https://courduquebec.ca/fileadmin/cour-du-quebec/centre-de-documentation/toutes-les-chambres/en/NoticeIntegriteObservationsCQ_LLM_en.pdf.
33. Cours municipales du Québec, "Avis à la profession et au public – Maintenir l'intégrité des observations à la Cour lors de l'utilisation de grands modèles de langage" [Notice to the profession and the public – Maintaining the integrity of submissions to the Court when using large language models], December 18, 2023, online: https://coursmunicipales.ca/fileadmin/cours_municipales_du_quebec/pdf/Document_d_information/CoursMun_AvisIntegriteObservations.pdf.
34. Bricault c. Rize Bikes Inc., 2024 QCCQ 609, online: https://canlii.ca/t/k3lcd, n. 1; Brett c. 9187-7654 Québec inc., 2023 QCCQ 8520, online: https://canlii.ca/t/k1dpr, n. 1.
35. Droit de la famille – 251297, 2025 QCCS 3187, online: https://canlii.ca/t/kf96f, paras. [138]–[141].
36. Breton c. Ministère de la Santé et des Services sociaux, 2025 QCCAI 280, online: https://canlii.ca/t/kftlz, paras. [24]–[26], [31].
37. Mata v. Avianca, Inc., supra note 18.
38. Hussein v. Canada (IRCC), supra note 15, paras. [15]–[17].
39. JNE24 v. Minister for Immigration and Citizenship, supra note 20.
40. Lozano González c. Roberge, supra note 24, para. [17].
41. Pâtisseries Jessica inc. et Chen, 2024 QCTAT 1519, online: https://canlii.ca/t/k4f96, paras. [34]–[36].
42. See Emilio Ferrara, "Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models" (2023), SSRN 4627814, online: https://doi.org/10.2139/ssrn.4627814; Isabel O. Gallegos et al., "Bias and Fairness in Large Language Models: A Survey" (2024) 50:3 Computational Linguistics 1097, doi: 10.1162/coli_a_00524.
43. See OpenAI, "Sycophancy in GPT-4o: what happened and what we're doing about it," April 29, 2025, online: https://openai.com/research/sycophancy-in-gpt-4o; see also OpenAI, "Expanding on what we missed with sycophancy," May 2, 2025, online: https://openai.com/index/expanding-on-sycophancy/.
44. Verreault c. Gagnon, 2023 QCCS 4922, online: https://canlii.ca/t/k243v, paras. [16], [28].
45. Standing Buffalo Dakota First Nation v. Maurice Law Barristers and Solicitors (Ron S. Maurice Professional Corporation), 2024 SKCA 14, online: https://canlii.ca/t/k2wn9, paras. [37]–[40], [88]–[103].