AI: Where Do We Go From Here?
In March 2017 – more than 3,000 days ago – Lavery established its Artificial Intelligence Legal Lab to study and, above all, anticipate developments in artificial intelligence. Quite innovative at the time, the Lab's goal was to position the firm ahead of the legal complexities that artificial intelligence would bring for our clients. The number of developments in the field of AI since then is astonishing.

On May 19, 2025, Alexandre Sirois asked in an article in La Presse[1] whether Montreal was still a leading hub for AI. He raised the question in light of the major AI investments made in recent years in other jurisdictions, citing France, Germany, and Singapore as examples. This timely question prompts reflection: have the massive research and development efforts and investments made in Quebec and Canada effectively translated into commercial advancements for the benefit of Canadian businesses, institutions, and customers? In other words, are we successfully transitioning from R&D in the field of AI to the production, commercialization, and industrialization of products and services in Canada that are highly distinctive, innovative, or competitive on the international scene? Does the legislative framework in Quebec and Canada sufficiently support the technological advancements resulting from our AI investments, while also showcasing and maximizing the outcomes of the exceptional human talent in our universities, research groups, institutions, and companies? As important as it is to protect privacy, personal information, data, and the public in general in the context of AI use, it is equally important to enable our entrepreneurs, start-ups, businesses, and institutions to position themselves advantageously in this field – potentially the deciding factor between a prosperous society and one that lags behind.
At the other end of the spectrum, in The Technological Republic: Hard Power, Soft Belief, and the Future of the West, Alexander C. Karp and Nicholas W. Zamiska reflect on various topics involving technology, governance, and global power dynamics. They highlight concerns about the geopolitical consequences of technological complacency, notably criticizing major technology companies (mostly based in Silicon Valley) for developing AI technology with a focus on short-term gains rather than long-term innovation. They argue that these companies prioritize trivial applications, such as social media algorithms and e-commerce platforms, which serve as distractions from critical societal challenges, instead of aligning with national or global human interests. From a Canadian legal perspective, this is both fascinating and thought-provoking. Amidst the swift evolution of international commercial relations, what pivotal role will Canada – and notably its innovative entrepreneurs, businesses, institutions, cutting-edge universities, and renowned research groups – play in shaping our future? Can they seize their rightful place and lead the charge in the relentless march of future developments? In this context, is regulating AI from a national perspective the strategic and logical road to follow, or could an excess of regulation stifle Canadian businesses and entrepreneurs, hindering our chances in the high-stakes AI race? The head of Google's DeepMind, Demis Hassabis, recently stated that greater international cooperation around AI regulation was needed, although this would be difficult to achieve in today's geopolitical context[2].
Obviously, there is fierce competition on the global stage to come out on top in AI, and, as in every industrial revolution where the potential for economic and social development is extraordinary, the degree of regulation and oversight can allow some countries and companies to take the lead (sometimes at the expense of the environment or human rights). Reflection on the subject, however necessary, must not lead to inaction. And proactivity with regard to artificial intelligence must not lead to negligence or carelessness. We operate in a competitive world where the rules of engagement are far from universal. Even with the best intentions, we can unintentionally embrace technological solutions that conflict with our core values and long-term interests, and once such solutions gain a foothold, they become hard to remove. Recently, various applications have drawn attention for their data-collection practices and potential links to foreign entities, illustrating how swiftly popular platforms can spark national debates over values, governance, and security – and how hard such platforms are to dislodge, even when links to foreign or hostile entities have been demonstrated.

In May 2025, after months spent pursuing a plan to convert itself into a for-profit business, OpenAI, Inc. decided to remain under the control of a non-profit organization[3]. Headquartered in California, OpenAI, Inc. aims to develop safe and beneficial artificial general intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work[4].” This decision followed a series of criticisms and legal challenges accusing OpenAI of drifting from its original mission of developing AI for the benefit of humanity.

Bill C-27, known as the Digital Charter Implementation Act, 2022, was a legislative proposal in Canada aimed at overhauling federal privacy laws and introducing regulations for artificial intelligence (AI).
It encompassed three primary components, including the Artificial Intelligence and Data Act (AIDA), intended to regulate the development and deployment of high-impact AI systems. This Act[5] would have required organizations to implement measures to identify, assess, and mitigate risks associated with AI, including potential harms and biases. It also proposed the establishment of an AI and Data Commissioner to support enforcement, and outlined criminal penalties for the misuse of AI technologies. In addition, the Act would have prohibited the possession or use of personal information obtained illegally for designing, developing, using, or making available an AI system, as well as making available an AI system whose use causes serious harm to individuals.

The failure to enact Bill C-27 left Canada’s federal privacy laws and AI regulations unchanged, maintaining the status quo established under PIPEDA and other general rules of civil and common law, as well as the Canadian Charter of Rights and Freedoms. This outcome has implications for Canada’s alignment with international privacy standards and its approach to AI governance. Stakeholders have expressed concerns about the adequacy of existing laws in addressing contemporary digital challenges and the potential impact on Canada’s global standing in data protection and AI innovation.

In the current international context, advancements in artificial intelligence are set to be widespread in fields such as the military, healthcare, finance, aerospace, resource utilization and, of course, law and justice. So, with AI, what direction do we take from here? We have the choice between deciding for ourselves – by strategically aligning our investments, R&D, and the efforts of our entrepreneurs – or allowing technological advancements, largely driven abroad, to determine our path forward.

[1] “On a posé la question pour vous | Montréal est-il encore une plaque tournante en IA ?” [“We asked the question for you | Is Montreal still a hub for AI?”], La Presse
[2] “Google DeepMind CEO Says Global AI Cooperation ‘Difficult’”, Barron’s
[3] “OpenAI reverses course and says its nonprofit will continue to control its business”, Financial Post
[4] “The OpenAI Drama: What Is AGI And Why Should You Care?”
[5] The Artificial Intelligence and Data Act (AIDA) – Companion document