Artificial Intelligence

Overview


Lavery Legal Lab on Artificial Intelligence (L3IA)


We anticipate that within a few years, all companies, businesses and organizations, in every sector and industry, will use some form of artificial intelligence in their day-to-day operations, whether to improve productivity and efficiency, ensure better quality control, conquer new markets and customers, implement new marketing strategies, or improve processes, automation and the profitability of operations.

For this reason, Lavery created the Lavery Legal Lab on Artificial Intelligence (L3IA) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab takes an interest in all projects pertaining to artificial intelligence (AI) and their legal particularities, especially the various branches and applications of AI that will rapidly make their way into companies and industries.

"As soon as a company knows what it wants, tools exist, it must make the best use of them, and our Lab is there to advise it in this regard. "


The development of artificial intelligence, through a broad spectrum of branches and applications, will also have an impact on many legal sectors and practices, from intellectual property to the protection of personal information, including corporate and business integrity and all fields of business law.

Discover our lexicon, which demystifies the most commonly used terms in artificial intelligence:

Lexicon on Artificial Intelligence


  1. The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada [1] and elsewhere [2] are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies are also classified within the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT. All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

    The very broad notion of “technology-based documents”

    The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible”. [3] The Act lists a few examples of technology-based documents contemplated by applicable laws, including online forms, reports, photos and diagrams—even electrocardiograms! It is therefore understandable that this notion easily applies to user interface forms used on various technological platforms. [4] Moreover, technology-based documents are not limited to personal information. They may also pertain to company or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website. [5] A less recent decision also recognized that the contents of electronic agendas were admissible as evidence. [6]

    Because their algorithms are resource-intensive, various AI technologies are offered as software as a service (SaaS) or platform as a service (PaaS). In most cases, the information entered by user companies is transmitted to supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced client relationship management (CRM) systems and electronic file analysis. It is also the case for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees. In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

    Reciprocal obligations

    The Act sets out specific obligations when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

    26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.

    During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

    This section of the Act therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the technological platform must inform the supplier of the required privacy protection for the information stored on the platform. On the other hand, the supplier is required to put in place “technological means” to ensure security, integrity and confidentiality, in line with the privacy protection requested by the user.

    The Act does not specify what technological means must be put in place. However, they must be reasonable, in line with the sensitivity of the technology-based documents involved, as seen from the perspective of someone with expertise in the field. Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? This question must be addressed by considering the information transmitted by the user of the platform concerning the required privacy protection for technology-based documents. The supplier, however, must not conceal the security risks of its IT platform from the user, since doing so would violate the parties’ disclosure and good faith requirements.

    Are any individuals involved?

    These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap. [7] Under no circumstances should an AI algorithm suggest that a merchant should not enter into a contract with an individual on any such discriminatory basis. [8] In addition, anyone who gathers personal information by technological means making it possible to profile certain individuals must notify them beforehand. [9]

    To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions on how these laws apply to your AI systems, please feel free to contact our professionals.

    1. Bill C-27, Digital Charter Implementation Act, 2022.
    2. In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
    3. Act to establish a legal framework for information technology, CQLR, c. C-1.1, sec. 3.
    4. Ibid., sec. 71.
    5. Tessier v. Charland, 2023 QCCS 3355.
    6. Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
    7. Charter of Human Rights and Freedoms, sec. 10.
    8. Ibid., sec. 12.
    9. Act respecting the protection of personal information in the private sector, CQLR, c. P-39.1, sec. 8.1.

  2. Smart product liability: issues and challenges

    Introduction

    In 2023, where do we stand in terms of liability where smart products are concerned? The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures. [1] Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

    These “smart products,” whether they are intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being. For the purposes of our analysis, the term “smart products” refers to products that have: self-learning capabilities, meaning that they can perform specific tasks without being under a human being’s immediate control; interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings; and autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (an optional criterion). [2] These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter “AI”).

    Applying general law rules of liability to smart products

    Although Canada prides itself on being a “world leader in the field of artificial intelligence,” [3] it has yet to enact its first AI law. The regulation of smart products in Quebec is still in its infancy. To date, apart from the regulatory framework that applies to autonomous vehicles, no legislation in force provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products. Two factors have a major impact on the liability that applies to smart products, namely transparency and the apportionment of liability, and both should be considered in developing a regulatory framework for AI. [4] But where does human accountability come in?

    Lack of transparency in AI and product liability

    When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the “lack of transparency” or “black box” problem associated with AI. [5] The legislative framework governing product liability is set out in the Civil Code of Québec [6] and the Consumer Protection Act. [7] The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed, as manufacturers are presumed to have knowledge of any defects. [8]

    Manufacturers have two means of absolving themselves from liability: [9] they may claim that a given defect is the result of superior force or of a fault on the part of the consumer or a third party, or they may argue that, at the time the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge. This last means is specifically aimed at the risks inherent in technological innovation. [10] That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, the application of which depends on the evolution of knowledge about the risks associated with the product. [11] As such, the lack of transparency in AI can make it difficult to assign liability.

    Challenges in apportioning liability and human accountability

    There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation, [12] the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor. In this context, it is reasonable to expect that products’ smart components will give rise to many similar warranty claims, resulting in highly complex litigation that could further complicate the apportionment of liability.

    Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.” There are two types of custodians of smart products: the person who has the power of control, direction and supervision over the product at the time of its use (frontend custody), and the person who holds these powers over the algorithm that gives the product its autonomy (backend custody). [13] Either custodian could be held liable should it contribute to the harm through its own fault. As such, apportioning liability between the human user and the custodians of the AI algorithm could be difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may prove complex.

    C-27: Canadian bill on artificial intelligence

    Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022. [14] At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act. If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade. [15]

    Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now, the Canadian government refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.” [16] We have reason to believe that this type of AI is what poses a high risk to users’ fundamental rights. In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.” [17] Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can give manufacturers of AI technology insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill’s six main principles are set out in the list below. [18]

    - Transparency: providing the public with information about mitigation measures, the intended use of the Systems and the “content that it is intended to generate”;
    - Oversight: providing Systems over which human oversight can be exercised;
    - Fairness and equity: bringing to market Systems that can limit the potential for discriminatory outcomes;
    - Safety: proactively assessing Systems to prevent “reasonably foreseeable” harm;
    - Accountability: putting governance measures in place to ensure compliance with legal obligations applicable to Systems;
    - Robustness: ensuring that Systems operate as intended.

    To these, we add the principle of risk mitigation, in view of the legal obligation to “mitigate” the risks associated with the use of Systems. [19]

    Conclusion

    Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI. [20] This year, Canada ranked fifth, ahead of many European Union countries. That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country. Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they are applied when civil liability issues arise.

    1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th ed., 2020, 1-931.
    2. Tara Qian Sun and Rony Medaglia, “Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare”, Government Information Quarterly, 2019, 36(2), pp. 368–383, online; EUROPEAN PARLIAMENT, Civil Law Rules on Robotics, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at europa.eu.
    3. GOVERNMENT OF CANADA, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    4. EUROPEAN COMMISSION, White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020), p. 3.
    5. Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account”, Public Administration Review, 2020, online.
    6. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1726 et seq.
    7. Consumer Protection Act, CQLR, c. P-40.1, s. 38.
    8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801. See also Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
    9. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
    10. Brousseau, para. 100.
    11. Brousseau, para. 102.
    12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
    13. Céline Mangematin, Droit de la responsabilité civile et l’intelligence artificielle, https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l’épreuve de l’intelligence artificielle, p. 4.
    14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
    15. Bill C-27, summary and s. 5(1).
    16. GOVERNMENT OF CANADA, The Artificial Intelligence and Data Act (AIDA) – Companion document, online: canada.ca.
    17. Bill C-27, s. 39(a).
    18. GOVERNMENT OF CANADA, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    19. Bill C-27, s. 8.
    20. TORTOISE MEDIA, The Global AI Index 2023, available at tortoisemedia.com.

  3. Artificial intelligence in business: managing the risks and reaping the benefits?

    At a time when some are demanding that artificial intelligence (AI) research and advanced systems development be temporarily suspended and others want to close Pandora’s box, it is appropriate to ask what effect chat technology (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support its use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence.

    Should a company decide to allow its use, it must be able to provide a clear framework for it, and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir—consider, for example, how quickly chatbots can provide information that is both surprising and interesting—and undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools.

    Potential for error and liability

    The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of “hallucinations” in certain cases where the chatbot invents a reality that doesn’t exist. This comes as no surprise. The technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to “create” new content. They lack, for the time being at least, the necessary parameters to use this “creativity” appropriately.

    It is easy to imagine scenarios in which an employee would use such technology to create content that their employer would then use for commercial purposes. This poses a clear risk for the company if appropriate control measures are not implemented. Such content could be inaccurate in a way that misleads the company’s clients. The risk would be particularly significant if the content generated in this way were disseminated by being posted on the company’s website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology known to be faulty. The reliability of these tools, especially when they are used without proper guidance, remains one of the most troubling issues.

    Defamation

    Suppose that such misinformation concerns a well-known individual or a rival company. From a legal standpoint, a company disseminating such content without putting parameters in place to ensure that proper verifications are made could be sued for defamation or misleading advertising. Adopting measures to ensure that any content derived from this technology is thoroughly validated before any commercial use is therefore a must. Many authors have suggested that the results generated by such AI tools should be used as aids to facilitate analysis and decision-making rather than to produce final results or output. Companies will likely adopt these tools and benefit from them—for competitive purposes, in particular—faster than good practices and regulations can be implemented to govern them.

    Intellectual property issues

    The new chatbots have been developed as extensions to web search engines such as Google and Bing. Content generated by chatbots may be based on existing copyrighted web content, and may even reproduce substantial portions of it. This could lead to copyright infringement. Where users limit their use to internal research, the risk is limited, as the law provides for a fair dealing exception in such cases. Infringement of copyright may occur, however, if the intention is to distribute the content for commercial purposes. The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if those answers and results are adapted or modified in various ways before they are ultimately used.

    Confidentiality and privacy issues

    The terms and conditions of use for most chatbots do not appear to provide for confidential use. As such, trade secrets and confidential information should never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with the laws and regulations applicable in the jurisdictions where they may be used. Typically, the owners of these products assume no liability in this regard.

    Other issues

    A few other important issues can already be foreseen. Firstly, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Secondly, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help to resolve or mitigate such conflicts, or instead exacerbate them.

    Conclusion

    Chat technologies have great potential, but they also raise serious legal issues. In the short term, it seems unlikely that these tools could actually replace human judgment, which is in and of itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company’s project and the objectives it seeks to achieve with it.

  4. SOCAN Decision: Online music distributors must only pay a single royalty fee

    In Society of Composers, Authors and Music Publishers of Canada v. Entertainment Software Association [1] (the “SOCAN Decision”), the Supreme Court of Canada ruled on the obligation to pay a royalty for making a work available to the public on a server, where it can later be streamed or downloaded. At the same time, it clarified the applicable standard of review for appeals where administrative bodies and courts share concurrent first instance jurisdiction, and it revisited the purpose of the Copyright Act [2] and its interpretation in light of the WIPO Copyright Treaty. [3]

    The Supreme Court also took the opportunity to reiterate the importance of the principle of technological neutrality in the application and interpretation of the Copyright Act. This reminder can also be applied to other artistic mediums and is very timely in a context where the digital visual arts market is experiencing a significant boom with the production and sale of non-fungible tokens (“NFTs”).

    In 2012, Canadian legislators amended the Copyright Act by adopting the Copyright Modernization Act. [4] These amendments incorporate Canada’s obligations under the Treaty into Canadian law by harmonizing the legal framework of Canada’s copyright laws with international rules on new and emerging technologies. The CMA introduced three sections related to “making [a work] available,” including section 2.4(1.1) of the Copyright Act. This section applies to original works and clarifies section 3(1)(f), which gives authors the exclusive right to “communicate a work to the public by telecommunication”:

    2.4(1.1) Copyright Act. “For the purposes of this Act, communication of a work or other subject-matter to the public by telecommunication includes making it available to the public by telecommunication in a way that allows a member of the public to have access to it from a place and at a time individually chosen by that member of the public.”

    Before the CMA came into force, the Supreme Court had found that downloading a musical work from the Internet was not a communication by telecommunication within the meaning of section 3(1)(f) of the Copyright Act, [5] while streaming was covered by this section. [6]

    Following the coming into force of the CMA, the Copyright Board of Canada (the “Board”) received submissions regarding the application of section 2.4(1.1) of the Copyright Act. The Society of Composers, Authors and Music Publishers of Canada (“SOCAN”) argued, among other things, that section 2.4(1.1) of the Copyright Act required users to pay royalties when a work was published on the Internet, making no distinction between downloading, streaming and cases where works are published but never transmitted. The consequence of SOCAN’s position was that a royalty had to be paid each time a work was made available to the public, whether it was downloaded or streamed. For each download, a reproduction royalty also had to be paid, while for each stream, an additional performance royalty had to be paid.

    Judicial history

    The Board’s Decision [7]

    The Board accepted SOCAN’s interpretation that making a work available to the public is a “communication”. According to this interpretation, two royalties are due when a work is published online: first, when the work is made available to the public online, and second, when it is streamed or downloaded. The Board’s Decision was largely based on its interpretation of Section 8 of the Treaty, according to which the act of making a work available requires separate protection by Member States and constitutes a separately compensable activity.

    The Federal Court of Appeal’s Decision [8]

    Entertainment Software Association, Apple Inc. and their Canadian subsidiaries (the “Broadcasters”) appealed the Board’s Decision before the Federal Court of Appeal (“FCA”). Relying on the reasonableness standard, the FCA overturned the Board’s Decision, affirming that a royalty is due only when the work is made available to the public on a server, not when it is later streamed. The FCA also highlighted the uncertainty surrounding the applicable standard of review in appeals following Vavilov [9] in cases where administrative bodies and courts share concurrent first instance jurisdiction.

    The SOCAN Decision

    The Supreme Court dismissed SOCAN’s appeal seeking the reinstatement of the Board’s Decision.

    Appellate standards of review

    The Supreme Court recognized that there are rare and exceptional circumstances creating a sixth category of issues to which the standard of correctness applies, namely concurrent first instance jurisdiction between courts and administrative bodies.

    Does section 2.4(1.1) of the Copyright Act entitle the holder of a copyright to the payment of a second royalty for each download or stream after the publication of a work on a server, making it publicly accessible?

    The copyright interests provided by section 3(1) of the Copyright Act

    The Supreme Court began its analysis by considering the three copyright interests protected by the Copyright Act, namely the rights provided for in section 3(1): to produce or reproduce a work in any material form whatsoever; to perform the work in public; and to publish an unpublished work. These three copyright interests are distinct, and a single activity can only engage one of them. For example, the performance of a work is considered impermanent, allowing the author to retain greater control over their work than reproduction does. Thus, “when an activity allows a user to experience a work for a limited period of time, the author’s performance right is engaged. A reproduction, by contrast, gives a user a durable copy of a work”. [10]

    The Supreme Court also emphasized that an activity that does not involve one of the three copyright interests under section 3(1) of the Copyright Act or the author’s moral rights is not protected by the Copyright Act. Accordingly, no royalties should be paid in connection with such an activity. The Court reiterated its previous view that downloading a work and streaming a work are distinct protected activities; more precisely, downloading is considered reproduction, while streaming is considered performance. It also pointed out that downloading is not a communication under section 3(1)(f) of the Copyright Act, and that making a work available on a server is not a compensable activity distinct from the three copyright interests. [11]

    Purpose of the Copyright Act and the principle of technological neutrality

    The Supreme Court criticized the Board’s Decision, opining that it violates the principle of technological neutrality, in particular by requiring users to pay additional fees to access online works. The purpose of the CMA was to “ensure that [the Copyright Act] remains technologically neutral” [12] and thereby show, at the same time, Canada’s adherence to the principle of technological neutrality. The principle of technological neutrality is further explained by the Supreme Court:

    [63] The principle of technological neutrality holds that, absent parliamentary intent to the contrary, the Copyright Act should not be interpreted in a way that either favours or discriminates against any form of technology: CBC, at para. 66. Distributing functionally equivalent works through old or new technology should engage the same copyright interests: Society of Composers, Authors and Music Publishers of Canada v. Bell Canada, 2012 SCC 36, [2012] 2 S.C.R. 326, at para. 43; CBC, at para. 72. For example, purchasing an album online should engage the same copyright interests, and attract the same quantum of royalties, as purchasing an album in a bricks-and-mortar store since these methods of purchasing the copyrighted works are functionally equivalent. What matters is what the user receives, not how the user receives it: ESA, at paras. 5-6 and 9; Rogers, at para. 29. In its summary to the CMA, which precedes the preamble, Parliament signalled its support for technological neutrality, by stating that the amendments were intended to “ensure that [the Copyright Act] remains technologically neutral”.

    According to the Supreme Court, the principle of technological neutrality must be observed in light of the purpose of the Copyright Act, which does not exist solely for the protection of authors’ rights. Rather, the Act seeks to strike a balance between the rights of users and the rights of authors by facilitating the dissemination of artistic and intellectual works that aim to enrich society and inspire other creators. As a result, “[w]hat matters is what the user receives, not how the user receives it.” [13] Thus, whether the reproduction or dissemination of a work takes place online or offline, the same copyright applies and leads to the same royalties.

    What is the correct interpretation of section 2.4(1.1) of the Copyright Act?

    Section 8 of the Treaty

    The Supreme Court reiterated that international treaties are relevant at the context stage of the statutory interpretation exercise and that they can be considered without any textual ambiguity in the statute. [14] Moreover, where the text permits, it must be interpreted so as to comply with Canada’s treaty obligations, in accordance with the presumption of conformity, although a treaty cannot override clear legislative intent. [15] The Court concluded that section 2.4(1.1) of the Copyright Act was intended to implement Canada’s obligations under Section 8 of the Treaty, and that the Treaty must therefore be taken into account in interpreting section 2.4(1.1) of the Act.

    Although Section 8 of the Treaty gives authors the right to control the making available of works to the public, it does not create a new and protected “making available” right that would be separately compensable. In such cases, there are no “distinct communications”, in other words, no “distinct performances”. [16] Section 8 of the Treaty creates only two obligations: to “protect on demand transmissions; and give authors the right to control when and how their work is made available for downloading or streaming.” [17] Canada is free to choose how these two objectives are implemented in the Copyright Act, whether through the right of distribution, the right of communication to the public, a combination of these rights, or a new right. [18] The Supreme Court concluded that the Copyright Act gives effect to the obligations arising from Section 8 of the Treaty through a combination of the performance, reproduction and authorization rights provided for in section 3(1) of the Copyright Act, while respecting the principle of technological neutrality. [19]

    Which interpretation of section 2.4(1.1) of the Copyright Act should be followed?

    The purpose of section 2.4(1.1) of the Copyright Act is to clarify the communication right in section 3(1)(f) of the Copyright Act by emphasizing its application to on-demand streaming. A single on-demand stream to a member of the public thus constitutes a “communication to the public” within the meaning of section 3(1)(f) of the Copyright Act. [20] Section 2.4(1.1) of the Copyright Act states that a work is performed as soon as it is made available for on-demand streaming. [21] Streaming is therefore only a continuation of the performance of the work, which starts when the work is made available. Only one royalty should be collected in connection with this right:

    [100] This interpretation does not require treating the act of making the work available as a separate performance from the work’s subsequent transmission as a stream. The work is performed as soon as it is made available for on-demand streaming. At this point, a royalty is payable. If a user later experiences this performance by streaming the work, they are experiencing an already ongoing performance, not starting a new one. No separate royalty is payable at that point. The “act of ‘communication to the public’ in the form of ‘making available’ is completed by merely making a work available for on-demand transmission. If then the work is actually transmitted in that way, it does not mean that two acts are carried out: ‘making available’ and ‘communication to the public’. The entire act thus carried out will be regarded as communication to the public”: Ficsor, at p. 508. In other words, the making available of a stream and a stream by a user are both protected as a single performance — a single communication to the public.

    In summary, the Supreme Court stated and clarified the following in the SOCAN Decision: section 3(1)(f) of the Copyright Act does not cover the download of a work; making a work available on a server and streaming the work both engage the same copyright interest in the performance of the work, such that only one royalty must be paid when a work is uploaded to a server and streamed; this interpretation of section 2.4(1.1) of the Copyright Act is consistent with Canada’s international obligations for copyright protection; and in cases of concurrent first instance jurisdiction between courts and administrative bodies, the standard of correctness should be applied.

    As works of art generated by artificial intelligence multiply and a new market for digital visual art emerges, driven by the public’s attraction to NFT exchanges, the principle of technological neutrality is becoming crucial for understanding the copyrights attached to these new digital objects and their related transactions. Fortunately, the issues surrounding digital music and its sharing and streaming have paved the way for rethinking copyright in a digital context. It should also be noted that in decentralized and unregulated digital NFT markets, intellectual property rights currently provide the only framework that is actually respected by some market platforms, and they may call for some degree of intervention on the part of the market platforms’ owners.

    1. 2022 SCC 30.
    2. R.S.C. 1985, c. C-42 (hereinafter the “Copyright Act”).
    3. Can. T.S. 2014 No. 20 (hereinafter the “Treaty”).
    4. S.C. 2012, c. 20 (hereinafter the “CMA”).
    5. Entertainment Software Association v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 34.
    6. Rogers Communications Inc. v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 35.
    7. Copyright Board of Canada, 2017 CanLII 152886 (hereinafter the “Board’s Decision”).
    8. Federal Court of Appeal, 2020 FCA 100 (hereinafter the “FCA’s Decision”).
    9. Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65.
    10. SOCAN Decision, para. 56.
    11. Ibid., para. 59.
    12. CMA, Preamble.
    13. SOCAN Decision, para. 70, emphasis added by the SCC.
    14. Ibid., paras. 44-45.
    15. Ibid., paras. 46-48.
    16. Ibid., paras. 74-75.
    17. Ibid., para. 88.
    18. Ibid., para. 90.
    19. Ibid., paras. 101 and 108.
    20. Ibid., paras. 91-94.
    21. Ibid., paras. 95 and 99-100.

  5. Lavery and Fondation Montréal inc. launch a $15,000 grant for artificial intelligence

    Lavery and Fondation Montréal inc. are pleased to announce the creation of the Lavery AI Grant for start-ups in the field of artificial intelligence (AI). In addition to the $15,000 award, grant winners will have access to the full range of services provided by Fondation Montréal inc., as well as legal coaching by Lavery tailored to the needs of young businesses in the artificial intelligence industry. The Lavery AI Grant will be awarded each spring by Fondation Montréal inc. and Lavery to the start-up that has made the biggest impact in the area of artificial intelligence and that demonstrates great potential for growth.

    “With each passing day, Montréal is becoming the world city for artificial intelligence, and six months ago, Lavery created an AI legal laboratory to analyze and predict the impact of AI in specific areas of the law, from intellectual property to the protection of personal information, including corporate governance and every aspect of business law. Our intention in creating this grant was to resolutely propel start-ups working in this sector and offer them legal guidance using the knowledge we developed in our laboratory,” stated Guillaume Lavoie, a partner and head of the Lavery CAPITAL group.

    “Young entrepreneurs are increasingly incorporating artificial intelligence into the core of their business model. We are happy that we can offer, in addition to the grant, services specific to this industry, thereby strengthening the role of Fondation Montréal inc. as a super connector with the business community,” remarked Liette Lamonde, Executive Director of Fondation Montréal inc.

    Applicants can submit an application starting today through the Fondation Montréal inc. website (http://www.montrealinc.ca/en/lavery-ai-grant).
