Artificial Intelligence

Overview


Lavery Legal Lab on Artificial Intelligence (L3IA)


We anticipate that within a few years, all companies, businesses and organizations, in every sector and industry, will use some form of artificial intelligence in their day-to-day operations, whether to improve productivity or efficiency, ensure better quality control, conquer new markets and customers, implement new marketing strategies, or improve processes, automation and the profitability of operations.

For this reason, Lavery created the Lavery Legal Lab on Artificial Intelligence (L3IA) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal particularities, especially the various branches and applications of AI that are rapidly emerging in companies and industries.

"As soon as a company knows what it wants, tools exist, it must make the best use of them, and our Lab is there to advise it in this regard. "


The development of artificial intelligence, through a broad spectrum of branches and applications, will also have an impact on many legal sectors and practices, from intellectual property to the protection of personal information, as well as corporate and business integrity and all fields of business law.

Discover our lexicon, which demystifies the most commonly used terms in artificial intelligence:

Lexicon on Artificial Intelligence


L3IA attendance at conferences and events:

L3IA at an international artificial intelligence conference (Mila), Montréal, May 2019


Stay informed of the latest news from our L3IA Laboratory:

Twitter: @LaveryAvocats | LinkedIn: @lavery-avocats | Facebook: @Laveryavocats | Instagram: @laveryavocats
  1. Smart product liability: issues and challenges

Introduction

In 2023, where do we stand in terms of liability where smart products are concerned? The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

These "smart products," whether they are intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being. For the purposes of our analysis, the term "smart products" refers to products that have:

- Self-learning capabilities, meaning that they can perform specific tasks without being under a human being's immediate control;
- Interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings;
- Autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (optional criterion).2

These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter "AI").

Applying general law rules of liability to smart products

Although Canada prides itself on being a "world leader in the field of artificial intelligence,"3 it has yet to enact its first AI law. The regulation of smart products in Quebec is still in its infancy. To this day, apart from the regulatory framework that applies to autonomous vehicles, there is no legislation in force that provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products.

Two factors have a major impact on the liability that applies to smart products, namely transparency and the apportionment of liability, and both should be considered in developing a regulatory framework for AI.4 But where does human accountability come in?

Lack of transparency in AI and product liability

When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the "lack of transparency" or "black box" problem associated with AI.5

The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed, as manufacturers are presumed to have knowledge of any defects.8 Manufacturers have two means to absolve themselves from liability:9

- A manufacturer may claim that a given defect is the result of superior force or of a fault on the part of the consumer or a third party; or
- A manufacturer may argue that, at the time the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge.

This last means is specifically aimed at the risks inherent in technological innovation.10 That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, the application of which depends on the evolution of knowledge about the risks associated with the product.11 As such, the lack of transparency in AI can make it difficult to assign liability.

Challenges in apportioning liability and human accountability

There are cases where the "smart" component is integrated into a product by one of the manufacturer's subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor. In this context, it is reasonable to expect that a product's smart component could give rise to many similar warranty claims, resulting in highly complex litigation that could further complicate the apportionment of liability.

Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same "person." There are two types of custodians of smart products:

- The person who has the power of control, direction and supervision over a product at the time of its use (frontend custody);
- The person who holds these powers over the algorithm that gives the product its autonomy (backend custody).13

Either one of these custodians could be held liable should it contribute to the harm through its own fault. As such, apportioning liability between the human user and the custodians of the AI algorithm could be difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may prove complex.

C-27: Canadian bill on artificial intelligence

Canada's first AI bill ("Bill C-27") was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act. If adopted in its current form, the Act would apply to "high-impact AI systems" ("Systems") used in the course of international and interprovincial trade.15

Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now, the Canadian government refers in particular to "Systems that can influence human behaviour at scale" and "Systems critical to health and safety."16 We have reason to believe that this type of AI is what poses a high risk to users' fundamental rights.

In particular, Bill C-27 would make it possible to prohibit the conduct of a person who "makes available" a System that is likely to cause "serious harm" or "substantial damage."17 Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can provide manufacturers of AI technology with insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill's six main principles are set out below:18

- Transparency: providing the public with information about mitigation measures, the intended use of the Systems and the "content that it is intended to generate";
- Oversight: providing Systems over which human oversight can be exercised;
- Fairness and equity: bringing to market Systems that can limit the potential for discriminatory outcomes;
- Safety: proactively assessing Systems to prevent "reasonably foreseeable" harm;
- Accountability: putting governance measures in place to ensure compliance with legal obligations applicable to Systems;
- Robustness: ensuring that Systems operate as intended.

To these, we add the principle of risk mitigation, considering the legal obligation to "mitigate" the risks associated with the use of Systems.19

Conclusion

Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries. That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country. Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they will be applied when civil liability issues arise.

1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th ed., 2020, 1-931.
2. Tara Qian Sun and Rony Medaglia, "Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare," Government Information Quarterly, 2019, 36(2), pp. 368-383, online; European Parliament, Civil Law Rules on Robotics, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at TA (europa.eu).
3. Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
4. European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020), p. 3.
5. Madalina Busuioc, "Accountable Artificial Intelligence: Holding Algorithms to Account," Public Administration Review, 2020, online.
6. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1726 et seq.
7. Consumer Protection Act, CQLR, c. P-40.1, s. 38.
8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801. See also Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
9. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
10. Brousseau, para. 100.
11. Brousseau, para. 102.
12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
13. Céline Mangematin, Droit de la responsabilité civile et l'intelligence artificielle, https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l'épreuve de l'intelligence artificielle, p. 4.
14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
15. Bill C-27, summary and s. 5(1).
16. The Artificial Intelligence and Data Act (AIDA) – Companion document, Government of Canada, online: canada.ca.
17. Bill C-27, s. 39(a).
18. AIDA, Companion document.
19. Bill C-27, s. 8.
20. Tortoise Media, The Global AI Index 2023, available at tortoisemedia.com.

  2. Artificial intelligence in business: managing the risks and reaping the benefits?

At a time when some are demanding that artificial intelligence (AI) research and advanced systems development be temporarily suspended and others want to close Pandora's box, it is appropriate to ask what effect chat technology (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support its use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence.

Should a company decide to allow its use, it must be able to provide a clear framework for it and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir (consider, for example, how quickly chatbots can provide information that is both surprising and interesting) and undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools.

Potential for error and liability

The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of "hallucinations" in certain cases, where the chatbot invents a reality that doesn't exist. This comes as no surprise. The technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to "create" new content. They lack, for the time being at least, the necessary parameters to use this "creativity" appropriately.

It is easy to imagine scenarios in which an employee would use such technology to create content that their employer would then use for commercial purposes. This poses a clear risk for the company if appropriate control measures are not implemented. Such content could be inaccurate in a way that misleads the company's clients. The risk would be particularly significant if the content generated in this way were disseminated by being posted on the company's website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology known to be faulty. The reliability of these tools, especially when used without proper guidance, remains one of the most troubling issues.

Defamation

Suppose that such misinformation concerns a well-known individual or a rival company. From a legal standpoint, a company disseminating such content without putting parameters in place to ensure that proper verifications are made could be sued for defamation or misleading advertising. Adopting measures to ensure that any content derived from this technology is thoroughly validated before any commercial use is therefore a must. Many authors have suggested that the results generated by such AI tools should be used as aids to facilitate analysis and decision-making rather than to produce final results or output. Companies will likely adopt these tools, and benefit from them (for competitive purposes, in particular), faster than good practices and regulations can be implemented to govern them.

Intellectual property issues

The new chatbots have been developed as extensions to web search engines such as Google and Bing. Content generated by chatbots may be based on existing copyrighted web content and may even reproduce substantial portions of it. This could lead to copyright infringement. Where users limit their use to internal research, the risk is limited, as the law provides for a fair dealing exception in such cases. Copyright infringement may occur, however, if the intention is to distribute the content for commercial purposes. The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if those answers and results are adapted or modified in various ways before they are ultimately used.

Confidentiality and privacy issues

The terms and conditions of use for most chatbots do not appear to provide for confidential use. As such, trade secrets and confidential information should never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with the laws and regulations applicable in the jurisdictions where they may be used. Typically, the owners of these products assume no liability in this regard.

Other issues

A few other important issues among those that can now be foreseen are worth considering. Firstly, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Secondly, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help to resolve or mitigate such conflicts, or instead exacerbate them.

Conclusion

Chat technologies have great potential but also raise serious legal issues. In the short term, it seems unlikely that these tools could actually replace human judgment, which is in and of itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company's project and the objectives it seeks to achieve with it.

  3. SOCAN Decision: Online music distributors must pay only a single royalty fee

In Society of Composers, Authors and Music Publishers of Canada v. Entertainment Software Association1 (the "SOCAN Decision"), the Supreme Court of Canada ruled on the obligation to pay a royalty for making a work available to the public on a server, where it can later be streamed or downloaded. At the same time, it clarified the applicable standard of review for appeals where administrative bodies and courts share concurrent first instance jurisdiction, and revisited the purpose of the Copyright Act2 and its interpretation in light of the WIPO Copyright Treaty3. The Supreme Court also took the opportunity to reiterate the importance of the principle of technological neutrality in the application and interpretation of the Copyright Act. This reminder can also be applied to other artistic mediums and is very timely in a context where the digital visual arts market is experiencing a significant boom with the production and sale of non-fungible tokens ("NFTs").

In 2012, Canadian legislators amended the Copyright Act by adopting the Copyright Modernization Act4. These amendments incorporated Canada's obligations under the Treaty into Canadian law by harmonizing the legal framework of Canada's copyright laws with international rules on new and emerging technologies. The CMA introduced three sections related to "making [a work] available," including section 2.4(1.1) of the Copyright Act. This section applies to original works and clarifies section 3(1)(f), which gives authors the exclusive right to "communicate a work to the public by telecommunication":

2.4(1.1) Copyright Act: "For the purposes of this Act, communication of a work or other subject-matter to the public by telecommunication includes making it available to the public by telecommunication in a way that allows a member of the public to have access to it from a place and at a time individually chosen by that member of the public."

Before the CMA came into force, the Supreme Court had found that downloading a musical work from the Internet was not a communication by telecommunication within the meaning of section 3(1)(f) of the Copyright Act5, while streaming was covered by this section.6

Following the coming into force of the CMA, the Copyright Board of Canada (the "Board") received submissions regarding the application of section 2.4(1.1) of the Copyright Act. The Society of Composers, Authors and Music Publishers of Canada ("SOCAN") argued, among other things, that section 2.4(1.1) of the Copyright Act required users to pay royalties when a work was published on the Internet, making no distinction between downloading, streaming and cases where works are published but never transmitted. The consequence of SOCAN's position was that a royalty had to be paid each time a work was made available to the public, whether it was downloaded or streamed. For each download, a reproduction royalty also had to be paid, while for each stream, an additional performance royalty had to be paid.

Judicial history

The Board's Decision7

The Board accepted SOCAN's interpretation that making a work available to the public is a "communication." According to this interpretation, two royalties are due when a work is published online: firstly, when the work is made available to the public online, and secondly, when it is streamed or downloaded.
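To make the difference between the two readings concrete, here is a minimal sketch contrasting the Board's two-royalty approach with the single-royalty outcome the Supreme Court ultimately reached (discussed below). The royalty amounts and function names are invented for illustration only and do not reflect any actual tariff.

    # Hypothetical per-event royalty amounts in cents, for illustration only.
    MAKING_AVAILABLE = 10  # claimed royalty for posting a work on a server
    PERFORMANCE = 5        # royalty for each on-demand stream
    REPRODUCTION = 15      # royalty for each download

    def royalties_two_royalty_view(downloads: int, streams: int) -> int:
        """SOCAN's position, accepted by the Board: making the work available
        is itself compensable, on top of each subsequent download or stream."""
        return MAKING_AVAILABLE + downloads * REPRODUCTION + streams * PERFORMANCE

    def royalties_single_royalty_view(downloads: int, streams: int) -> int:
        """The interpretation ultimately retained by the Supreme Court: making
        a work available for streaming and a later stream form one continuing
        performance, so no separate making-available royalty is stacked on."""
        return downloads * REPRODUCTION + streams * PERFORMANCE

    # One work posted online and then streamed once:
    print(royalties_two_royalty_view(0, 1))     # 15: two royalties collected
    print(royalties_single_royalty_view(0, 1))  # 5: one performance royalty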
The Board's Decision was largely based on its interpretation of Section 8 of the Treaty, according to which the act of making a work available requires separate protection by Member States and constitutes a separately compensable activity.

Federal Court of Appeal's Decision8

Entertainment Software Association, Apple Inc. and their Canadian subsidiaries (the "Broadcasters") appealed the Board's Decision before the Federal Court of Appeal ("FCA"). Relying on the reasonableness standard, the FCA overturned the Board's Decision, affirming that a royalty is due only when the work is made available to the public on a server, not when a work is later streamed. The FCA also highlighted the uncertainty surrounding the applicable standard of review in appeals following Vavilov9 in cases where administrative bodies and courts share concurrent first instance jurisdiction.

SOCAN Decision

The Supreme Court dismissed SOCAN's appeal seeking the reinstatement of the Board's Decision.

Appellate standards of review

The Supreme Court recognized that there are rare and exceptional circumstances that create a sixth category of issues to which the standard of correctness applies, namely concurrent first instance jurisdiction between courts and administrative bodies.

Does section 2.4(1.1) of the Copyright Act entitle the holder of a copyright to the payment of a second royalty for each download or stream after the publication of a work on a server, making it publicly accessible?

The copyright interests provided by section 3(1) of the Copyright Act

The Supreme Court began its analysis by considering the three copyright interests protected by the Copyright Act, namely the rights provided for in section 3(1):

- to produce or reproduce a work in any material form whatsoever;
- to perform the work in public;
- to publish an unpublished work.

These three copyright interests are distinct, and a single activity can engage only one of them. For example, the performance of a work is considered impermanent, allowing the author to retain greater control over their work than reproduction does. Thus, "when an activity allows a user to experience a work for a limited period of time, the author's performance right is engaged. A reproduction, by contrast, gives a user a durable copy of a work".10

The Supreme Court also emphasized that an activity that does not involve one of the three copyright interests under section 3(1) of the Copyright Act or the author's moral rights is not protected by the Copyright Act. Accordingly, no royalties should be paid in connection with such an activity. The Court reiterated its previous view that downloading a work and streaming a work are distinct protected activities: downloading is considered reproduction, while streaming is considered performance. It also pointed out that downloading is not a communication under section 3(1)(f) of the Copyright Act, and that making a work available on a server is not a compensable activity distinct from the three copyright interests.11

Purpose of the Copyright Act and the principle of technological neutrality

The Supreme Court criticized the Board's Decision, opining that it violates the principle of technological neutrality, in particular by requiring users to pay additional fees to access online works. The purpose of the CMA was to "ensure that [the Copyright Act] remains technologically neutral"12 and thereby show Canada's adherence to the principle of technological neutrality. The principle of technological neutrality is further explained by the Supreme Court:

[63] The principle of technological neutrality holds that, absent parliamentary intent to the contrary, the Copyright Act should not be interpreted in a way that either favours or discriminates against any form of technology: CBC, at para. 66. Distributing functionally equivalent works through old or new technology should engage the same copyright interests: Society of Composers, Authors and Music Publishers of Canada v. Bell Canada, 2012 SCC 36, [2012] 2 S.C.R. 326, at para. 43; CBC, at para. 72. For example, purchasing an album online should engage the same copyright interests, and attract the same quantum of royalties, as purchasing an album in a bricks-and-mortar store since these methods of purchasing the copyrighted works are functionally equivalent. What matters is what the user receives, not how the user receives it: ESA, at paras. 5-6 and 9; Rogers, at para. 29. In its summary to the CMA, which precedes the preamble, Parliament signalled its support for technological neutrality, by stating that the amendments were intended to "ensure that [the Copyright Act] remains technologically neutral".

According to the Supreme Court, the principle of technological neutrality must be observed in light of the purpose of the Copyright Act, which does not exist solely for the protection of authors' rights. Rather, the Act seeks to strike a balance between the rights of users and the rights of authors by facilitating the dissemination of artistic and intellectual works that aim to enrich society and inspire other creators. As a result, "[w]hat matters is what the user receives, not how the user receives it."13 Thus, whether the reproduction or dissemination of a work takes place online or offline, the same copyright applies and leads to the same royalties.

What is the correct interpretation of section 2.4(1.1) of the Copyright Act?

Section 8 of the Treaty

The Supreme Court reiterated that international treaties are relevant at the context stage of the statutory interpretation exercise and that they can be considered without any textual ambiguity in the statute.14 Moreover, where the text permits, it must be interpreted so as to comply with Canada's treaty obligations, in accordance with the presumption of conformity, although a treaty cannot override clear legislative intent.15 The Court concluded that section 2.4(1.1) of the Copyright Act was intended to implement Canada's obligations under Section 8 of the Treaty, and that the Treaty must therefore be taken into account in interpreting section 2.4(1.1) of the Act.

Although Section 8 of the Treaty gives authors the right to control the making available of works to the public, it does not create a new and protected "making available" right that would be separately compensable. In such cases, there are no "distinct communications" or, in other words, "distinct performances".16 Section 8 of the Treaty creates only two obligations: "protect on demand transmissions; and give authors the right to control when and how their work is made available for downloading or streaming."17 Canada has the freedom to choose how these two objectives are implemented in the Copyright Act, whether through the right of distribution, the right of communication to the public, a combination of these rights, or a new right.18 The Supreme Court concluded that the Copyright Act gives effect to the obligations arising from Section 8 of the Treaty through a combination of the performance, reproduction and authorization rights provided for in section 3(1) of the Copyright Act, while respecting the principle of technological neutrality.19

Which interpretation of section 2.4(1.1) of the Copyright Act should be followed?

The purpose of section 2.4(1.1) of the Copyright Act is to clarify the communication right in section 3(1)(f) of the Copyright Act by emphasizing its application to on-demand streaming. A single on-demand stream to a member of the public thus constitutes a "communication to the public" within the meaning of section 3(1)(f) of the Copyright Act.20 Section 2.4(1.1) of the Copyright Act states that a work is performed as soon as it is made available for on-demand streaming.21 Streaming is therefore only a continuation of the performance of the work, which starts when the work is made available, and only one royalty should be collected in connection with this right:

[100] This interpretation does not require treating the act of making the work available as a separate performance from the work's subsequent transmission as a stream. The work is performed as soon as it is made available for on-demand streaming. At this point, a royalty is payable. If a user later experiences this performance by streaming the work, they are experiencing an already ongoing performance, not starting a new one. No separate royalty is payable at that point. The "act of 'communication to the public' in the form of 'making available' is completed by merely making a work available for on-demand transmission. If then the work is actually transmitted in that way, it does not mean that two acts are carried out: 'making available' and 'communication to the public'. The entire act thus carried out will be regarded as communication to the public": Ficsor, at p. 508. In other words, the making available of a stream and a stream by a user are both protected as a single performance — a single communication to the public.

In summary, the Supreme Court stated and clarified the following in the SOCAN Decision:

- Section 3(1)(f) of the Copyright Act does not cover the download of a work.
- Making a work available on a server and streaming the work both engage the same copyright interest, namely the performance of the work. As a result, only one royalty must be paid when a work is uploaded to a server and streamed.
- This interpretation of section 2.4(1.1) of the Copyright Act is consistent with Canada's international obligations for copyright protection.
- In cases of concurrent first instance jurisdiction between courts and administrative bodies, the standard of correctness should be applied.

As works of art generated by artificial intelligence increase in number and a new market for digital visual art emerges, driven by the public's attraction to NFT exchanges, the principle of technological neutrality is becoming crucial for understanding the copyrights attached to these new digital objects and their related transactions. Fortunately, the issues surrounding digital music and its sharing and streaming have paved the way for rethinking copyright in a digital context. It should also be noted that in decentralized and unregulated digital NFT markets, intellectual property rights currently provide the only framework that is really respected by some market platforms, which may call for some degree of intervention on the part of the market platforms' owners.

1. 2022 SCC 30.
2. R.S.C. 1985, c. C-42 (hereinafter the "Copyright Act").
3. Can. T.S. 2014 No. 20 (hereinafter the "Treaty").
4. S.C. 2012, c. 20 (hereinafter the "CMA").
5. Entertainment Software Association v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 34.
6. Rogers Communications Inc. v. Society of Composers, Authors and Music Publishers of Canada, 2012 SCC 35.
7. Copyright Board of Canada, 2017 CanLII 152886 (hereinafter the "Board's Decision").
8. Federal Court of Appeal, 2020 FCA 100 (hereinafter the "FCA's Decision").
9. Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65.
10. SOCAN Decision, para. 56.
11. Ibid., para. 59.
12. CMA, preamble.
13. SOCAN Decision, para. 70, emphasis added by the SCC.
14. Ibid., paras. 44-45.
15. Ibid., paras. 46-48.
16. Ibid., paras. 74-75.
17. Ibid., para. 88.
18. Ibid., para. 90.
19. Ibid., paras. 101 and 108.
20. Ibid., paras. 91-94.
21. Ibid., paras. 95 and 99-100.

  4. Crypto asset works of art and non-fungible token (NFT) investments: Be careful!

On March 11, 2021, Christie's auction house made a landmark sale by auctioning off an entirely digital artwork by the artist Beeple, a $69 million transaction in Ether, a cryptocurrency.1 In doing so, the famous auction house put non-fungible tokens ("NFTs"), the product of a decentralized blockchain, in the spotlight. While many extol the benefits of such crypto asset technology, there are also significant risks associated with it,2 requiring greater vigilance when dealing with any investment or transaction involving NFTs.

What is an NFT?

The distinction between fungible and non-fungible assets is not new. Prior to the invention of blockchain, the distinction was used to differentiate assets based on their availability, fungible assets being highly available and non-fungible assets, scarce. Thus, a fungible asset can easily be replaced by an equivalent asset with the same market value. The best example is money, whether it be coins, notes, deposit money or digital money, such as Bitcoin. On the contrary, a non-fungible asset is unique and irreplaceable. As such, works of art are non-fungible assets in that they are either unique or exist in very few copies. Their value is a result of their authenticity and provenance, among other things.

NFTs are crypto assets associated with blockchain technology that replicate this phenomenon of scarcity. Each NFT is associated with a unique identifier to ensure traceability. In addition to the online art market, NFTs have been associated with the collection of virtual items, such as sports cards and other memorabilia and collectibles, including the first tweet ever written.3 NFTs can also be associated with tangible goods, in which case they can be used to track exchanges and transactions related to such goods. In 2019, Ernst & Young developed a system of unique digital identifiers for a client to track and manage its collection of fine wines.4 Many projects rely on cryptocurrencies, such as Ether, to create NFTs. This type of cryptocurrency is programmable and allows for metadata to be embedded through a code that becomes the key to tracking assets, such as works of art or other valuables. (A brief illustrative sketch of this identifier-and-traceability mechanism appears after the risk discussion below.)

What are the risks associated with NFTs?

Although many praise the benefits of NFTs, in particular the increased traceability of the origin of goods exchanged through digital transactions, it has become clear that the speculative bubble of the past few weeks has, contrary to expectations, resulted in new opportunities for fraud and abuse of the rights associated with works exchanged online.

An unregulated market?

While there is currently no legislative framework that specifically regulates crypto asset transactions, NFT buyers and sellers are still subject to the laws and regulations currently governing the distribution of financial products and services,5 securities laws,6 the Money-Services Business Act7 and tax laws.8

Is an NFT a security?

In January 2020, the Canadian Securities Administrators (CSA) identified crypto asset "commodities" as assets that may be subject to securities laws and regulations. Thus, platforms that manage and host NFTs on behalf of their users engage in activities that are governed by the laws that apply to securities trading, as long as they retain possession or control of NFTs. On the contrary, a platform will not be subject to regulatory oversight if:

"the underlying crypto asset itself is not a security or derivative; and the contract or instrument for the purchase, sale or delivery of a crypto asset results in an obligation to make immediate delivery of the crypto asset, and is settled by the immediate delivery of the crypto asset to the Platform's user according to the Platform's typical commercial practice."9

Fraud10

NFTs don't protect collectors and investors from fraud and theft. Among the documented risks are fake websites robbing investors of their cryptocurrencies, thefts and disappearances of NFTs hosted on platforms, and copyright and trademark infringement.

Theft and disappearance of NFT assets

As some Nifty Gateway users unfortunately learned the hard way in late March, crypto asset platforms are not inherently immune to the hacking and theft of personal data associated with accounts, including credit card information. With the hacking of many Nifty Gateway accounts, some users were robbed of their entire NFT collection.11 NFTs are designed to prevent a transaction concluded between two parties from being reversed. Once the transfer of an NFT to another account has been initiated, neither the user nor a third party such as a bank can reverse the transaction. Cybercrime targeting crypto assets is not in its infancy: similar schemes have been seen in thefts of the cryptocurrency Ether.

Copyright infringement and theft of artwork images

The use of NFTs makes it possible to identify three types of problems that could lead to property right and copyright infringement:

- It is possible to create more than one NFT for the same work of art or collectible, thus generating separate chains of ownership.
- NFTs can be created for works that already exist and are not owned by the person marketing them. There are no mechanisms to verify the copyrights and property rights associated with transacted NFTs. This creates false chains of ownership.
- The authenticity of the original depends too heavily on URLs that are vulnerable and could eventually disappear.12

For the time being, these problems have yet to be addressed, both by the various platforms and by the other parties involved in NFT transactions, including art galleries. Thus, the risks are borne solely by the buyer. This situation calls for increased accountability on the part of platforms and others involved in transactions. The authenticity of the NFTs traded must be verified, as should the identity of the parties involved in a transaction.

Money laundering and proceeds of crime

In September 2020, the Financial Action Task Force (FATF)13 published a report on the main risks associated with virtual assets and with platforms offering services relating to such virtual assets. In particular, FATF pointed out that money laundering and other types of illicit activity financing are facilitated by virtual assets, which are more conducive to rapid cross-border transactions in decentralized markets that are not regulated by national authorities;14 that is, the online marketplaces where cryptocurrencies and decentralized assets are traded on blockchains. Among other things, FATF pointed to the anonymity of the parties to transactions as a factor that increases risk.

Considering all the risks associated with NFTs, we recommend taking the utmost precaution before investing in this category of crypto assets.
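As promised above, here is a minimal, purely illustrative Python sketch of the mechanics discussed in this article: the unique identifier behind each NFT, the traceability and irreversibility of transfers, and the fragility of URL-based authenticity. It is not how any real NFT platform or blockchain works; every name in it (ToyNFTRegistry, mint, transfer) is hypothetical, and a real system would enforce these rules through smart contracts and network consensus rather than a single in-memory object.

    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class Token:
        token_id: int        # the unique identifier that makes the asset non-fungible
        content_hash: str    # digest of the work itself, rather than a fragile URL
        owner: str
        history: list = field(default_factory=list)  # chain of ownership (traceability)

    class ToyNFTRegistry:
        """In-memory stand-in for an NFT ledger, for illustration only."""

        def __init__(self):
            self.tokens = {}          # token_id -> Token
            self.minted_hashes = {}   # content_hash -> token_id
            self.next_id = 1

        def mint(self, artwork_bytes: bytes, creator: str) -> Token:
            digest = hashlib.sha256(artwork_bytes).hexdigest()
            # Keying the registry on the content digest lets it flag a duplicate
            # mint of the same work, one of the risks noted in the article.
            if digest in self.minted_hashes:
                raise ValueError("a token already exists for this exact content")
            token = Token(self.next_id, digest, creator, [creator])
            self.tokens[token.token_id] = token
            self.minted_hashes[digest] = token.token_id
            self.next_id += 1
            return token

        def transfer(self, token_id: int, seller: str, buyer: str) -> None:
            token = self.tokens[token_id]
            if token.owner != seller:
                raise PermissionError("only the current owner may transfer")
            # Like an on-chain transfer, the append-only history cannot be
            # unwound afterwards; there is no bank to reverse the transaction.
            token.owner = buyer
            token.history.append(buyer)

    # Usage: mint once, transfer once, then show that re-minting the same
    # bytes is detected because identity rests on the content hash.
    registry = ToyNFTRegistry()
    art = b"bytes of a digital artwork"
    token = registry.mint(art, "artist")
    registry.transfer(token.token_id, "artist", "collector")
    print(token.owner, token.history)   # collector ['artist', 'collector']
    try:
        registry.mint(art, "impostor")
    except ValueError as err:
        print("duplicate mint rejected:", err)

Note that this sketch sidesteps the URL problem by hashing the work's bytes directly; many real NFTs instead point to off-chain URLs, which is precisely why the article warns that authenticity can depend on links that may disappear.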
Indeed, on April 23, 2021, the Autorité des marchés financiers reiterated its warning about the "inordinately high risks" associated with investments involving cryptocurrencies and crypto assets.15 The best practices to implement prior to any transaction are:

- obtaining evidence identifying the party you are transacting with;
- if possible, safeguarding your crypto assets yourself; and
- checking with regulatory bodies to ensure that the platform on which the exchange will take place complies with the applicable laws and regulations regarding the issuance of securities and derivatives.

1. https://onlineonly.christies.com/s/beeple-first-5000-days/lots/2020
2. On April 23, 2021, the Autorité des marchés financiers reiterated its warnings about issuing tokens and investing in crypto assets: https://lautorite.qc.ca/en/general-public/media-centre/news/fiche-dactualites/amf-warns-about-the-risks-associated-with-crypto-assets
3. https://www.reuters.com/article/us-twitter-dorsey-nft-idUSKBN2BE2KJ
4. https://www.ey.com/en_gl/news/2019/08/ey-helps-wiv-technology-accelerate-fine-wine-investing-with-blockchain
5. Act respecting the regulation of the financial sector, CQLR, c. E-6.1; Act respecting the distribution of financial products and services, CQLR, c. D-9.2.
6. Securities Act, CQLR, c. V-1.1; see also the regulatory sandbox produced by the CSA: https://www.securities-administrators.ca/industry_resources.aspx?ID=1715&LangType=1033
7. CQLR, c. E-12.000001.
8. https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html; https://www.revenuquebec.ca/en/fair-for-all/helping-you-meet-your-obligations/virtual-currency/reporting-virtual-currency-income/
9. https://lautorite.qc.ca/fileadmin/lautorite/reglementation/valeurs-mobilieres/0-avis-acvm-staff/2020/2020janv16-21-327-avis-acvm-en.pdf
10. https://www.telegraph.co.uk/technology/2021/03/15/crypto-art-market-infiltrated-fakes-thieves-scammers/
11. https://www.coindesk.com/nifty-gateway-nft-hack-lessons; https://news.artnet.com/opinion/nifty-gateway-nft-hack-gray-market-1953549
12. https://blog.malwarebytes.com/explained/2021/03/nfts-explained-daylight-robbery-on-the-blockchain/
13. FATF is an independent international body that assesses the risks associated with money laundering and the financing of both terrorist activities and the proliferation of weapons of mass destruction.
14. https://www.fatf-gafi.org/media/fatf/documents/recommendations/Virtual-Assets-Red-Flag-Indicators.pdf, p. 1.
15. https://lautorite.qc.ca/en/general-public/media-centre/news/fiche-dactualites/amf-warns-about-the-risks-associated-with-crypto-assets

  5. Lavery and the Fondation Montréal inc. launch a $15,000 grant for artificial intelligence

Lavery and Fondation Montréal inc. are pleased to announce the creation of the Lavery AI Grant, offered to start-ups in the field of artificial intelligence (AI). In addition to the $15,000 in funding, grant winners will have access to the full range of services provided by Fondation Montréal inc., as well as legal coaching by Lavery tailored to the needs of young businesses in the artificial intelligence industry. The Lavery AI Grant is an annual award that will be presented each spring by Fondation Montréal inc. and Lavery to the start-up that has made the biggest impact in the area of artificial intelligence and that demonstrates great potential for growth.

"With each passing day, Montréal is becoming a world city for artificial intelligence, and six months ago, Lavery created an AI legal laboratory to analyze and predict the impact of AI in specific areas of the law, from intellectual property to the protection of personal information, including corporate governance and every aspect of business law. Our intention in creating this grant was to resolutely propel start-ups working in this sector and offer them legal guidance using the knowledge we developed in our laboratory," stated Guillaume Lavoie, a partner and head of the Lavery CAPITAL group.

"Young entrepreneurs are increasingly incorporating artificial intelligence into the core of their business model. We are happy that we can offer, in addition to the grant, services specific to this industry, thereby strengthening the role of Fondation Montréal inc. as a super connector with the business community," remarked Liette Lamonde, Executive Director of Fondation Montréal inc.

Applicants can submit an application starting today through the Fondation Montréal inc. website (http://www.montrealinc.ca/en/lavery-ai-grant).
