Publications

Packed with valuable information, our publications help you stay in touch with the latest developments in the fields of law affecting you, whatever your sector of activity. Our professionals are committed to keeping you informed of breaking legal news through their analysis of recent judgments, amendments, laws, and regulations.

  • Impact of technology on the practice of law

Technology is now a part of our day-to-day lives, and we’ve learned how to use it. But what about our judicial institutions? What impact does technology have on the administration of evidence and the practice of law? The Court of Appeal provides some answers (and grounds for discussion) in its recent decision in Benisty v. Kloda 1.

Charles Benisty (hereinafter “the appellant”) brought an action in June 2009 against Samuel Kloda (hereinafter “the respondent”) as well as CIBC Wood Gundy (hereinafter “CIBC”). The appellant claims that the respondent committed errors in carrying out the mandate entrusted to him with regard to certain financial transactions completed between November 2004 and September 2008. The respondent was a financial consultant and Executive Vice-President of the Montréal branch of CIBC.

Prior to the initiation of legal proceedings, the appellant recorded some of his telephone conversations with the respondent, without the respondent’s knowledge. He states that he acted in this manner because he was convinced that the respondent was lying to him and conducting unauthorized transactions in his accounts. In total, 60 conversations were recorded between April and October 2008.

At first instance, Justice Benoît Emery dismissed the appellant’s action. He allowed the respondent’s objection to the introduction into evidence of a series of audio recordings of telephone conversations between the respondent and the appellant. Justice Emery considers that the recordings are not a technological document, but rather a material element that must be subject to separate proof to establish its authenticity and legal value. Indeed, Justice Emery states: “It is clear from listening to the recordings that they are fraught with interruptions, cut-offs, or voluntary or involuntary deletions”, and therefore they are not authentic. He goes on to say: “[...] 
these incomplete and sometimes incoherent excerpts that at times support the appellant’s cause, and at times the respondent’s, seem to reveal everything and its opposite – wherein lies the need for independent evidence of their reliability and authenticity.2”

The appellant is appealing the Superior Court judgment. He argues, in particular, that the judge erred in declaring the audio recordings inadmissible as evidence. He reiterates the argument that the cassettes on which his telephone conversations with the respondent were recorded constitute technological documents within the meaning of the Act to Establish a Legal Framework for Information Technology3 (hereinafter the “LFIT Act”). He contends that the cassettes benefit from the presumption of authenticity set out in article 2855 CCQ and that, consequently, it is the respondent’s responsibility to establish that this technological medium does not ensure the integrity and authenticity of the document. For his part, the respondent is of the opinion that audio recordings on a magnetic medium do not fall within the scope of the LFIT Act. He therefore considers that it is the appellant’s responsibility to establish authenticity.

The Court of Appeal notes that the application and interpretation of the LFIT Act, which came into force in 2001, had rarely been addressed by the courts, and it therefore considers it useful to analyze the matter brought before it by the parties. The Court of Appeal was, in this matter, facing a specific situation: at first instance, the appellant had presented six (6) audio cassettes on which his conversations with the respondent were recorded, for a total recording duration of about six (6) hours. On appeal, however, the appellant had selected 50 excerpts of these conversations and transferred them onto a CD, for a listening duration of roughly one (1) hour. 
In other words, the appellant chose to substitute a CD for the cassettes produced in the Superior Court, under the same exhibit number (P-60).

First, Justice Lévesque, who wrote the reasons concurred in by Justices Dufresne and Healy, qualifies the audio recordings in this matter as “material elements of proof”. He explains that when “a person is recorded without his knowledge during a telephone conversation or interview, this is considered a material element of proof, whereas a person who records himself and recites a dictation attempts instead to establish a testimony”4. Consequently, Justice Lévesque reiterates that for a recording to be admitted as evidence, its authenticity must be proven 5.

The Court of Appeal then asked whether an audio recording is a “technological document” within the meaning of the LFIT Act. In this respect, Justice Lévesque points to a doctrinal controversy that qualifies an audio recording on magnetic tape, more commonly referred to as a cassette, differently from a recording on a USB key or a CD. According to author Mark Phillips, on whom the respondent bases his argument, a cassette is not a “technological document” since the technology used by the cassette is analog, whereas the most recent technologies are digital (such as magnetic hard drives, USB keys, CDs, etc.). According to this author, the definition of a “technological document” in the LFIT Act therefore excludes analog documents.

The Court of Appeal does not uphold the theory posited by Mark Phillips. It prefers the interpretation under which a recording on magnetic tape is considered a technological document. Despite the noted discrepancies between the text of article 2874 CCQ and the provisions of the LFIT Act, Justice Lévesque considers it necessary to retain the interpretation that most closely complies with the purpose of the Act and the lawmaker’s intention. 
He notes that the LFIT Act came into force in 2001, whereas the Civil Code of Québec came into effect ten (10) years before that. Thus, on the one hand, this specific Act must take precedence over the provisions of the Civil Code, whose scope is more general in nature. Moreover, Justice Lévesque refers to two (2) interpretive maxims that make it possible to deduce the lawmaker’s intention:

[77] Two interpretive maxims make it possible to deduce the lawmaker’s intention. Under the first, “precedence must be given to the most recent legislation, the legislative standard that is subsequent to the other standard in conflict”. In fact, when a new law is passed, the lawmaker is deemed to be aware of those laws that already exist. We may therefore presume that he intended to implicitly repeal those standards that were not compatible with the new ones. The second principle stipulates that precedence must be given to the specific statute over the statute of general application.

The Court of Appeal therefore arrived at the conclusion that a recording on magnetic tape, such as a cassette, is a technological document. More generally, it holds that a “technological document” must be considered a document whose medium uses information technologies, whether this medium is analog or digital6.

Subsequently, the Court of Appeal examined articles 2855 and 2874 of the Civil Code, along with articles 5, 6 and 7 of the LFIT Act, in order to outline the principles applicable to the legal value to be assigned to a technological document. When is there a presumption of authenticity? When is there a presumption of integrity? When is a party exempt from proof when a technological document is introduced into evidence? 
After analyzing the various theories supported by different authors, the Court of Appeal retained the following regarding the procedure to follow when introducing a technological document as evidence:

[99] […] articles 2855 and 2874 CCQ require the demonstration of distinct or separate proof of authenticity of a document presented as evidence. Thus, a technological document generally includes inherent documentation, such as metadata, making it possible to identify the author, the date of creation, or even whether modifications were made to the document. Since such metadata constitute proof inherent to a technological document — and not a distinct or separate proof, as required by the first part of articles 2855 and 2874 CCQ — and since they fulfill the same role as traditional proof of authenticity, the lawmaker exempts the party from additionally establishing a separate proof.

[100] Thus, article 7 LFIT Act does not create a presumption of integrity of a document, but only a presumption that the technology used by its medium makes it possible to ensure its integrity, which I refer to as technological reliability. The nuance arises from the fact that an attack on the document’s integrity may come from various sources; for example, the information may be altered or manipulated by an individual without technology being at fault.

[101] Articles 2855 and 2874 CCQ indicate that a separate proof of authenticity is required in the case set out in the third paragraph of article 5 LFIT Act, i.e., where the medium or technology does not make it possible to either confirm or deny whether the document’s integrity is ensured.

[102] Hence, the idea that a technological medium is deemed reliable (article 7 LFIT Act) differs from the notion that such a medium may effectively ensure the document’s integrity (article 5 al. 3 LFIT Act). It is a subtle distinction. A technology may, therefore, be reliable (art. 7 LFIT Act) 
without making it possible to conclude that the integrity of the document is ensured: this added assurance is provided by technological documents that include inherent documentation, or metadata, proving the integrity of the document.

[103] In other words, the exemption from proving the document’s authenticity applies where the medium or technology used makes it possible to ensure the integrity of the document. This is not the case of presumed technological reliability under article 7 LFIT Act, but the specific case of technological documents that include metadata and that, consequently, prove their own integrity.

[104] However, in the absence of intrinsic documentation making it possible to ensure the document’s integrity, which is the case set out in article 5 al. 3 LFIT Act, the party that wants to produce such a document must establish this distinct, traditional proof of its authenticity: […]

[105] Thus, when an audio recording is accompanied by metadata and this documentation satisfies, in the court’s opinion, the authenticity requirement, the party producing the recording will be exempt from proving its authenticity. […]

To summarize, the party seeking to present the audio recording as evidence must prove its authenticity7, but will not be required to prove the reliability of the technological medium used, by virtue of the presumption established by article 7 LFIT Act. This article establishes a “presumption of reliability” of the technological medium, by virtue of which the technology used makes it possible to ensure the document’s integrity. The integrity itself is not presumed8.

Applying these principles to the case under analysis, the Court of Appeal arrives at the conclusion that the judge of first instance erred in deciding that the cassettes did not constitute a technological document. 
It maintains, however, that the first judge was correct in affirming that the authenticity of the audio recordings must be proven for them to be accepted as evidence. On appeal, however, the appellant did not provide the same technological medium as that presented at first instance. Six (6) cassettes were presented in the Superior Court, whereas a single CD containing a summary of these recordings was presented in the Court of Appeal. It was therefore not sufficient for the Court of Appeal to compare the technology and the different mediums of the proof presented, since it was impossible to compare the content of the cassettes with that of the CD in order to determine whether they contained the same information.

By virtue of the rules of evidence, the reproduction of an original may be made by a copy or a transfer9. A copy is made on the same medium, whereas a transfer is made on a technological medium different from that of the original. Since the Court had no way to determine with certainty that the content of the CD was the same as that of the cassettes, it concluded that the CD simply did not have the same legal value. Lastly, the Court concluded that the appellant did not discharge his burden of demonstrating that the first instance judge had made an error that could justify its intervention. This ground of appeal was therefore rejected10. In any event, the Court of Appeal rejected all of the other claims put forth by the appellant, noting that the latter faced a critical challenge: he was not persuasive.

Our takeaway from this case is that the administration of evidence on a technological medium is no simple matter, and it must not be taken lightly. It is not easy to navigate the various provisions set out in both the Civil Code and the LFIT Act in order to extract the principles applicable to matters of proof. 
The Court of Appeal holds that the presumption set out in article 7 of the LFIT Act applies exclusively to the technological medium and not to its content. It emphasizes that the integrity of a document must not be confused with the capacity of a technology to ensure it. It also suggests referring to the presumption set out in article 7 of the LFIT Act as a “presumption of technological reliability” instead of a “presumption of integrity of the medium”. Lastly, it specifies that establishing the authenticity of an audio recording comprises two (2) components:

1)    the qualities related to the methods of creation; and
2)    the qualities related to the information itself contained on the technological medium.

A party seeking to dispute the reliability of a technological medium must, by virtue of article 89 of the CCQ, produce an affidavit “indicating in specific detail the facts and motives that make an attack on the integrity of the document likely”. An example of the administration of such technological proof may be found in Forest v. Industrial Alliance11. In that matter, photographs taken from the appellant’s Facebook account were submitted as material evidence, together with an affidavit from the intern who took the screen captures, attesting to the authenticity of the document. Regarding the identity of the persons shown, the appellant’s spouse confirmed at the hearing that he himself had taken the photographs in question. Since the opposing party raised no objection, authenticity was established.

While the Civil Code of Québec and its related statutes strive to cover every situation that may arise in connection with presenting evidence on a technological medium, it is undeniable that technology is progressing at a rate far outpacing that of lawmakers. 
That being the case, it is also the responsibility of attorneys to collaborate and innovate in the administration of their evidence so as not to find themselves in an endless debate when seeking to establish the authenticity of the evidence they are attempting to present.

1. 2018 QCCA 608.
2. Judgment on appeal, par. 97.
3. CQLR, c. C-1.1.
4. Par. 60 of the decision.
5. Art. 2855 CCQ.
6. Par. 119 of the decision.
7. Art. 2855 and 2874 CCQ.
8. Par. 120 of the decision.
9. LFIT Act, art. 12, 15, 17 and 18; CCQ, art. 2841.
10. We should note that the other grounds of appeal presented by the appellant were all rejected as well, and that the Court, in a judgment written by the Honourable Jacques J. Lévesque, J.C.A., dismissed the appeal with legal costs.
11. 2016 QCCS 497.

  • Is an audio recording on magnetic tape a technological document?

This publication was co-authored by Luc Thibaudeau, former partner of Lavery and now judge in the Civil Division of the Court of Québec, District of Longueuil.

Although the Act to Establish a Legal Framework for Information Technologies1 (hereinafter the “LFIT Act”) came into force in 2001, the courts have frequently avoided commenting on its application and interpretation, preferring instead to rely on the provisions of the Civil Code of Québec2. In Benisty v. Kloda3, Justice Jacques J. Lévesque, writing for the Court of Appeal, referred to the LFIT Act to conclude that a recording on magnetic tape is a technological document.

The qualification of a technological document

After reviewing the doctrine and the case law, the Court concluded that it would be erroneous to state that an audio recording on magnetic tape does not constitute a technological document4, as per the definition of this expression in subparagraphs 1 and 3, al. 4 of the LFIT Act. Hence, a testimony on a magnetic medium is a technological document5. In spite of article 2874 CCQ, which appears to suggest that a statement on magnetic tape may be something other than a technological document, one must retain the interpretation of technological document set out in the LFIT Act, which is more recent and incidentally more detailed in terms of technology management and, in addition, “specifically dedicated to the consistency of treatment of various mediums”6. We should point out that the Court is also of the opinion that the LFIT Act covers “all of the mediums, except paper and its physical equivalents”7.

Presumption of integrity: integrity of medium or of content?

The scope of the presumption of integrity set out in article 7 of the LFIT Act was the subject of doctrinal controversy. 
Following a comparative reading of the articles of the CCQ and the provisions of the LFIT Act, the Court confirms that the exemption from proving the integrity of a technological document applies only to the medium, the technology, the system or the process used8. The content of a technological document does not benefit from the presumption of integrity. In fact, according to the Court, a medium or a process does not make it possible to infer, de facto, that the integrity of its content is ensured. The presumption of integrity is the presumption that “the technology used by its medium ensures its integrity”9. Thus, the exemption from proving authenticity applies “where the medium or technology used makes it possible to ensure the integrity of the document”10. The presumption of integrity does not apply to content. Without intrinsic proof of the integrity of the technological document by means of metadata or convincing documentation, the party seeking to present the document as evidence must establish separate proof of its authenticity, as set out in articles 2855 and 2874 CCQ and article 5 al. 3 of the LFIT Act.

Proof of authenticity

The decision of the Court of Appeal also confirms that technological documents do not benefit from a presumption of authenticity11. However, when accompanied by their metadata, they will satisfy the requirement of authenticity. The judgment also states that authenticity consists of two components: “(1) qualities related to the details regarding production and (2) qualities related to information”12. In the matter at hand, since the Appellant’s technological documents did not have the intrinsic documentation to ensure the integrity of their medium, proof of authenticity was required.

Copy / Transfer

This judgment reiterates13 the distinction between the copy, made on the same medium (art. 
12 to 15 of the LFIT Act), and the transfer, made on a medium based on a different technology (art. 17 and 18 LFIT Act). Having transferred the information from audio cassette to CD, the Appellant had to document the transfer “so that it can be proven, as needed, that the document resulting from the transfer contains the same information as the source document”14. Since no element could help demonstrate that the recordings resulting from the transfer contained the same information, the Court concluded that the recordings on CD did not have the same legal value as those contained on cassette. The refusal to admit the recordings into evidence was therefore confirmed by the Court of Appeal.

Summary of principles:

• An audio recording on magnetic tape is a technological document;
• It may be a material element of proof or a testimony, depending on the content of the recording;
• A technological document “must be considered a document whose medium uses information technologies, whether this medium is analog or digital”15;
• The presumption of integrity applies to the medium of the technological document; the content of a technological document does not benefit from the presumption of integrity;
• When an audio recording is presented as testimony or as a material element, a separate proof of authenticity is required when the medium or technology used does not make it possible to confirm that the integrity of the medium is ensured;
• In order to establish proof of authenticity, a party must demonstrate the details related to the production and content of a technological document; and
• The reproduction of a document may be made by copy (on a medium based on the same technology) or by transfer (on a medium based on a different technology), provided it is proven that the method of transfer does not affect the integrity of the document.

1. Act to Establish a Legal Framework for Information Technologies, CQLR, c. 
C-1.1.
2. Civil Code of Québec, C.C.Q. 1991, c. 64 (the “CCQ”).
3. Benisty v. Kloda, 2018 QCCA 608.
4. Ibid., par. 126.
5. The Court essentially refers to the thesis defended by authors Vincent Gautrais and Patrick Gingras: Vincent GAUTRAIS and Patrick GINGRAS, « La preuve des documents technologiques », in Barreau du Québec - Service de la Formation continue, Congrès annuel du Barreau du Québec, Montréal, 2012, online: https://edoctrine.caij.qc.ca/congres-du-barreau/2012/1755866973 (page consulted on April 24, 2018), p. 41.
6. Vincent GAUTRAIS and Patrick GINGRAS, ibid., p. 41.
7. Benisty v. Kloda, 2018 QCCA 608, par. 80.
8. Ibid., par. 93.
9. Ibid., par. 100.
10. Ibid., par. 103.
11. Ibid., par. 95.
12. Ibid., par. 106.
13. See for example: Directeur des poursuites criminelles et pénales v. 3341003 Canada inc. (Pizzédélic Restaurant), 2015 QCCQ 8159; Tabert v. Equityfeed Corporation, 2017 QCCS 3303; B.L. v. Maison sous les arbres, 2013 QCCAI 150; Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
14. Act to Establish a Legal Framework for Information Technologies, CQLR, c. C-1.1, art. 17.
15. Benisty v. Kloda, 2018 QCCA 608, par. 119.

  • The Superior Court of Québec analyses the exception allowing the use
    of a work protected by copyright for the purpose of news reporting

In Cedrom-SNi inc. v. Dose Pro inc. (“Cedrom-SNi”), the Superior Court of Québec rendered a decision which, although issued at the interlocutory stage, is of interest to Canada’s media and entertainment industry, since it is one of the rare decisions analysing the criteria for applying the exception allowing the use of a work for the purpose of news reporting1. In Québec, the Court of Québec (Small Claims Division) has discussed this issue a few times, although without an in-depth analysis of the applicable criteria.2 Cedrom-SNi is the first case in which the Superior Court conducts such an analysis.

The facts

La Presse, Le Devoir and Le Soleil publish articles by their journalists online, making them available to the public. These three print media companies authorized Cedrom-SNi, under an exclusive licence, to reproduce and distribute their publications electronically for media monitoring purposes. Without being authorized to do so and without paying the plaintiffs, La Dose Pro began offering its customers, for a fee, press reviews reproducing the full titles and beginning lines of articles published by La Presse, Le Devoir and Le Soleil. La Dose Pro’s media review named the newspaper which had published the article as well as the date and time of publication, and provided a link allowing readers to access the complete article on the newspaper’s website. However, according to the evidence, La Dose Pro’s customers almost never visited the newspapers’ websites. The names of the journalists were generally not indicated and La Dose Pro did not create any content. Claiming that their copyright was being infringed, La Presse, Le Devoir, Le Soleil and Cedrom-SNi inc. applied for an injunction to prevent La Dose Pro from reproducing and posting any article found on their respective websites.

The law

On July 24, 2017, Justice François P. 
Duprat issued a judgment on the application for an interlocutory injunction.3 In it, he analysed two main issues of interest respecting copyright.

The first was whether the titles and beginning lines of the articles published by La Presse, Le Devoir and Le Soleil were protected by copyright. The protection of a work under the Copyright Act (the “Act”) gives the author the sole right to produce or reproduce the entire work or any substantial part of it.4 Conversely, the author cannot claim the exclusive right to reproduce a part of his work that is not substantial. This is what La Dose Pro argued in this case, claiming that the title and beginning lines of an article (which comprise only one to four sentences) do not constitute a substantial part of the work, which is the complete article.

Referring to the leading case of Cinar Corporation v. Robinson5 (“Cinar”), the Court followed the teachings of the Supreme Court of Canada, which had ruled that what constitutes a substantial part of a work must be analysed according to a “qualitative” approach (based on originality) rather than a “quantitative” one. As a general rule, a substantial part of a work is a part which represents a significant portion of the author’s skill and judgment as expressed in the work. The Court held that the concept of “skill” includes relying on personal knowledge or an acquired aptitude or practised ability, while the concept of “judgment” involves a capacity for discernment or the ability to form an opinion or evaluation by comparing different possible options in producing the work, as described by the Supreme Court in CCH.6 The combination of skill and judgment thus implies some intellectual effort.

Based on these principles, the Court ruled that the thought and work required to write the title and beginning lines of an article constitute creative work designed to catch the reader’s attention, where nothing is left to chance. 
In this sense, La Dose Pro reproduced a substantial part of the work. The fact that La Dose Pro’s clients almost never visit the La Presse, Le Devoir and Le Soleil websites confirms the importance of the title and beginning lines of an article, as they are generally enough to let the reader know what the article is about.

The second issue analysed by the Court was whether La Dose Pro’s use of the titles and beginning lines of the articles constituted fair dealing permitted under the Act. The Act sets out several exceptions allowing uses of a work protected by copyright that would otherwise constitute infringement. These exceptions may apply where a substantial part of a work is used for the purpose of research, private study, education, parody or satire, criticism, review or news reporting.7 To take advantage of an exception, the user must be able to demonstrate that the work is used for one of the purposes under the Act, which are interpreted broadly, and that the use is fair. For the exception of fair dealing for the purpose of criticism or news reporting to apply, the person reproducing the work must also mention its source and author.

In this case, La Dose Pro argued that its actions constituted fair dealing of the works of La Presse, Le Devoir and Le Soleil for the purpose of news reporting under section 29.2 of the Act. After analysing the facts, the Court held that La Dose Pro reproduced the titles and beginning lines of articles other than in a news reporting context: La Dose Pro did not provide any comments or discussion for the purpose of making the facts described in the articles known. According to the Court, this did not constitute news reporting. The Court also noted that La Dose Pro only rarely named the authors of the articles which it reproduced and distributed electronically, although their names were available on the newspapers’ websites. 
As to fairness, the Superior Court relied on the six factors applied by the Supreme Court in CCH 8 as the framework for its analysis of the facts: the purpose of the dealing, the character of the dealing, the amount of the dealing, alternatives to the dealing, the nature of the work, and the effect of the dealing on the work.

Regarding the first factor, the Superior Court held that La Dose Pro’s true goal was to generate a profit, not to inform the public, since the excerpts were only available to its customers and did not generate traffic to the La Presse, Le Devoir or Le Soleil articles. As to the character of the dealing, multiple excerpts from the articles were broadly disseminated, since many employees of the same customer could receive the media review. According to the Court, this weighed in favour of unfairness. With respect to the third factor, the amount of the dealing, the Court noted that La Dose Pro reproduced only a minimal part of the works, i.e. the title and beginning lines; however, the Court reiterated its conclusion from the first part of the test that the title and beginning lines represent a substantial part of the works. Regarding the fourth factor, the Court was of the view that there was an alternative to the dealing, since La Dose Pro could have created original content itself. The fifth factor involves the nature of the work. Under this criterion, the Court must determine whether the use of the work furthers the purposes and aims of copyright. On this point, the Court was of the view that, although it is in the newspapers’ interest that their articles be widely distributed to the public, the distribution in question did not increase traffic to their websites. Regarding the last factor, the effect of the dealing on the work, the Court held that since the use did not generate additional traffic to the websites, it did not generate any revenues for the newspapers. 
After analysing all the factors, the Court held that the use of the titles and beginning lines in this case was unfair. In its opinion, La Dose Pro’s main motivation was to make a profit by exploiting the newspapers’ business model of allowing free access to the works and their reproduction.

Conclusion

Many decisions discuss the issue of what constitutes the reproduction of a substantial part of a work. Although the Cedrom-SNi decision was rendered at the interlocutory stage and does not change the state of the law, it is a relevant example of how this issue applies in the context of new technology.

1. Cedrom-SNi inc. v. Dose Pro inc., 2017 QCCS 3383.
2. Saad v. Le Journal de Montréal, 2017 QCCQ 122, paras. 29 to 31; Clinique de lecture et d’écriture de La Mauricie inc. v. Groupe TVA inc., 2008 QCCQ 4097 (CanLII), paras. 14 and 15.
3. An interlocutory judgment only settles the dispute pending a final judgment. It is based on the colour of right rather than the demonstration of a clear right, which will be made at the trial on the merits.
4. R.S.C. 1985, c. C-42, s. 3.
5. Cinar Corporation v. Robinson, [2013] 3 SCR 1168, 2013 SCC 73.
6. CCH Canadian Ltd. v. Law Society of Upper Canada, [2004] 1 SCR 339, 2004 SCC 13.
7. Copyright Act, supra, note 4, ss. 29 to 29.2.
8. CCH Canadian Ltd. v. Law Society of Upper Canada, supra, note 6.

    Read more
  • Intellectual Property and Artificial Intelligence

    Although artificial intelligence has been evolving constantly in the past few years, the law sometimes has difficulty keeping pace with such developments. Intellectual property issues are especially important: businesses investing in these technologies must be sure that they can take full advantage of the commercial benefits such technologies provide. This newsletter provides an overview of the various forms of intellectual property applicable to artificial intelligence. The initial instinct of many entrepreneurs would be to patent their artificial intelligence processes. However, although in some instances such a course of action would be an effective method of protection, obtaining a patent is not necessarily the most appropriate form of protection for artificial intelligence or for software technologies generally. Since the major Supreme Court of the United States decision in Alice Corp. v. CLS Bank International1, it is acknowledged that merely implementing an abstract concept in a computer environment does not suffice to make it patentable. For instance, in light of that decision, a patent that had been issued for an expert system (a form of artificial intelligence) was subsequently invalidated by a U.S. court.2 In Canada, case law has yet to deal specifically with artificial intelligence systems. However, the main principles laid down by the Federal Court of Appeal in Schlumberger Canada Ltd. v. Canada (Commissioner of Patents)3 are still relevant to the topic. In that case, it was decided that a method of collecting, recording and analyzing data using a computer programmed on the basis of a mathematical formula was not patentable. 
However, in a more recent ruling, the same Court held that a data-processing technique may be patentable if it “[…] is not the whole invention but only one of a number of essential elements in a novel combination.”4 The unpatentability of an artificial intelligence algorithm in isolation is therefore to be expected. In Europe, according to Article 52 of the 1973 European Patent Convention, computer programs are not patentable. Thus the underlying programming of an artificial intelligence system would not be patentable under this legal system. Copyright is perhaps the most obvious form of intellectual property for artificial intelligence. Source codes have long been recognized as “works” within the meaning of the Canadian Copyright Act and in similar legislation in most other countries. Some jurisdictions have even enacted laws specifically aimed at software protection.5 On this issue, an earlier Supreme Court of Canada ruling in Apple Computer, Inc. v. Mackintosh Computers Ltd6 is of some interest: In that case, the Court held that computer programs embedded in ROM (read only memory) chips are works protected by copyright. A similar conclusion was reached earlier by a US Court.7 These decisions are meaningful with respect to artificial intelligence systems because they extend copyright protection not only to the codes programmed in complex languages or on advanced artificial intelligence platforms but also to the resulting object code, even on electronic media such as ROM chips. Copyright however does not protect ideas or the general principles of a particular code; it only protects the expression of those ideas or principles. In addition to copyright, the protection afforded by trade secrets should not be underestimated. More specifically, in the field of computer science, it is rare for customers to have access to the full source code. 
Furthermore, in artificial intelligence, source codes are usually quite complex, and it is precisely this technological complexity that contributes to their protection.8 This approach is particularly appealing for businesses providing software as a remote service. In these cases, users only have access to an interface, never to the source code or the compiled code. It is therefore almost impossible to reverse engineer such technology. However, when an artificial intelligence system is protected only as a trade secret, there is always the risk that a leak originating with one or more employees will allow competitors to learn the source code, its structure or its particularities. It would be nearly impossible to prevent a source code from circulating online after such a leak. Companies may attempt to bolster the protection of their trade secrets with confidentiality agreements, but unfortunately this is insufficient where employees act in bad faith or in cases of industrial espionage. It would therefore be wise to implement knowledge-splitting measures within a company, so that only a restricted number of employees have access to all the critical information. Incidentally, it would be strategic for an artificial intelligence provider to make sure that its customers highlight its trademark, as in the “Intel Inside” cooperative marketing strategy, to promote its system to potential customers. In the case of artificial intelligence systems sold commercially, it is also important to consider the intellectual property in the learning outcomes that result from their use. This raises the issue of ownership. Does a database generated by an artificial intelligence system developed by a software supplier, while the system is being used by one of its customers, belong to the supplier or to the customer? Often, the contract between the parties will govern the situation. 
However, a business may legitimately wish to retain the intellectual property in the databases generated by its internal use of the software, specifically where it provides the system with its operational data or where it “trains” the artificial intelligence system through interaction with its employees. The desire to maintain the confidentiality of databases resulting from the use of artificial intelligence suggests that they are assimilable to trade secrets. However, whether such databases are considered works in copyright law would be determined on a case-by-case basis. The court would also have to determine whether the databases are the product of the exercise of the skill and judgment of one or more authors, as required by Canadian jurisprudence in order to constitute “works”.9 Although situations where employees “train” an artificial intelligence system are more readily assimilable to an exercise of skill and judgment, cases where databases are constituted autonomously by a system could escape copyright protection: “No copyright can subsist in […] data. The copyright must exist in the compilations analysis thereof”.10 In addition to the issues raised above, there is the more prospective issue of inventions created by artificial intelligence systems. So far, such systems have been used to identify research areas with opportunities for innovation. For example, data mining systems are already used to analyze patent texts, ascertain emerging fields of research, and even find “available” conceptual areas for potential patents.11 Artificial intelligence systems may be used in coming years to mechanically draft patent applications, including patent claims covering potentially novel inventions.12 Can artificial intelligence have intellectual property rights, for instance, with respect to patents or copyrights? 
This is highly doubtful given that current legislation attributes rights to inventors and creators who must be natural persons, at least in Canada and the United States.13 The question then arises: would the intellectual property in the invention be granted to the designers of the artificial intelligence system? Our view is that the law is currently ill-suited to this question because, historically, intellectual property was granted in the area of patents to the inventive person and in the area of copyright to the person who exercised skill and judgment. We also query whether a patent would be invalidated, or a work enter the public domain, on the ground that a substantial portion of it was generated by artificial intelligence (which is not the case in this newsletter!). Until the law adapts, lawyers should familiarize themselves with the underlying concepts of artificial intelligence, and conversely, IT professionals should familiarize themselves with the concepts of intellectual property. For entrepreneurs who design or use artificial intelligence systems, constant consideration of intellectual property issues is essential to protect their achievements. Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal particularities, particularly the various branches and applications of artificial intelligence that will rapidly appear in all businesses and industries.   573 U.S._, 134 S. Ct. 2347 (2014). Vehicle Intelligence and Safety v. Mercedes-Benz, 78 F. Supp.3d 884 (2015), affirmed on appeal, Federal Circuit, No. 2015-1411 (U.S.). [1982] 1 F.C. 845 (F.C.A.). Canada (Attorney General) v. Amazon.com, inc., [2012] 2 FCR 459, 2011 FCA 328. For example, in Brazil: Lei do Software No. 
9.609 of February 19, 1998; in Europe: Directive 2009/24/EC on the legal protection of computer programs. [1990] 2 SCR 209, 1990 CanLII 119 (SCC). Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983) (U.S.). Keisner, A., Raffo, J., & Wunsch-Vincent, S. (2015). Breakthrough technologies: Robotics, innovation and intellectual property (No. 30). World Intellectual Property Organization, Economics and Statistics Division. CCH Canadian Ltd. v. Law Society of Upper Canada, 2004 SCC 13, [2004] 1 SCR 339. See, for example: Geophysical Service Incorporated v. Canada-Nova-Scotia Offshore Petroleum Board, 2014 FC 450. See, for example: Lee, S., Yoon, B., & Park, Y. (2009). An approach to discovering new technology opportunities: Keyword-based patent map approach. Technovation, 29(6), 481-497; Abbas, A., Zhang, L., & Khan, S. U. (2014). A literature review on the state-of-the-art in patent analysis. World Patent Information, 37, 3-13. Hattenbach, B., & Glucoft, J. (2015). Patents in an Era of Infinite Monkeys and Artificial Intelligence. Stan. Tech. L. Rev., 19, 32. Supra, note 7.

    Read more
  • Deceptive Online Marketing Practices: Intermediaries, what is your legal exposure?

    In recent decades, online advertising has become the single most efficient and interactive way to reach consumers and assess their behaviour. While television and print audiences continue to dwindle and marketing strategies focused on these media are less able to measure and assess performance, online advertising targets a growing market whose technological medium allows for the direct measurement of a marketing campaign’s success. These changes in the marketing world, exciting and new as they may be, pose an important set of legal risks. In using and displaying online ads, merchants and intermediaries should be wary of consumer protection and competition laws, both provincial and federal, so as to avoid unpleasant surprises in the form of costly sanctions and lawsuits. The law may not have evolved as much as the technologies, but its broad language can adapt to the modern reality so as to protect those who receive the merchant’s new messages. The two main types of online marketing are Search Engine Marketing (“SEM”) and Social Media Marketing (“SMM”). Search engine companies index website content to organize and present the available information in an understandable format. Merchants offering products at retail can place themselves at the top of these rankings by targeting specific keywords that consumers search for. SMM is a form of display advertising that allows advertisers to present their services in an engaging manner on various popular social media platforms. This targeted conversation with consumers increases brand awareness and provides insights and feedback. Both SEM and SMM are advertising formats governed by law. Provincial legislation The Consumer Protection Act1 of Québec (“CPA”) regulates and governs advertising activities in the Province of Québec. Namely, the CPA prohibits false or misleading advertising. The provisions of the CPA are aimed at both merchants and the actors of the advertising industry. 
The CPA indicates that “no merchant, manufacturer or advertiser may, by any means whatever, make false or misleading representations to a consumer.”2 This prohibition applies to all media including print, radio and television, the Internet being no exception. The Province also enacted the Act to establish a legal framework for information technology3 (“LCCJTI”), in force since 2001, which provides for the liability of online intermediaries such as search engines and website hosts, in a context that is not specific to advertising. Indeed, Justice Rochon in the Court of Appeal case of Prud’homme c. Rawdon4 explains that while “a contributory fault may be committed by third parties who communicate, broadcast, or host the information [...] sections 22, 26, 36 and 37 of the Act to establish a legal framework for information technology (R.S.Q., c. C-1.1) appear [...] to reduce if not remove certain third parties from any liability.” Section 22 establishes that a non-search engine host will be exempt from liability unless it has knowledge that the information it stores is being used for illegal activity or it does not promptly act to impede access to such illegal documentation. Similarly, a search engine will be liable if it has knowledge that the service it provides enables illegal activity and it does not promptly cease to provide such a service to the people it knows are engaged in that activity. Either way, the determining factor is knowledge. Section 27 of this same Act states that: A service provider, acting as an intermediary, that provides communication network services or who stores or transmits technology-based documents on a communication network is not required to monitor the information communicated on the network or contained in the documents or to identify circumstances indicating that the documents are used for illicit activities. 
As such, knowledge is not presumed, and hence there is an implicit need to notify the intermediary of the existence of such illicit content. Once such notice is given, the intermediary, as defined in section 22, must act promptly to take down the content or limit access to it. Federal legislation The Competition Act5 (“CA”) regulates most business conduct in Canada, its main purpose being to prevent anti-competitive practices in the marketplace. The CA prohibits false or misleading representation and deceptive marketing practices in promoting the supply of a product or any business interest. Moreover, persons who “caused the representation to be made” are held liable for false or misleading representations or deceptive practices. This implies that liability is imposed not only on the person who crafts misleading or false advertisements, but also on the person who permits a representation to be made or sent. The “Enforcement Guidelines – Application of the Competition Act to Representations on the Internet” mention that, in the online environment, the Competition Bureau will be called upon to consider the respective roles of the different intermediaries involved in advertising on the Internet. It is further explained that: [i]n its enforcement efforts, the Bureau focuses on the party who “causes” the representation to be made. Determining causation requires an analysis of the facts to ascertain which player possesses decision-making authority or control over content and to assess the nature and degree of their authority and control.6 [our emphasis] Thus, the level of liability attributed to a given party will largely depend on the level of control they have over the content and whether they played a part in deciding whether the ad ran or not. Under the CA, there are two adjudicative regimes which sanction false or misleading representations: a civil track and a criminal track. 
The civil regime applies to most instances of misleading representations and deceptive marketing practices since the burden of proof is lighter. The general criminal process, however, covers “the most egregious matters” and requires that a component of criminal intent be demonstrated.7 Potential liability for advertising generators Merchant The merchant is the party that has the authority to decide whether an ad is run or not. As such, it is usually easiest to attribute liability to the merchant, who is the party most often held liable for deceptive marketing practices, be it with respect to the CPA or the CA. Media planning agency The media planning agency can have a dual role. That is, it may act as a creative agency that creates the advertisement (liable under the CA) and/or may assist an advertiser in determining which media to use, be it television programs, newspapers, bus-stop posters, in-store displays, banner ads on the web, or a flyer on Facebook. The liability of an Internet media planning agency will naturally depend on the exact role it plays in the advertisement. In terms of the criterion of authority and control, if the agency acts as “the creative” and is responsible for the ad content, then it is likely to be held liable for false or misleading representation. If, on the contrary, the media planning agency is solely responsible for speculating on the viewer demographics and accordingly elaborating strategies as to which media would be the most effective to run ads in, then it is not likely to be held liable for deceptive marketing practices. As the agency’s participation increases, so does its duty of care.8 Participation, in the eyes of the Federal Trade Commission and the courts, means that the agency carries out the will of the advertiser.9 Ultimately, whether an agency’s participation is “active” depends on a case-by-case analysis.10 There could also be instances where the agency would be liable to the merchant. 
Potential liability for advertising disseminators Media placement agency The media placement agency, also known as media buyer, is responsible for the negotiation and placement of the media campaign. Its role includes optimizing and evaluating the ad’s effectiveness both during and after the advertising campaign’s completion. Additionally, the media placement agency generates added value by either negotiating lower rates with the host or adding layers of behavioural or location-based targeting through ad-platforms (typically not liable). Website or web page host The host, also known as the publisher, is an entity that owns a web page or a website and that, in exchange for some economic compensation, is willing to publish ads of other parties in some spaces of its page or site. The LCCJTI establishes that a non-search engine host will be exempt from liability unless it has knowledge that the information it stores is being used for illegal activity or it does not promptly act to impede access to such illegal documentation. Similarly, a search engine will be liable if it has knowledge that the service it provides enables illegal activity and it does not promptly cease to provide such a service to the people it knows are engaged in that activity. Either way, the determining factor is knowledge. With respect to the CA, a host may benefit from the publisher’s defense and not be held liable in a civil suit provided it does not knowingly or recklessly partake in or allow false or misleading advertising. The take-away In navigating new online marketing strategies, one should bear in mind that as efficient as online advertising can be, it has also significantly contributed to a rise in the potential for false or misleading representations. The threshold in assessing false or misleading representations is particularly low, as it is evaluated from the average consumer’s perspective, i.e. 
a “credulous and inexperienced” consumer.11 Although the various players in the marketing world are no strangers to the concept of false or misleading advertising, they should be cautious in using new forms of marketing so as not to go beyond this low threshold. Indeed, there are certain considerations that are specific to the Internet medium, which involve among other things, the speed and efficiency with which consumers can perceive ads. One should also be wary of legislative changes regarding consumer protection and competition law. For instance, Bill 13412 has recently set out to amend the CPA so as to prohibit merchants from “falsely or misleadingly representing to consumers that credit may improve their financial situation or that credit reports prepared about them will be improved.”13 There is a considerable volume of credit offering advertising that circulates via SEM and SMM. Advertisers should be cautious and make sure they respect these measures once they are enacted as well as the other legislative tools mentioned in this publication. R.S.Q., c. P-40.1. Ibid., s. 219. R.S.Q., c. C-1.1. 2010 QCCA 584, para 75 [Unofficial English translation]. R.S.C. 1985, c. C-34. Innovation Government of Canada, “Application of the Competition Act to Representations on the Internet”, (October 16, 2009). Innovation Government of Canada, “Misleading Representations and Deceptive Marketing Practices: Choice of Criminal or Civil Track under the Competition Act”, (September 22, 1999). Kelley Drye & Collier Shannon, “Ad Agency Liability” (2005) Ad Law Advisory. Ibid. Ibid. Richard v. Time Inc., 2012 SCC 8, [2012] 1 SCR 265, para 78. An Act mainly to modernize rules relating to consumer credit and to regulate debt settlement service contracts, high-cost credit contracts and loyalty programs, Bill 134 (Introduction – May 2, 2017), 1st Sess., 41st Legis (QC). Ibid., Explanatory Notes.

    Read more
  • “Like our Facebook page and you could win a tablet computer” - are you following the rules?

    Promotional contests are among the advertising activities favoured by businesses. In the age of social media, they are increasingly frequent and popular: “Win a trip down South!”, “Fantastic stroller to be won among everyone who likes our Facebook page!”. However, not everyone is aware of all the rules applicable to contests of this kind. It is important to know that in Canada and in Québec, a number of laws govern promotional contests. It is crucial that the rules for such contests be drafted in accordance with legislative requirements in order to minimize the risks of legal action and bad publicity. Legislation applicable throughout Canada “No purchase necessary” Under the Criminal Code, one may not require of a consumer that he or she purchase a product or service, or pay any other valuable consideration, to be entitled to participate in a promotional contest. The organizer of a contest must therefore include the statement “no purchase necessary” in the rules, and provide for another manner of participating, for example by way of a hand-written letter sent to the contest organizer. It is crucial to abide by the duties set forth in the Criminal Code, as offences are punishable by fine and even imprisonment. Skill-testing question The Criminal Code also provides that winners may not be determined by mere chance. It is in order to comply with this requirement that it has become common practice for organizers of promotional contests to require of a participant that he or she correctly answer a mathematical skill-testing question. Mandatory disclosures The Competition Act also contains provisions applicable to Canadian promotional contests aimed at ensuring fair competition. 
It is crucial that the contents of a promotional contest’s rules meet the requirements of the Act, for instance by disclosing the number and approximate value of the prizes, the regional distribution of the prizes, the type of skill-testing question required, details concerning the chances of winning, the date the contest closes, and any fact within the knowledge of the contest organizers that materially affects the chances of winning. Furthermore, when announcing your promotional contest (over the radio, on social media, at a product’s point-of-purchase, etc.), you must also disclose the specific items mentioned above. Penalties Note that a breach of the rules on adequate and fair disclosure in connection with the organization of a promotional contest can result in an order being issued against you, compelling you to comply with the applicable legal requirements, to issue a corrective notice and/or to pay an administrative monetary penalty of up to, for a first order, $750,000 in the case of an individual, or $10,000,000 in the case of a corporation. Legislation applicable in Québec In Québec, in addition to the legislation applicable throughout Canada, the Act respecting lotteries, publicity contests and amusement machines (the “Publicity Contests Act”) and the Rules respecting publicity contests provide for the application of a specific legal regime to most promotional contests (publicity contests) in the province. Contests in which the value of prizes exceeds $100 If you organize a promotional contest in which the total value of the prizes exceeds $100, you must comply with the requirements set out in the Rules respecting publicity contests. Many items must be included in the text of the contest rules and advertisements, some of which are identical to those required by the Competition Act. 
Moreover, before the publicity contest is launched, you must pay the duties owed to the Régie des alcools, des courses et des jeux (the Liquor, Racing and Gaming Board) (the “Board”). The amount of such duties is based on the total value of the prizes and the place of residence of the participants. You must also send to the Board a notice of the holding of a publicity contest within a time limit which will vary based on the total value of the prizes offered. Contests in which the value of prizes exceeds $2,000 If you organize a contest in which the total value of the prizes exceeds $2,000, several other rules apply, including the obligation to file with the Board the contest rules and the text of any advertisements within the prescribed time limits. You may also be compelled in certain cases to furnish security. You will also have to communicate with the Board if you want to change or cancel a contest after it has been launched. Note that the naming of a prize winner does not result in a release from the duties owed to the Board. For one thing, a written report must be filed with the Board within 60 days following the date on which the prize winner is named. Furthermore, certain documents enabling the Board to verify whether the contest has been properly carried on must be kept for 120 days following the date on which a winner is named. Contests directed at children One thing not to forget is that the Consumer Protection Act prohibits all advertising directed at children under 13 years of age, which includes publicity contests. French version Under the Charter of the French Language, the rules and the items required to be disclosed in the advertisements concerning the contest must be published in French, including in connection with publicity contests held exclusively on-line. Rules applicable to social media You are considering organizing a contest on Facebook, Instagram or Twitter? 
Note that many rules govern contests on these platforms and that these rules can be amended at any time. Hence the importance, before every contest launch, of reviewing the applicable rules, since certain social media platforms may unilaterally decide to shut down your business’s page if you do not comply with them. Conclusion The regime applicable to persons organizing promotional contests in Québec is particularly restrictive. Contest organizers are well advised, as a matter of precaution, to include provisions to protect themselves, in addition to all the items required by law. The running of promotional contests involves a great many pitfalls, which your legal advisor will help you successfully avoid.

    Read more
  • When artificial intelligence is discriminatory

    Artificial intelligence has undergone significant developments in the last few years, particularly in respect of what is now known as deep learning.1 This method builds on the neural networks that have been used for some years in machine learning. Deep learning, like any other form of machine learning, requires that an artificial intelligence system be exposed to a variety of situations so that it learns to react to situations similar to its previous experiences. In the context of business, artificial intelligence systems are used, among other things, to serve the needs of customers, either directly or by supporting employees’ interventions. The quality of the services that a business provides is therefore increasingly dependent on the quality of these artificial intelligence systems. However, one must not make the mistake of assuming that such a computer system will automatically perform its tasks flawlessly and in compliance with the values of the business or its customers. For instance, researchers at Carnegie Mellon University recently demonstrated that a system for presenting targeted advertising to Internet users systematically offered less well-paid positions to women than to men.2 In other words, this system behaved in what could be called a sexist way. Although the researchers could not pinpoint the origin of the problem, they were of the view that it was probably a case of the advertising placement supplier losing control over its automated system, and they noted the inherent risks of large-scale artificial intelligence systems. 
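The kind of disparity the Carnegie Mellon researchers measured can be detected with a simple audit over logged ad impressions. The following sketch is purely illustrative; the field names, figures and the audit itself are assumptions for this example, not data or code from the cited study:

```python
from collections import Counter

def ad_rates(impressions):
    """Rate at which each group was shown high-paying job ads.

    `impressions` is a list of (group, was_high_paying) tuples;
    returns a dict mapping each group to its high-paying-ad rate.
    """
    shown, high = Counter(), Counter()
    for group, was_high_paying in impressions:
        shown[group] += 1
        if was_high_paying:
            high[group] += 1
    return {g: high[g] / shown[g] for g in shown}

# Hypothetical log: the system shows high-paying ads far more often to men.
log = ([("men", True)] * 40 + [("men", False)] * 60
       + [("women", True)] * 10 + [("women", False)] * 90)
rates = ad_rates(log)
# rates["men"] is 0.40 and rates["women"] is 0.10: a gap of this size
# would warrant investigation, whatever its cause inside the system.
```

An operator running such an audit periodically could at least detect this kind of drift, even when, as the researchers note, the origin of the problem inside the system cannot be pinpointed.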
Various artificial intelligence systems have had similar failures in the past, demonstrating racist behaviour, even to the point of forcing an operator to suspend access to its system.3 In this respect, the European Union adopted in April 2016 a regulation on the processing of personal information which, except in certain specific cases, prohibits automated decisions based on certain personal data, including “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation […]”.4 Some researchers question how this regulation will apply, particularly where discrimination arises incidentally, without the operator of the artificial intelligence system intending it.5 In Québec, it is reasonable to believe that a business using an artificial intelligence system that acts in a discriminatory manner within the meaning of the Charter of Human Rights and Freedoms would be exposed to legal action, even in the absence of a specific regulation such as the European one. Indeed, the person responsible for an item of property such as an artificial intelligence system could incur liability for the harm or damage caused by the autonomous action of that property. Furthermore, the failure to put in place reasonable measures to avoid discrimination would most probably be taken into account in the legal analysis of such a situation. 
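For an operator who wants to avoid basing automated decisions on the categories of data enumerated in the regulation quoted above, one elementary precaution is to strip those fields from records before they reach the decision-making system. A minimal sketch, with field names invented for illustration:

```python
# Field names approximating the special categories listed in the regulation;
# the exact names are assumptions for this example, not a legal checklist.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions", "religious_beliefs",
    "trade_union_membership", "genetic_data", "biometric_data",
    "health", "sex_life", "sexual_orientation",
}

def strip_special_categories(record):
    """Return a copy of `record` without any special-category fields."""
    return {k: v for k, v in record.items() if k not in SPECIAL_CATEGORIES}

applicant = {"years_experience": 7, "health": "diabetic", "city": "Montreal"}
cleaned = strip_special_categories(applicant)
# cleaned keeps only "years_experience" and "city"
```

Note, however, that removing explicit fields does not remove proxies (a postal code may correlate with ethnic origin, for example), which is precisely why discrimination can arise incidentally, without the operator intending it.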
Accordingly, special vigilance is required when the operation of an artificial intelligence system relies on data already accumulated within the business, on data from third parties (particularly what is often referred to as big data), or on data fed to the artificial intelligence system by employees of the business or by its users during a “learning” period. All these data sources, which are incidentally subject to obligations under privacy laws, may be biased to varying degrees. The effects of biased sampling are neither new nor restricted to human rights; the phenomenon is well known to statisticians. During World War II, the U.S. Navy asked mathematician Abraham Wald to provide statistics on the parts of bomber planes that were hit most often, in order to determine which areas of the planes should be reinforced. Wald demonstrated that the data on the planes returning from missions was biased, as it did not take into account the planes shot down during those missions. The areas damaged on the returning planes did not need to be reinforced; rather, it was the areas showing no hits that needed reinforcement. In the context of business operations, an artificial intelligence system fed biased data may thus make erroneous decisions, with disastrous consequences for the business from a human, economic and operational point of view. For instance, if an artificial intelligence system undergoes learning sessions conducted by employees of the business, their behaviour will undoubtedly be reflected in the system’s own subsequent behaviour. This may be apparent in the judgments made by the artificial intelligence system in respect of customer requests, but also directly in its capacity to adequately solve the technical problems submitted to it. There is therefore a risk of perpetuating the problematic behaviour of some employees. 
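Wald's reasoning can be reproduced with a small simulation. The survival probabilities below are invented for illustration only; the point is that a dataset limited to returning planes inverts the correct conclusion:

```python
import random
from collections import Counter

random.seed(0)

AREAS = ["engine", "fuselage", "wings", "tail"]
# Assumed, illustrative odds that a hit in each area downs the plane:
# engine hits are far more often fatal.
FATAL_PROB = {"engine": 0.8, "fuselage": 0.1, "wings": 0.1, "tail": 0.2}

all_hits = Counter()       # what actually happened (unobservable at the time)
returned_hits = Counter()  # what the biased sample of survivors showed

for _ in range(10_000):
    area = random.choice(AREAS)          # each sortie: one hit, uniform area
    all_hits[area] += 1
    if random.random() > FATAL_PROB[area]:
        returned_hits[area] += 1         # only survivors are observed

# The returning planes show few engine hits -- not because engines are
# rarely hit (all_hits is roughly uniform), but because engine-hit planes
# rarely come back. A naive reading would reinforce the wrong areas.
```

The same inversion threatens any system trained only on the records that happened to survive a filtering process, whether the filter is enemy fire or an earlier business decision.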
Researchers at the Machine Intelligence Research Institute have proposed various approaches to minimize these risks and to align the machine learning of artificial intelligence systems with the interests of their operators.6 According to these researchers, it would be prudent to set cautious objectives for such systems, so as to prevent them from producing extreme or undesirable solutions. It would also be important to establish informed supervision procedures, through which the operator can ascertain that the artificial intelligence system is performing, as a whole, in a manner consistent with expectations. It follows that a business wishing to integrate an artificial intelligence system into its operations must take the implementation phase, during which the system will “learn” what is expected of it, very seriously. It will be important to discuss in depth with the supplier how its technology operates and performs, and to express as clearly as possible in the contract the business’s expectations for the system to be implemented. The implementation of the artificial intelligence system must be carefully planned and entrusted to trustworthy employees and consultants who are highly competent in the relevant tasks. As for the supplier of the artificial intelligence system, care must be taken to ensure that the data provided to it is not biased, inaccurate or otherwise defective, so that the performance objectives set out in the contract can reasonably be met, thus minimizing the risk of litigation arising from discriminatory or otherwise objectionable behaviour of the system. Such litigation can be not only expensive, but also harmful to the reputation of both the supplier and its customer. 
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on (pp. 598-617). IEEE. See also: Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112. Reese, H. (2016). Top 10 AI failures of 2016. The case of Tay, Microsoft’s system, was widely discussed in the media. Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Goodman, B., & Flaxman, S. (2016, June). EU regulations on algorithmic decision-making and a “right to explanation”. In ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). Taylor, J., Yudkowsky, E., LaVictoire, P., & Critch, A. (2016). Alignment for advanced machine learning systems. Technical Report 20161, MIRI.
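The supervision procedures discussed above can start with even a very simple audit. As a purely hypothetical sketch (the metric, threshold and audit log below are illustrative and not drawn from the article), one common convention is the “four-fifths rule”, which flags a system when the approval rate of the least-favoured group falls below 80% of that of the most-favoured group:

```python
def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Values below 0.8 are a conventional red flag (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of an AI system's decisions: (group label, decision)
log = ([("A", True)] * 80 + [("A", False)] * 20 +
       [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact_ratio(log)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for human review: outcomes differ markedly between groups")
```

Such a check is not a legal standard in itself; it is one way an operator could routinely verify that the system’s outcomes remain consistent with expectations before a pattern of discriminatory behaviour hardens into a liability.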

    Read more
  • Artificial intelligence and its legal challenges

Is there a greater challenge than writing a legal article on an emerging technology that does not yet exist in its absolute form? Artificial intelligence, through a broad spectrum of branches and applications, will impact corporate and business integrity, corporate governance, distribution of financial products and services, intellectual property rights, privacy and data protection, employment, civil and contractual liability, and a significant number of other legal fields. What is artificial intelligence? Artificial intelligence is “the science and engineering of making intelligent machines, especially intelligent computer programs”.1 Essentially, artificial intelligence technologies aim to allow machines to mimic “cognitive” functions of humans, such as learning and problem solving, so that they can carry out tasks normally performed by humans. In practice, these functions are achieved by accessing and analyzing massive data sets (also known as “big data”) using certain algorithms. As set forth in a report published by McKinsey & Company in 2013 on disruptive technologies, “[i]mportant technologies can come in any field or emerge from any scientific discipline, but they share four characteristics: high rate of technological change, broad potential scope of impact, large economic value that could be affected, and substantial potential for disruptive economic impact”.2 Despite the ongoing debate over the impact of artificial intelligence on humanity,3 the development of artificial intelligence has accelerated in recent years and we have witnessed some major breakthroughs. In March 2016, Google’s computer program AlphaGo beat world champion Go player Lee Sedol 4 to 1 in the ancient Chinese board game. This breakthrough reignited the world’s interest in artificial intelligence. 
Technology giants like Google and Microsoft, to name a few, have increased their investments in the research and development of artificial intelligence. This article will discuss some of the applications of artificial intelligence from a legal perspective and certain areas of law that will need to adapt - or be adapted - to the complex challenges brought by current and new developments in artificial intelligence. Legal challenges Artificial intelligence and its potential impacts have been compared to those of the Industrial Revolution, a form of transition to new manufacturing processes using new systems and innovative applications and machines. Health care Artificial intelligence certainly has a great future in the health care industry. Its ability to analyze massive amounts of data makes it a powerful tool to predict drug performance and to help patients find the right drug or dosage for their situation. For example, IBM’s Watson Health program “is able to understand and extract key information by looking through millions of pages of scientific medical literature and then visualize relationships between drugs and other potential diseases”.4 Some artificial intelligence features can also help verify whether a patient has taken his or her pills, through a smartphone application that captures and analyzes evidence of medication ingestion. In addition to privacy and data protection concerns, the potential legal challenges faced by artificial intelligence applications in the health care industry include civil and contractual liability. If a patient follows a recommendation made by an artificial intelligence system and it turns out to be wrong, who will be held responsible? 
It also raises legitimate and complex legal questions, combined with technological concerns, as to the reliability of artificial intelligence programs and software and how employees will deal with such applications in their day-to-day tasks. Customer services A number of computer programs have been created to converse with people via audio or text messages. Companies use such programs for their customer services or for entertainment purposes, for example on messaging platforms like Facebook Messenger and Snapchat. Although such programs are not necessarily pure applications of artificial intelligence, some of their features, actual or in development, could be considered artificial intelligence. When such computer programs are used to enter into formal contracts (e.g., placing orders, confirming consent, etc.), it is important to make sure the applicable terms and conditions are communicated to the individual at the end of the line or that a proper disclaimer is duly disclosed. Contract enforcement questions will inevitably be raised as a result of the use of such programs and systems. Financial industry and fintech In recent years, much research and development has been carried out in the robotics, computer and tech fields in relation to financial services and the fintech industry. The applications of artificial intelligence in the financial industry span a broad spectrum of branches and programs, from analyzing customers’ investing behaviours to analyzing big data to improve investment strategies and the use of derivatives. Legal challenges associated with artificial intelligence applications in the financial industry could relate, for example, to the consequences of malfunctioning algorithms. The constant interplay between human interventions and artificial intelligence systems, for example in a stock trading platform, will have to be carefully set up to avoid, or at least confine, certain legal risks. 
Autonomous vehicles Autonomous vehicles are also known as “self-driving cars”, although the vehicles currently permitted on public roads are not completely autonomous. In June 2011, the state of Nevada became the first jurisdiction in the world to allow autonomous vehicles to operate on public roads. Under Nevada law, an autonomous vehicle is a motor vehicle that is “enabled with artificial intelligence and technology that allows the vehicle to carry out all the mechanical operations of driving without the active control or continuous monitoring of a natural person”.5 Canada has not yet adopted any law legalizing autonomous vehicles. Among the significant legal challenges facing autonomous vehicles are the issues of liability and insurance. When a car drives itself and an accident happens, who should be responsible? (For additional discussion of this subject under Québec law, refer to the Need to Know newsletter, “Autonomous vehicles in Québec: unanswered questions” by Léonie Gagné and Élizabeth Martin-Chartrand.) Interesting arguments will also be raised regarding autonomous vehicles carrying on commercial activities in the transportation industry, such as the shipping and delivery of commercial goods. Liability regimes The fundamental nature of artificial intelligence technology is itself a challenge to contractual and extra-contractual liability. When a machine makes, or purports to make, autonomous decisions based on data provided by its users and additional data autonomously acquired from its own environment and applications, its performance and the end results can be unpredictable. 
In this context, Book Five of the Civil Code of Québec (CCQ) on obligations raises highly interesting and challenging legal questions in view of anticipated artificial intelligence developments.

Article 1457 of the CCQ states that: Every person has a duty to abide by the rules of conduct incumbent on him, according to the circumstances, usage or law, so as not to cause injury to another. Where he is endowed with reason and fails in this duty, he is liable for any injury he causes to another by such fault and is bound to make reparation for the injury, whether it be bodily, moral or material in nature. He is also bound, in certain cases, to make reparation for injury caused to another by the act, omission or fault of another person or by the act of things in his custody.

Article 1458 of the CCQ further provides that: Every person has a duty to honour his contractual undertakings. Where he fails in this duty, he is liable for any bodily, moral or material injury he causes to the other contracting party and is bound to make reparation for the injury; neither he nor the other party may in such a case avoid the rules governing contractual liability by opting for rules that would be more favourable to them.

Article 1465 of the CCQ states that: The custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault.

The issues of foreseeable or direct damages, depending on the liability regime, and of the “autonomous act of the thing” will inescapably spark interesting debates in the context of artificial intelligence applications in the near future. In which circumstances could the makers or suppliers of artificial intelligence applications, the end users, and the other parties benefiting from such applications be held liable - or not - in connection with the results produced by artificial intelligence applications and the use of such results? 
Here again, the link between human interventions - or the absence of human interventions - and artificial intelligence systems in the global chain of services, products and outcomes provided to a person will play an important role in the determination of such liability. Among the questions that remain unanswered: could autonomous systems using artificial intelligence applications be held “personally” liable at some point? And how are we going to deal with potential legal loopholes endangering the rights and obligations of all parties interacting with artificial intelligence? In January 2017, the Committee on Legal Affairs of the European Union (“EU Committee”) submitted a motion to the European Parliament calling for legislation on issues relating to the rise of robotics. In the recommendations of the EU Committee, liability law reform is identified as one of the crucial issues. It recommends that “the future legislative instrument should provide for the application of strict liability as a rule, thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behavior of a robot and the damage suffered by an injured party”.6 The EU Committee also suggests that the European Parliament consider implementing a mandatory insurance scheme and/or a compensation fund to ensure the compensation of victims. What is next on the artificial intelligence front? While scientists are developing artificial intelligence faster than ever in many different fields and sciences, some areas of the law may need to be adapted to deal with the associated challenges. It is crucial to be aware of the legal risks and to make informed decisions when considering the development and use of artificial intelligence. 
Artificial intelligence will have to learn to listen, to appreciate and understand concepts and ideas, sometimes without any predefined opinions or beacons, and be trained to anticipate, just like human beings (even if some could argue that listening and understanding remain difficult tasks for humans themselves). At some point, artificial intelligence developments will gain momentum when two or more artificial intelligence applications are combined to create a superior or ultimate artificial intelligence system. The big question is, who will initiate such a clever combination first: humans, or the artificial intelligence applications themselves? John McCarthy, What is artificial intelligence?, Stanford University. Disruptive technologies: Advances that will transform life, business, and the global economy, McKinsey Global Institute, May 2013. Alex Hern, Stephen Hawking: AI will be “either best or worst thing” for humanity, The Guardian. Eugene Borukhovich, How will artificial intelligence change healthcare?, World Economic Forum. Nevada Administrative Code Chapter 482A - Autonomous Vehicles, NAC 482A.010. Committee on Legal Affairs, Draft report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), article 27.

    Read more
  • Artificial intelligence: contractual obligations beyond the buzzwords

Can computers learn and reason? If so, what are the limits of the tasks they can be given? These questions have been the subject of countless debates as far back as 1937, when Alan Turing published his work on computable numbers1. Many researchers have devoted themselves to developing methods that would allow computers to interact more easily with human beings and to integrate processes for learning from the situations they encounter. Generally speaking, the aim was to have computers think and react as a human being would. In the early 1960s, Marvin Minsky, a noted MIT researcher, outlined what he regarded as the steps along the path to artificial intelligence2. The power of today’s computers and the capacity to store phenomenal amounts of information now allow artificial intelligence to be integrated into business and daily life, using processes known as “machine learning”, “data mining” or “deep learning”, the last of which has undergone rapid development in recent years3. The use of artificial intelligence in business raises many legal issues that are of crucial importance when companies enter into contracts for the sale or purchase of artificial intelligence products and services. From a contractual perspective, it is important to properly frame the obligations and expectations of each party. For suppliers of artificial intelligence products, a major issue is their liability in the event of product malfunction. For example, could the designers of an artificial intelligence system used as an aid in making medical decisions be held liable, directly or indirectly, for a medical mistake resulting from erroneous information or suggestions given by the system? It may be appropriate to ensure that such contracts expressly require the professionals using such systems to maintain control over the results, regardless of the context in which the system is operating, be it medical, engineering or business management. 
In return, companies wishing to use such products must clearly frame their targeted objectives. This includes not only a stated performance objective for the artificial intelligence system, but also a definition of what would constitute product failure and the legal consequences thereof. For example, in a contract for the use of artificial intelligence in production management, is the objective to improve performance or to reduce specific problems? And what happens if the desired results are not achieved? Another major issue is the intellectual property in the data integrated and generated by a particular artificial intelligence product. Many artificial intelligence systems require a large volume of the company’s data in order to acquire the necessary learning “experience”. But who owns that data, and who owns the results of what the artificial intelligence system has learned? For example, for an artificial intelligence system to become effective, a company may have to supply an enormous quantity of data and invest considerable human and financial resources to guide its learning. Does the supplier of the artificial intelligence system acquire any rights to such data? Can it use what its artificial intelligence system learned at one firm to benefit its other clients? In extreme cases, this would mean that the experience acquired by a system in a particular company would benefit its competitors. Where the artificial intelligence system is used in applications targeting consumers or company employees, the issues of confidentiality of the data used by the system and protection of the privacy of such persons should not be overlooked. The above are some of the contractual issues that must be considered and addressed to prevent problems from arising. 
Lavery Legal Lab on Artificial Intelligence (L3AI) We anticipate that within a few years, all companies, businesses and organizations, in every sector and industry, will use some form of artificial intelligence in their day-to-day operations to improve productivity or efficiency, ensure better quality control, conquer new markets and customers, implement new marketing strategies, as well as improve processes, automation and marketing or the profitability of operations. For this reason, Lavery created the Lavery Legal Lab on Artificial Intelligence (L3AI) to analyze and monitor recent and anticipated developments in artificial intelligence from a legal perspective. Our Lab is interested in all projects pertaining to artificial intelligence (AI) and their legal peculiarities, particularly the various branches and applications of artificial intelligence which will rapidly appear in companies and industries. The development of artificial intelligence, through a broad spectrum of branches and applications, will also have an impact on many legal sectors and practices, from intellectual property to protection of personal information, including corporate and business integrity and all fields of business law. In our following publications, the members of our Lavery Legal Lab on Artificial Intelligence (L3AI) will more specifically analyze certain applications of artificial intelligence in various sectors and industries. Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London mathematical society, 2(1), 230-265. Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49(1), 8-30. See: LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

    Read more
  • Artificial Intelligence and the 2017 Canadian Budget: is your business ready?

The March 22, 2017 Budget of the Government of Canada, through its “Innovation and Skills Plan” (http://www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf), states that Canadian academic and research leadership in artificial intelligence will be translated into a more innovative economy and increased economic growth. The 2017 Budget proposes renewed and enhanced funding of $35 million over five years, beginning in 2017-2018, for the Canadian Institute for Advanced Research (CIFAR), which connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics including artificial intelligence and deep learning. These measures are in addition to a number of interesting tax measures that support the artificial intelligence sector at both the federal and provincial levels. In Canada and in Québec, the Scientific Research and Experimental Development (SR&ED) Program provides a twofold benefit: SR&ED expenses are deductible from income for tax purposes, and an SR&ED investment tax credit (ITC) is available to reduce income tax. In some cases, the remaining ITC can be refunded. In Québec, a refundable tax credit is also available for the development of e-business, where a corporation operates mainly in the field of computer system design or software publishing and its activities are carried out in an establishment located in Québec. The 2017 Budget aims to improve the competitive and strategic advantage of Canada in the field of artificial intelligence and, therefore, that of Montréal, a city already enjoying an international reputation in this field. It recognises that artificial intelligence, despite the debates over ethical issues that currently stir up passions within the international community, could help generate strong economic growth by improving the way in which we produce goods, deliver services and tackle all kinds of social challenges. 
The Budget also adds that artificial intelligence “opens up possibilities across many sectors, from agriculture to financial services, creating opportunities for companies of all sizes, whether technology start-ups or Canada’s largest financial institutions”. This influence of Canada on the international scene cannot be achieved without government support for research programs and without our universities contributing their expertise. This Budget is therefore a step in the right direction to ensure that all activities related to artificial intelligence, from R&D to marketing, as well as design and distribution, remain here in Canada. The 2017 Budget provides $125 million to launch a Pan-Canadian Artificial Intelligence Strategy for research and talent, to promote collaboration between Canada’s main centres of expertise and to reinforce Canada’s position as a leading destination for companies seeking to invest in artificial intelligence and innovation.

    Read more
  • Public display of trade marks in a language other than French – Coming into force of the regulatory amendments

On May 4, 2016, a draft regulation amending the Regulation respecting the language of commerce and business was published in the Gazette officielle du Québec (see our bulletin on this subject). On November 3, 2016, the Quebec government announced that the amendments to the Regulation respecting the language of commerce and business (the “Regulation”) would come into force on November 24, 2016. These changes are as announced in the May 2016 draft regulation. The purpose of the amendments is to ensure the presence of French when a trade mark in a language other than French is displayed outside a business, as currently allowed under section 25(4) of the Regulation. The amendments do not go as far as requiring trade mark owners to translate their marks. However, the “sufficient presence of French” required under the new section 25.1 of the Regulation may consist of the addition of a slogan, a generic term or a description of the goods and services. The additional display need not be at the same location as the trade mark, but it must give French permanent visibility, similar to that of the mark, and be legible “in the same visual field” as the mark. The Regulation specifies, however, that the French elements to be added need not be the same size as the mark. As to the period provided for compliance with the new standards, businesses whose existing display contravenes the amended Regulation must bring it into conformity no later than November 24, 2019. However, any installation of a new display, or any replacement of a trade mark display, from November 24, 2016 onward will have to conform to the new requirements.

    Read more
  • Public display of trade marks in a language other than French

In 20141, major retailers Best Buy Canada Ltd., Costco Wholesale Canada Ltd., Gap (Canada) Inc., Old Navy (Canada) Inc., Guess? Canada Corporation, Wal-Mart Canada Corp., Toys “R” Us Canada Ltd. and Curves International Inc. filed a motion for declaratory judgment before the Superior Court to determine whether a trade mark in the English language, without a registered French version, used for public display and in commercial advertising, had to be accompanied by a descriptive (generic) term in the French language in order to comply with the Charter of the French Language (“Charter”) and the Regulation respecting the language of commerce and business (“Regulation”). Mr. Justice Michel Yergeau of the Superior Court concluded that the public display of trade marks in a language other than French complied with the Charter and the Regulation provided that no French version of the trade mark was registered. The Attorney General appealed the decision. On April 27, 2015, the Court of Appeal of Quebec2 dismissed the appeal of the Attorney General of Quebec from the bench. Minister Hélène David reacted to the Court of Appeal’s decision by promising the adoption of a regulation to ensure the presence of French on the storefronts of businesses. On May 4, 2016, a draft regulation to amend the Regulation respecting the language of commerce and business was published in the Gazette officielle du Québec. The Minister responsible for the Protection and Promotion of the French Language, Luc Fortin, describes the draft regulation as a solution which “preserves the integrity of trade marks”. The proposed amendments consist in adding new sections 25.1 to 25.5 to the Regulation, which aim to ensure the presence of French when a trade mark in a language other than French is displayed outside a business, as currently allowed under section 25(4) of the Regulation. 
However, holders of trade marks will be required neither to translate their marks nor to insert a generic French term such as “store” or “café” into them, although some have already done so on a voluntary basis. Under the new section 25.1 of the Regulation, merchants will henceforth be required to ensure a “sufficient presence of French” on the site. This may consist of a slogan, a generic term, a description of their products and services, or any other term or indication. The additional display need not be at the same location as the trade mark, but it will be required to give French permanent visibility, similar to that of the trade mark, and be legible “in the same visual field” as the trade mark. It is to be noted, however, that since the Regulation does not specify a precise size for the French items to be added, such items will not be required to be predominant relative to the mark. Businesses whose current display does not comply with the new requirements under the Regulation will be required to comply within three years from the date the new provisions come into force. However, any installation of a new display, or replacement of the display of a trade mark, from the date on which the amended Regulation comes into force will be required to comply with the new requirements. Magasins Best Buy ltée c. Québec (Procureur général), 2014 QCCS 1427 (CanLII). Québec (Procureure générale) c. Magasins Best Buy ltée, 2015 QCCA 747 (CanLII).

    Read more