Information, Privacy and Defamation

Overview

At Lavery, we were the first major law firm to anticipate, almost 30 years ago, the cardinal importance that information was acquiring in our society. Whether it involves the right of access to governmental information, the protection of personal information, cross-border data flows, the use of information technology, respect for privacy, reputation and personal image, or the right to be forgotten, our seasoned lawyers in the information and privacy sector offer you a comprehensive perspective, drawing on the depth and breadth of their expertise and the broad range of services they provide.

Over the years, Mr. Doray, Ad.E., and his team have represented numerous public and private organizations in matters relating to the confidential nature of documents, the validity of certain governmental decisions, reputation and privacy. They act as legal counsel for many large corporations, professional orders, public organizations and media companies in important cases regarding administrative and constitutional law. Furthermore, they have represented various clients in defamation and invasion of privacy lawsuits.

Services

  • Legal opinions and dispute resolutions: protection of privacy and personal information in the public and private sectors in general, and with regard to employer-employee privacy issues in particular (protection of personal information, video surveillance, use of information technology, collection, use and communication of personal information prior to and during employment, etc.)
  • Legal opinions and dispute resolutions: access to provincial and federal governmental information
  • Legal opinions and dispute resolutions: solicitor-client privilege and litigation privilege
  • Legal opinions and dispute resolutions: freedom of the press, defamation, right to privacy and one's personal image, and protection of information sources
  • Compliance audits and risk management
  • Operational support in case of loss or theft of personal information
  • Representation before government and legislative bodies in matters relating to access to information and privacy policies and legislation
  • Relations with provincial and federal regulatory authorities, including the Commission d'accès à l'information, the Privacy Commissioner of Canada and the Office of the Information Commissioner of Canada
  • Protection of commercial, financial, technical and industrial information supplied to governments by businesses
  • Interpretation of the rules in matters relating to telemarketing and philanthropic solicitation
  • Class action suits in matters relating to the protection of personal information and privacy
  • Structuring online national and international commercial transactions
  • Management and organizational support for businesses and public bodies to ensure their compliance with the applicable rules governing the protection of personal information, namely when designing computer systems and Internet sites
  • Protection of privacy, personal image and reputation in connection with the use of new information technologies (Internet, social media and cloud computing)
  • Archiving and transferring documents electronically
  • Application of the new Canadian anti-spam legislation
  • Application of the Lobbying Transparency and Ethics Act and of the Lobbying Act
  • Application in Canada of the European directives regarding the processing of personal data, the U.S. Helms-Burton Act, the Patriot Act and ITAR
  • Advice relating to the right to be forgotten
  1. Data Anonymization: Not as Simple as It Seems

    Blind spots to watch for when anonymizing data

    Anonymization has become a crucial step in unlocking the value of data for innovation, particularly in artificial intelligence. But without a properly executed anonymization process, organizations risk financial penalties, legal action and serious reputational harm, with potentially significant consequences for their operations.

    Understanding the anonymization process

    What the law says

    Under Quebec’s Act respecting the protection of personal information in the private sector (the “Private Sector Act”) and the Act respecting Access to documents held by public bodies and the Protection of personal information (the “Access Act”), information concerning a natural person is considered anonymized if it irreversibly no longer allows the person to be identified directly or indirectly. Since anonymized information no longer qualifies as personal information, this distinction is of crucial importance. However, beyond this definition, neither Act provides details on how anonymization should actually be performed. To fill this gap, the government adopted the Regulation respecting the anonymization of personal information (the “Regulation”), which sets out the criteria and framework for anonymization, grounded in high standards of privacy protection.

    What organizations need to know before starting

    Under the Regulation, before beginning any anonymization process, organizations must clearly define the “serious and legitimate purposes” for which the data will be used. These purposes must comply with either the Private Sector Act or the Access Act, as applicable, and any new purpose must meet the same requirement. The process must also be supervised by a qualified professional with the expertise to select and apply appropriate anonymization techniques. This supervision ensures both the proper implementation of the chosen methods and the ongoing validation of technological choices and security measures.
    The four key steps of data anonymization

      • Depersonalization: The first step is to remove or replace all personal identifiers, such as names, addresses and phone numbers, with pseudonyms. It is essential to anticipate how different data sets might interact, in order to minimize the risk of re-identifying individuals through cross-referencing.
      • Preliminary risk assessment: Next comes a preliminary analysis of re-identification risks. This step relies on three main criteria: individualization (inability to isolate a person within a dataset), correlation (inability to connect datasets concerning the same person) and inference (inability to infer personal information from other available information). Common anonymization techniques include aggregation, deletion, generalization and data perturbation. Organizations should also apply strong protective measures, such as advanced encryption and restrictive access controls, to minimize the likelihood of re-identification.
      • In-depth risk analysis: After the preliminary phase, a deeper risk analysis must be conducted. While no anonymization process can eliminate all risk, that risk must be reduced to the lowest possible level, taking into account factors such as data sensitivity, the availability of public datasets and the effort required to attempt re-identification. To sustain this low level of risk, organizations should perform periodic reassessments that account for technological advances that could make re-identification easier over time.
      • Documentation and record-keeping: Finally, organizations must keep a detailed record describing the anonymized information, its intended purposes, the techniques and security measures used, and the dates of any analyses or updates. This documentation strengthens transparency and demonstrates that the organization has fulfilled its legal obligations regarding anonymization.
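    As an illustration only (not legal advice, and far from a complete anonymization process), the depersonalization and generalization steps described above could be sketched in Python as follows. The field names, secret key and generalization rules are hypothetical placeholders chosen for the example.

```python
import hashlib
import hmac

# Hypothetical secret used for pseudonymization; in practice it must be stored
# separately from the data and managed (or destroyed) per the organization's policy.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def generalize_age(age: int) -> str:
    """Generalize an exact age into a 10-year band to hinder individualization."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def depersonalize(record: dict) -> dict:
    """Drop or transform direct identifiers; keep only generalized attributes."""
    return {
        "person_id": pseudonymize(record["name"]),  # the name itself is removed
        "age_band": generalize_age(record["age"]),  # the exact age is removed
        "region": record["city"][:1] + "***",       # crude generalization of location
    }

record = {"name": "Alice Tremblay", "age": 34, "city": "Sherbrooke"}
print(depersonalize(record))
```

    Note that this covers only the depersonalization step: the risk assessments (individualization, correlation, inference) and the documentation obligations described above still have to be carried out before the output could be considered anonymized.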

    Read more
  2. Businesses: Four tips to avoid dependency or vulnerability in your use of AI

    While the world is focused on how the tariff war is affecting various products, it may be overlooking the risks the war poses to information technology. Yet many businesses rely on artificial intelligence to provide their services, and many of these technologies are powered by large language models, such as the widely used ChatGPT. It is therefore worth asking whether businesses should rely on purely US-based technology service providers. There is talk of using Chinese alternatives, such as DeepSeek, but their use raises questions about data security and the associated control over information. Back in 2023, Professor Teresa Scassa wrote that, when it comes to artificial intelligence, sovereignty can take on many forms, such as state sovereignty, community sovereignty over data and individual sovereignty.1 Others have even suggested that AI will force the recalibration of international interests.2 In our current context, how can businesses protect themselves from the volatility caused by the actions of foreign governments? We believe that it is precisely by exercising a certain degree of sovereignty over their own affairs that businesses can guard against such volatility. A few tips:

    Understand intellectual property issues: The large language models underlying most artificial intelligence technologies are sometimes offered under open-source licenses, but certain technologies are distributed under restrictive commercial licenses. It is important to understand the limits imposed by the licenses under which these technologies are offered. Some language model owners reserve the right to alter or restrict the technology’s functionality without notice. Conversely, permissive open-source licenses allow a language model to be used without time restrictions. From a strategic standpoint, businesses should retain intellectual property rights over the data compilations they integrate into artificial intelligence solutions.
    Consider other options: Whenever technology is used to process personal information, a privacy impact assessment is required by law before such technology is acquired, developed or redesigned.3 Even if a privacy impact assessment is not legally required, it is prudent to assess the risks associated with technological choices. If you are dealing with a technology that your service provider integrates, check whether there are alternatives. Would you be able to migrate quickly to one of these if you faced issues? If you are dealing with a custom solution, check whether it is limited to a single large language model.

    Adopt a modular approach: When a business chooses an external service provider to supply a large language model, it is often because the provider offers a solution that integrates with other applications the business already uses, or because it provides an application programming interface developed specifically for the business. In making such a choice, you should determine whether the service provider can replace the language model or application if problems arise. If the technology in question is a fully integrated solution from a service provider, find out whether the provider offers sufficient guarantees that it could replace a language model if it were no longer available. If it is a custom solution, find out whether the service provider can, right from the design stage, provide for the possibility of replacing one language model with another.

    Make a proportionate choice: Not all applications require the most powerful language models. If your technological objective is middle-of-the-road, you can consider more possibilities, including solutions hosted on local servers that use open-source language models. As a bonus, choosing a language model proportionate to your needs helps reduce the environmental footprint of these technologies in terms of energy consumption.
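    The modular approach described above can be sketched as a thin abstraction layer: the application codes against a generic interface rather than one vendor's API, so a provider or model can be swapped with minimal rework. This is a purely illustrative sketch; the class names and stubbed completion methods are hypothetical, not real vendor APIs.

```python
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """Generic interface the application depends on, instead of one vendor's API."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class HostedModel(LanguageModel):
    """Hypothetical wrapper around an external provider's service."""

    def __init__(self, provider_name: str):
        self.provider_name = provider_name

    def complete(self, prompt: str) -> str:
        # In a real system this would call the provider's SDK or HTTP API.
        return f"[{self.provider_name}] response to: {prompt}"

class LocalModel(LanguageModel):
    """Hypothetical open-source model hosted on local servers."""

    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

def summarize_contract(model: LanguageModel, text: str) -> str:
    # Business logic depends only on the interface, never on a specific vendor.
    return model.complete(f"Summarize: {text}")

# Swapping providers becomes a one-line change, not a rewrite:
print(summarize_contract(HostedModel("vendor-a"), "..."))
print(summarize_contract(LocalModel(), "..."))
```

    Designing for this substitution point from the start is what makes it realistic to change providers later, whether for contractual, geopolitical or performance reasons.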
    These tips each require different steps to be put into practice. Remember to take legal considerations, in addition to technological constraints, into account. Licenses, intellectual property, privacy impact assessments and limited liability clauses imposed by certain service providers are all aspects that need to be considered before making any changes. This isn't just about being prudent: it's about taking advantage of the opportunity our businesses have to show they are technologically innovative and exercise greater control over their futures.

    1. Scassa, T. 2023. “Sovereignty and the governance of artificial intelligence.” 71 UCLA L. Rev. Disc. 214.
    2. Xu, W., Wang, S., & Zuo, X. 2025. “Whose victory? A perspective on shifts in US-China cross-border data flow rules in the AI era.” The Pacific Review, 1–27.
    3. See in particular the Act respecting the protection of personal information in the private sector, CQLR c. P-39.1, s. 3.3.

    Read more
  3. The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada1 and elsewhere2 are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies are also classified within the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT. All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

    The very broad notion of “technology-based documents”

    The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible”.3 The Act lists a few examples of technology-based documents contemplated by applicable laws, including online forms, reports, photos and diagrams—even electrocardiograms! It is therefore understandable that this notion easily applies to user interface forms used on various technological platforms.4 Moreover, technology-based documents are not limited to personal information. They may also pertain to company or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website.5 A less recent decision also recognized that the contents of electronic agendas were admissible as evidence.6 Due to their bulky algorithms, various AI technologies are available as software as a service (SaaS) or as platform as a service (PaaS). In most cases, the information entered by user companies is transmitted on supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced client relationship management (CRM) systems and electronic file analysis. It is also the case for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees.
    In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

    Reciprocal obligations

    The Act sets out specific obligations when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

    26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.

    During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

    This section of the Act therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the technological platform must inform the supplier of the privacy protection required for the information stored on the platform. On the other hand, the supplier is required to put in place “technological means” to ensure security, integrity and confidentiality, in line with the privacy protection requested by the user. The Act does not specify what technological means must be put in place. However, they must be reasonable, in line with the sensitivity of the technology-based documents involved, as seen from the perspective of someone with expertise in the field.
    Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? This question must be addressed by considering the information transmitted by the user of the platform concerning the privacy protection required for technology-based documents. The supplier, however, must not conceal the security risks of its IT platform from the user, since doing so would violate the parties’ disclosure and good faith obligations.

    Are any individuals involved?

    These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.7 Under no circumstances should an AI algorithm suggest that a merchant should not enter into a contract with any individual on any such discriminatory basis.8 In addition, anyone who gathers personal information by technological means making it possible to profile certain individuals must notify them beforehand.9

    To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions on how these laws apply to your AI systems, please feel free to contact our professionals.
    1. Bill C-27, Digital Charter Implementation Act, 2022.
    2. In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
    3. Act to establish a legal framework for information technology, CQLR c. C-1.1, s. 3.
    4. Ibid., s. 71.
    5. Tessier v. Charland, 2023 QCCS 3355.
    6. Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
    7. Charter of Human Rights and Freedoms, s. 10.
    8. Ibid., s. 12.
    9. Act respecting the protection of personal information in the private sector, CQLR c. P-39.1, s. 8.1.

    Read more
  4. Artificial intelligence in business: managing the risks and reaping the benefits?

    At a time when some are demanding that artificial intelligence (AI) research and advanced systems development be temporarily suspended and others want to close Pandora’s box, it is appropriate to ask what effect chat technology (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support its use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence. Should a company decide to allow its use, it must be able to provide a clear framework for it, and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir—consider, for example, how quickly chatbots can provide information that is both surprising and interesting—and the undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools.

    Potential for error and liability

    The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of “hallucinations” in certain cases where the chatbot invents a reality that doesn’t exist. This comes as no surprise. The technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to “create” new content. They lack, for the time being at least, the necessary parameters to utilize this “creativity” appropriately. It is easy to imagine scenarios in which an employee would use such technology to create content that their employer would then use for commercial purposes.
    This poses a clear risk for the company if appropriate control measures are not implemented. Such content could be inaccurate in a way that misleads the company’s clients. The risk would be particularly significant if the content generated in this way were disseminated by being posted on the company’s website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology that is known to be faulty. The reliability of these tools, especially when used without proper guidance, is still one of the most troubling issues.

    Defamation

    Suppose that such misinformation concerns a well-known individual or rival company. From a legal standpoint, a company disseminating such content without putting parameters in place to ensure that proper verifications are made could be sued for defamation or misleading advertising. Thus, adopting measures to ensure that any content derived from this technology is thoroughly validated before any commercial use is a must. Many authors have suggested that the results generated by such AI tools should be used as aids to facilitate analysis and decision-making rather than to produce final results or output. Companies will likely adopt these tools and benefit from them—for competitive purposes, in particular—faster than good practices and regulations are implemented to govern them.

    Intellectual property issues

    The new chatbots have been developed as extensions to web search engines such as Google and Bing. Content generated by chatbots may be based on existing copyrighted web content, and may even reproduce substantial portions of it. This could lead to copyright infringement. Where users limit their use to internal research, the risk is limited, as the law provides for a fair dealing exception in such cases. Infringement of copyright may occur if the intention is to distribute the content for commercial purposes.
    The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if such answers and results are adapted or modified in various ways before they are ultimately used.

    Confidentiality and privacy issues

    The terms and conditions of use for most chatbots do not appear to provide for confidential use. As such, trade secrets and confidential information should never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with applicable laws and regulations in the jurisdictions where they may be used. Typically, the owners of these products assume no liability in this regard.

    Other issues

    There are a few other important issues worth considering among those that can now be foreseen. Firstly, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Secondly, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help to resolve or mitigate such conflicts, or instead exacerbate them.

    Conclusion

    Chat technologies have great potential, but they also raise serious legal issues. In the short term, it seems unlikely that these tools could actually replace human judgment, which is in and of itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company’s plans and the objectives it seeks to achieve.

    Read more
  1. Lavery recognized in Best Lawyers' new directory of Canada's best law firms for 2025

    We are pleased to announce that Lavery has been recognized as one of the top law firms in the new edition of Best Law Firms - Canada published by Best Lawyers for 2025. Our firm was ranked in 16 practice areas nationally and 50 practice areas regionally. These recognitions are further demonstration of the expertise and quality of legal services that characterize Lavery's professionals. Firms on the 2025 Best Law Firms - Canada list are recognized for their professional excellence through evaluations by their clients and peers.

    Areas of expertise in which Lavery is recognized:

    Tier 1
      • Administrative and Public Law (National / Regional)
      • Banking and Finance Law (Regional)
      • Class Action Litigation (Regional)
      • Commercial Leasing Law (Regional)
      • Construction Law (Regional)
      • Corporate and Commercial Litigation (Regional)
      • Corporate Law (Regional)
      • Family Law (National / Regional)
      • Information Technology Law (Regional)
      • Insolvency and Financial Restructuring Law (Regional)
      • Insurance Law (National / Regional)
      • Intellectual Property Law (Regional)
      • Labour and Employment Law (Regional)
      • Mergers and Acquisitions Law (Regional)
      • Mining Law (Regional)
      • Natural Resources Law (Regional)
      • Product Liability Law (Regional)
      • Securities Law (Regional)
      • Trusts and Estates (Regional)
      • Workers' Compensation Law (Regional)

    Tier 2
      • Administrative and Public Law (Regional)
      • Advertising and Marketing Law (Regional)
      • Alternative Dispute Resolution (Regional)
      • Biotechnology and Life Sciences Practice (Regional)
      • Class Action Litigation (National)
      • Corporate Governance Practice (Regional)
      • Corporate Law (National / Regional)
      • Director and Officer Liability Practice (Regional)
      • Energy Law (Regional)
      • Environmental Law (Regional)
      • Family Law (Regional)
      • Health Care Law (Regional)
      • Insurance Law (Regional)
      • Intellectual Property Law (National)
      • Labour and Employment Law (National)
      • Professional Malpractice Law (Regional)
      • Real Estate Law (Regional)
      • Tax Law (Regional)
      • Trusts and Estates (National)

    Tier 3
      • Aboriginal Law / Indigenous Practice (Regional)
      • Alternative Dispute Resolution (Regional)
      • Banking and Finance Law (National)
      • Construction Law (National)
      • Corporate and Commercial Litigation (National)
      • Defamation and Media Law (Regional)
      • Employee Benefits Law (Regional)
      • Equipment Finance Law (Regional)
      • Family Law Mediation (Regional)
      • Health Care Law (Regional)
      • Insolvency and Financial Restructuring Law (National)
      • Mergers and Acquisitions Law (National)
      • Mining Law (National)
      • Privacy and Data Security Law (Regional)
      • Private Funds Law (Regional)
      • Professional Malpractice Law (Regional)
      • Project Finance Law (Regional)
      • Securities Law (National)
      • Workers' Compensation Law (National)

    About Lavery

    Lavery is the leading independent law firm in Quebec. Its more than 200 professionals, based in Montréal, Quebec City, Sherbrooke and Trois-Rivières, work every day to offer a full range of legal services to organizations doing business in Quebec. Recognized by the most prestigious legal directories, Lavery professionals are at the heart of what is happening in the business world and are actively involved in their communities. The firm’s expertise is frequently sought after by numerous national and international partners to provide support in cases under Quebec jurisdiction.

    About Best Lawyers

    Best Lawyers is the oldest and most respected lawyer ranking service in the world. For almost 40 years, Best Lawyers has assisted those in need of legal services to identify the lawyers best qualified to represent them in distant jurisdictions or specialized areas. Best Lawyers lists are published in leading local, regional, and national publications across the globe.

    Read more
  2. Bernard Larocque appointed a Judge of the Superior Court of Québec

    We were very pleased to learn of the Minister of Justice's announcement confirming the appointment of Bernard Larocque as a Judge of the Superior Court of Québec for the district of Montréal. The Superior Court of Québec is an ordinary court of law in Quebec that hears all disputes not assigned by a formal provision of law to the jurisdiction of another court. The Superior Court plays a key role in Quebec's justice system. Bernard Larocque joined the firm in 1998 as a member of the litigation group and became a partner in 2003. His practice focused mainly on civil litigation, including defamation, insurance law, class actions, professional liability and administrative disputes. He frequently appeared before the courts, including the Supreme Court of Canada and the Court of Appeal of Quebec. His excellence and reputation as a litigator earned him the title of Fellow of the prestigious American College of Trial Lawyers in March 2020. Bernard has also always been active in the community: he has served on the Justice Pro Bono Board of Directors for over twenty years and has chaired it since 2020. "Bernard will be serving on the bench with several of his former colleagues and friends from the firm. He embodies Lavery's values, driven by excellence, diligence, a deep sense of duty and a desire to give back to society. These are all qualities that will carry him through this next important chapter in his legal career," concludes Anik Trudel, CEO at Lavery.

    Read more
  3. Lavery assists Agendrix in obtaining two ISO certifications for data security and privacy

    On February 6, 2023, Agendrix, a workforce management software company, announced that it had achieved certification in two globally recognized data security and privacy standards, ISO/IEC 27001:2013 and ISO/IEC 27701:2019. This made it one of the first staff scheduling and time clock software providers in Canada to obtain these certifications. The company is proactively engaging in all matters related to the security and confidentiality of the data processed by its web and mobile applications. The ISO/IEC 27001:2013 standard is aimed at improving information security systems. For Agendrix’s customers, that means its products comply with the highest information security standards. ISO/IEC 27701:2019 provides a framework for the management and handling of personal information and sensitive data. This certification confirms that Agendrix follows best practices and complies with applicable laws. A Lavery team composed of Eric Lavallée, Dave Bouchard, Ghiles Helli and Catherine Voyer supported Agendrix in obtaining these two certifications. More specifically, our professionals assisted Agendrix in the review of their standard contract with their customers, as well as in the implementation of policies and various internal documents essential to the management of personal information and information security. Agendrix was founded in 2015, and the Sherbrooke-based company now has over 150,000 users in some 13,000 workplaces. Its personnel management software is a leader in Quebec in the field of work schedule management for small and medium-sized businesses. Agendrix’s mission is to make management more human-centred by developing software that simplifies the lives of front-line employees. Today, the company employs more than 45 people.

    Read more