
Generative AI and Data Protection

As artificial intelligence spreads across the world, data protection authorities (DPAs) in different countries are turning their attention to what happens to users' data when generative AI is built and used.

In this article, we analyze the concrete steps DPAs are taking to bring the field of AI under data protection regulation.

How DPAs de facto regulate AI

The Istanbul Bar Association's IT Law Commission recently published an article titled "How Data Protection Authorities De Facto Regulate Generative Artificial Intelligence" in the AI Working Group's monthly newsletter, “Law in the Age of Artificial Intelligence”. Why is this issue becoming increasingly acute and urgent?

Last year, generative AI took the world by storm, with services like ChatGPT becoming the "fastest growing consumer app in history." Huge amounts of data, including personal data, are required to train and run generative AI applications. It is no surprise that data protection authorities have been the first regulators around the world to take action, from launching investigations to issuing service suspension orders where they find breaches of data protection law.

Their concerns include the following:

  • lack of a legal basis for processing the personal data used to train AI models,
  • lack of transparency about the personal data used for training,
  • lack of transparency about how personal data is collected during users' interactions with the AI,
  • lack of ways for data subjects to exercise their rights, such as access, erasure, and objection,
  • inability to exercise the right to rectify inaccurate personal data,
  • insufficient data security measures,
  • unlawful processing of personal data, including children's data.

DPA investigations

Broadly speaking, DPAs are supervisory authorities empowered to enforce comprehensive data protection legislation in their jurisdiction. Over the past six months, as generative AI has grown in popularity among consumers and businesses worldwide, DPAs have launched investigations into whether providers of such services comply with their legal obligations on how personal data is collected and used under the relevant national data protection legislation. Their efforts are currently focused on OpenAI as the provider of ChatGPT. Only two of the investigations have so far resulted in formal enforcement actions, albeit preliminary ones, in Italy and South Korea.

How does this happen?

On March 30, 2023, the Italian DPA (Garante) issued an emergency order blocking OpenAI from processing the personal data of people in Italy. The order outlined several potential violations of GDPR provisions, including lawfulness, transparency, data subject rights, the processing of children's personal data, and data protection by design and by default. The Garante lifted the ban a month later, after OpenAI announced changes to comply with the DPA's demands. The investigation is still ongoing.

Following the Italian order, on April 13, 2023, the European Data Protection Board established a working group to "facilitate cooperation and exchange of information" on complaints and investigations regarding OpenAI and ChatGPT at the EU level.

Canada's federal Office of the Privacy Commissioner (OPC) announced on April 4, 2023 that it had opened an investigation into ChatGPT following a complaint that the service was processing personal data without consent. On May 25, the OPC announced that it would investigate ChatGPT jointly with the provincial privacy authorities of British Columbia, Quebec, and Alberta, expanding the scope to examine whether OpenAI complied with its obligations regarding openness and transparency, access, accuracy, accountability, and purpose limitation.

The Ibero-American DPA Network, which brings together supervisory authorities from 21 Spanish- and Portuguese-speaking countries in Latin America and Europe, announced on May 8, 2023 that it had launched a coordinated action on ChatGPT.

Japan's Personal Information Protection Commission (PPC) issued a warning to OpenAI on June 1, 2023, emphasizing that it must not collect personal data from ChatGPT users or others without consent, and must provide notice in Japanese of the purposes for which it collects users' personal data.

On July 27, 2023, Brazil's DPA announced that it had opened an investigation into ChatGPT's compliance with the Lei Geral de Proteção de Dados (LGPD) after receiving a complaint and media reports alleging that the service did not comply with the country's data protection law.

In July 2023, the US Federal Trade Commission (FTC) opened an investigation into ChatGPT to determine whether its provider engaged in "unfair or deceptive" privacy or data security practices, or unfair practices relating to risks of harm to consumers, in violation of Section 5 of the FTC Act.

South Korea's Personal Information Protection Commission (PIPC) announced on July 27, 2023 that it had imposed an administrative fine of 3.6 million Korean won (approximately US$3,000) on OpenAI for failing to report a data breach affecting payment information. At the same time, the PIPC published a list of instances of non-compliance with the country's Personal Information Protection Act, related to transparency, legal grounds for processing (lack of consent), lack of clarity about the controller-processor relationship, and the absence of parental consent for children under 14. The PIPC gave OpenAI a month and a half, until September 15, 2023, to bring its processing of personal data into compliance.

This review of investigations shows significant commonalities in the legal obligations at issue and their application to the processing of personal data through this new technology. There is also overlap in the DPAs' concerns about the impact of generative AI on people's rights over their personal data. This provides a good basis for cooperation and coordination between supervisory authorities as regulators of generative AI.

The G7 Statement

In this spirit, the data protection and privacy authorities of the G7 members adopted a Statement on Generative Artificial Intelligence in Tokyo on 21 June 2023, setting out their key concerns about how the technology processes personal data. The commissioners begin the Statement by acknowledging that "there is growing concern that generative artificial intelligence may pose risks and potential harm to privacy, data protection, and other fundamental human rights if not properly developed and regulated."

The main issues highlighted in the Statement relate to:

  • the use of personal data at various stages of the development and deployment of AI systems, focusing in particular on the datasets used to train, validate, and test generative AI models,
  • individuals' interactions with generative AI tools, as well as the content generated through them.

For each of these stages, the question of the legal basis of processing was raised.

Protection against inverting a generative AI model to extract or reproduce personal data originally processed in the training datasets was also identified as a key issue. The Statement further proposed mitigation and monitoring measures to ensure that personal data generated through such tools is accurate, complete, and free from discriminatory, unlawful, or otherwise undue effects.

Other issues of concern included:

  • transparency to promote openness and clarity;
  • production of technical documentation during the AI development life cycle;
  • technical and organizational measures to give effect to the rights of individuals, such as access, erasure, rectification, and the right not to be subject to decisions based solely on automated processing that significantly affect them;
  • accountability measures to ensure the appropriate level of responsibility in the AI supply chain;
  • limiting the collection of personal data to what is necessary to perform the specified task.

A key recommendation set out in the Statement, which also follows from the investigations above, is that developers and providers embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, and document their choices in a data protection impact assessment.



Author: Violetta Loseva

10.2.2023 16:00