Artificial intelligence (AI) technologies open the door for innovation across nearly all sectors, but at the same time, they bring new and serious challenges regarding the protection of personal data.

In light of these challenges, the European Data Protection Board (EDPB) adopted, in December 2024, a binding opinion addressing certain significant aspects of the processing of personal data during the development and deployment phases of AI models (hereinafter: the “Opinion”). To understand the overall context, it should be noted at the outset that the Opinion was adopted at the initiative of the Irish DPA, which oversees the compliance of AI models developed by large tech companies such as Google, Meta, and X. In this regard, the Opinion aims to provide guidelines on the processing of personal data in the context of the development and deployment of AI models belonging to a specific subgroup of models – generative AI models.

In this sense, the Opinion is of particular importance for all companies developing AI models or using them in their business operations, as it provides a framework for understanding and assessing the compliance of AI models with data protection regulations. When developing an AI model through the training process, during which the model learns from a dataset containing personal data in order to perform its intended function, it is crucial to establish an appropriate legal basis for the data processing.

The EDPB Opinion emphasizes the importance of determining when an AI model can be considered anonymous and which measures must be taken to avoid the risk of unlawful data processing. It also recalls the controller's obligation to assess its legitimate interests and to comply with the principles of proportionality and data minimization during the development and deployment phases of AI systems.

The EDPB Opinion addresses three key topics:

Anonymity of AI Models

One of the key issues considered by the EDPB is whether an AI model trained on personal data can always be considered anonymous. According to the Opinion, an AI model can be considered anonymous only where the risk of identifying individuals is insignificant. Anonymity will be assessed on a case-by-case basis, evaluating the likelihood that personal data can be extracted from the AI model, as well as the likelihood that such data could be obtained through queries.

In this regard, supervisory authorities will require AI model developers to conduct a detailed risk analysis concerning the potential for identification. This analysis should, among other things, include an assessment of whether appropriate measures have been taken during the development and deployment phases of the AI model to prevent or limit the collection of personal data and to protect the model against attacks.

Legitimate interest as a legal basis for data processing

Since the GDPR (as well as the Serbian Law on the Protection of Personal Data) does not establish a hierarchy among the prescribed legal bases, controllers are obligated to identify the appropriate legal basis for processing personal data during the development and deployment phases of AI models. The Opinion suggests that the most suitable legal basis will often be legitimate interest.

To demonstrate the existence of a legitimate interest, the controller must conduct a three-step test: (1) the identification of the interest, which must be lawful, real, and clearly articulated; (2) a necessity test, establishing that the data processing is necessary to achieve the purpose pursued and that there is no less intrusive way to achieve it; and (3) a balancing test, in which the rights and interests of all individuals whose personal data is processed must be weighed, including potential risks to fundamental rights, such as risks to freedom of expression when AI models block the publication of content.

In the context of the development and deployment of AI models, the Opinion underlines the importance of the data subject's reasonable expectations, as it is often difficult for data subjects to be aware of how their data is being used, given the complexity of the technologies and the ways in which data is collected and processed. Transparency regarding data processing therefore becomes crucial, and controllers must provide clear information about how and why the data is used.

Impact and consequences of unlawful data processing

During the development of an AI model, unlawful data processing may occur. In such cases, it is important to assess how unlawful initial processing may affect the lawfulness of subsequent processing. The EDPB distinguishes between three potential scenarios, and the Opinion indicates that, under certain circumstances, unlawful initial processing may not affect the lawfulness of further processing activities – for example, if the data is anonymized or if the model's deployment complies with the GDPR.

***

In conclusion, the Opinion affirms the obligation of controllers to consider all aspects of data protection – from anonymization and legitimate interest to the potential consequences of unlawful processing and the expectations of data subjects. At the same time, it directs supervisory authorities to carefully consider the specific circumstances of each individual case, taking into account the data protection principles prescribed in the GDPR, in order to ensure compliance with EU legislation and to protect the rights and freedoms of data subjects.

If you have any questions regarding the Opinion, or if you are interested in topics related to the regulatory framework for the development and responsible use of artificial intelligence and the protection of personal data, feel free to contact us at the following addresses: marko.trisic@vmtlaw.rs, ana.popovic@vmtlaw.rs and nikolina.loncar@vmtlaw.rs.