For artificial intelligence and machine learning technologies to advance, a large amount of data is needed, which may include personal data. Although the AI Act, which entered into force on 1 August 2024, lays down a legal framework for the development and use of artificial intelligence systems, when personal data is concerned we must not forget about compliance with the good old GDPR. Do you know how to apply the AI Act and the GDPR together to ensure compliance? Read this short summary to find out.

1. What is the scope of the AI Act and the GDPR?

As mentioned in the introduction, the AI Act[i] applies to the development, placing on the market, putting into service and use of AI systems. The GDPR[ii], in turn, applies to any processing of personal data, regardless of the technical means used.

This means that the scope of the GDPR and that of the AI Act overlap: where personal data is used as training data to develop an AI system, or personal data is processed during the deployment of an AI system, both the GDPR and the AI Act will be applicable.

When it comes to the personal scope of the above legislation, the actors targeted by the AI Act are mainly the providers and the deployers of AI systems. Providers and deployers subject to the AI Act may, however, also qualify as controllers or processors within the meaning of the GDPR, i.e. they will be responsible for the lawful processing of personal data.

2. How do you know which regulation is applicable?

Since the scope of the AI Act and that of the GDPR differ as discussed above, when designing a process related to an AI system it is necessary to determine which law applies in order to ensure compliance. The overview below, which shows the possible scenarios, will help you with this.

Scenario 1: Personal data is required for the development and/or the deployment of an AI system or model.
Example: a headhunter company uses an AI system for automatic resume sorting; a chatbot is used to direct a person to the right doctor based on their medical complaints.
Result: both the AI Act and the GDPR apply.

Scenario 2: Personal data is required neither for the development nor for the deployment of an AI system or model.
Example: an AI system which only works with product data is used for logistics planning and warehouse management.
Result: only the AI Act applies.

Scenario 3: Personal data is used to develop or deploy an AI system that is not subject to the AI Act.
Example: medical data of natural persons is used for an AI system that is solely developed and put into service for scientific research purposes.
Result: only the GDPR applies.

Scenario 4: Personal data is not required to develop or deploy an AI system that is not subject to the AI Act.
Example: an AI system working with weather data is used for simulation in the military.
Result: neither the AI Act nor the GDPR applies.

3. How to enforce the purpose limitation principle when developing AI systems?

To be compliant with the GDPR, personal data used for the development and training of an AI system must be collected for specified and legitimate purposes.

The training of AI systems and models often relies on big data originally collected for other purposes. In this case the provider, as a controller, needs to consider whether the purpose of training the AI is compatible with the original purpose of the processing. Under the GDPR, further processing for scientific research purposes shall be regarded as compatible with the initial purpose, which means that the provider may re-use the data for the development of the AI system.

Besides considering whether the processing of personal data during the training phase has a legitimate purpose, the provider shall ensure that there is a legal basis for the processing in accordance with Article 6 and, in the case of special categories of personal data, Article 9 of the GDPR. It is important to note that scientific research in itself does not constitute a legal basis for processing.

Often, the appropriate legal basis will be the legitimate interest of the provider or a third party; in this case the provider shall demonstrate in a legitimate interest assessment that the interest pursued is not overridden by the privacy-related rights and freedoms of the data subjects.

4. How to reconcile the data minimisation principle with the need for big data?

On the one hand, the use of large amounts of data is usually necessary for the development of AI systems based on machine learning methods. On the other hand, the data minimisation principle of the GDPR requires controllers to use personal data only in an adequate, relevant and limited manner.

To comply with the data minimisation principle, it is necessary to determine the categories of personal data needed for the training phase. For example, for the development of an AI system used to identify explanatory variables for an illness which can only occur in the male population, using the medical records of the entire active patient population appears disproportionate.

If possible, it is recommended to use pseudonymised data or even a mock database (data with the same structure but not linked to any natural person) to train the AI. As a mock database may qualify as anonymised data, the GDPR may not even apply.

5. How to inform data subjects whose data is used by AI?

AI systems are often opaque, meaning that when an algorithm produces a certain result it is in many cases not possible to determine the logical steps that led to it. Nevertheless, the GDPR requires controllers to inform data subjects about the processing operation in question in a concise, transparent and understandable way.

The provision of appropriate information and transparency are of paramount importance when automated individual decision-making is performed by an AI system or model. In this case, the controller must provide meaningful information about the logic involved, as well as the consequences of the data processing for the data subject.

For example, during an inspection of a platform used for educational purposes, the CNIL, the French data protection authority, found that there was a lack of information on the use of an algorithm to rank and assign students to higher education establishments and on how it worked. The CNIL therefore ordered the controller to stop taking decisions with legal effects on individuals solely on the basis of such automated processing.


Based on the above, market players who provide AI systems using personal data should review their processes to ensure compliance with both the AI Act and the GDPR.

This article draws heavily on the information material published by the CNIL, available here: https://cnil.fr/fr/technologies/intelligence-artificielle-ia