Privacy

EDPB Releases Opinion on AI Model Development

Why This Alert Is Important

The use of personal information in AI model development has sparked significant concerns around data protection and regulatory compliance. The European Data Protection Board (EDPB), an independent body tasked with ensuring consistent application of data protection laws across the EU, has emphasized the need for strict safeguards in this area.

Overview of the EDPB Opinion

AI model development often relies on large datasets, which can include personal information. While this data is invaluable for training algorithms, its use is subject to stringent data protection regulations such as the GDPR and CCPA. Failure to comply can result in significant fines, reputational damage, and loss of consumer trust.

On December 18, 2024, the European Data Protection Board (EDPB) released an opinion addressing personal data protection in AI model training, in response to a request from the Irish Data Protection Commission for guidance on how the GDPR applies to personal data used in training large language models (LLMs). "AI technologies may bring many opportunities and benefits to different industries and areas of life. We need to make sure these innovations are done ethically, safely, and in a way that benefits everyone," EDPB Chair Anu Talus said in a statement. "The EDPB wants to support responsible AI innovation by ensuring personal data are protected and in full respect of the General Data Protection Regulation."

What the EDPB Opinion Contains

The EDPB emphasized a case-by-case approach when assessing anonymity in AI models. Regulators can examine risk assessments, test for vulnerabilities, and verify privacy-preserving measures. For a model to be considered anonymous, it must be highly unlikely that individuals can be identified or that personal data can be extracted from the model through queries.

The EDPB provided examples of anonymization methods and noted that developers may rely on legitimate interest as a legal basis for model training, but authorities must apply a three-step test to confirm its lawful use: identifying a legitimate interest, assessing whether the processing is necessary to pursue it, and balancing that interest against the rights and freedoms of the individuals concerned.

Transparency and the balancing of rights are critical, given the complexity of AI technologies. If an AI model is developed using unlawfully processed personal data, the lawfulness of its deployment may be affected unless the model has been effectively anonymized. The EDPB is drafting additional guidelines to address specific challenges, including web scraping practices.

The EDPB's opinion on AI model development highlights the growing complexity of aligning AI innovation with stringent data protection laws like the GDPR. AI models often require extensive datasets that may include personal data, introducing risks of misuse or breach. The EDPB's emphasis on a case-by-case evaluation of anonymity, the role of legitimate interest as a legal basis, and the importance of privacy-preserving measures reflects the nuanced approach regulators are adopting. For organizations leveraging AI, this development underscores the need to implement robust data governance, ensure defensible processes, and adopt privacy-centric practices to reduce data risk.

Fahad Diwan, JD, FIP, CIPP/M, CIPP/C, Director of Product Marketing, Data Privacy, Security, and Governance, Exterro

Data Privacy Tip 

Privacy professionals have a leg up on other professionals in understanding how to put guardrails around their organization's artificial intelligence program. Learn 4 Keys to Using AI Responsibly in our whitepaper.

Ready to Get Started?

Get an Exterro data risk management platform demo today.

Get a Demo