EU Data Taskforce Releases Preliminary Conclusions on ChatGPT
The European Union's (EU) data protection task force has released its preliminary conclusions on OpenAI's compliance with privacy regulations concerning ChatGPT.
After more than a year of analysis, the task force has yet to resolve crucial legal questions, such as the fairness and lawfulness of OpenAI's data processing, leaving its final decisions pending.
The task force highlights that ChatGPT must have a valid legal basis for every stage of personal data processing, from data collection to output generation. It also expresses concerns about the risks associated with large-scale web scraping.
The task force's report noted that the public availability of such data does not, by itself, make its processing lawful. To rely on legitimate interests (LI) as a legal basis, OpenAI must prove necessity and conduct a balancing test, ensuring that its interests do not override data subjects' rights.
The task force suggests that technical measures and clear criteria for data collection could mitigate privacy risks. They also recommend deleting or anonymizing personal data before the training stage.
Privacy Risks of OpenAI
The task force also underscores that OpenAI cannot transfer privacy risks to users and must ensure data accuracy and transparency. It suggests providing clear information about ChatGPT's reliability and potential biases.
The task force also emphasizes the importance of allowing users to exercise their data rights effectively. OpenAI currently lets users block incorrect information about themselves rather than have it corrected, which the task force finds insufficient.