On June 5, Meta Platforms was hit with 11 complaints over proposed changes to how it uses personal data to train its artificial intelligence (AI) models without obtaining consent, changes that could violate European Union privacy regulations.
The privacy advocacy group NOYB (None of Your Business) called on national privacy watchdogs to take immediate action to stop Meta’s changes. The 11 complaints were filed in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain.
The complaints claimed that Meta’s recent privacy policy changes, effective June 26, would allow the company to use years of personal posts, private images and online tracking data to train its AI technology.
Because of the imminent changes, NOYB asked data protection authorities in the 11 countries to launch an urgent review.
According to a statement from NOYB, Meta’s updated privacy policy cites a “legitimate interest” in using users’ data to train and develop its generative AI models and other AI tools, which can be shared with third parties.
The policy change affects millions of European users and would prevent them from removing their data once it is in the system.
NOYB has previously filed several complaints against Meta and other Big Tech companies over alleged breaches of the EU’s General Data Protection Regulation (GDPR), which threatens fines of up to 4% of a company’s total global turnover for violations.
European court ruling ignored by Meta
Max Schrems, founder of NOYB, pointed out in a statement that the European Court of Justice already made a landmark decision on this issue in 2021, which should serve as a reference point for addressing Meta’s proposed use of personal data. He said:
“The European Court of Justice (CJEU) has already made it clear that Meta has no ‘legitimate interest’ to override users’ right to data protection when it comes to advertising… It seems that Meta is once again blatantly ignoring the judgments of the CJEU.”
Schrems argued that it is entirely unreasonable to shift the burden of protecting privacy onto users. The law requires Meta to obtain explicit consent from users rather than offer a hidden and misleading opt-out option.
He emphasized that if Meta wants to use users’ data, it must ask for permission directly. Instead, Meta requires users to request exclusion from data usage, which he called inappropriate.
In July 2023, Google was sued on similar grounds after it updated its privacy policy. The lawsuit accused the company of misusing large amounts of data, including copyrighted material, in its AI training.