The U.K. Information Commissioner’s Office (ICO) has confirmed that LinkedIn has paused its data processing in the country for training AI models.
“We are pleased that LinkedIn has considered our concerns regarding the use of U.K. user data for training generative AI models,” said Stephen Almond, the ICO’s executive director of regulatory risk.
“We welcome LinkedIn’s decision to suspend such model training while we continue discussions with the ICO.”
Almond also noted that the ICO will closely monitor companies offering generative AI capabilities, including Microsoft and LinkedIn, to ensure they have proper safeguards in place to protect the information rights of U.K. users.
The development follows LinkedIn’s admission, first reported by 404 Media, that it had used user data to train its AI models without explicit consent, a practice disclosed in an updated privacy policy that took effect on September 18, 2024.
“At this time, we are not enabling generative AI training on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not offer this setting to members in those regions until further notice,” LinkedIn stated.
In a separate FAQ, the company emphasized its efforts to “minimize personal data in training datasets,” including the use of privacy-enhancing technologies to redact or remove personal data from those datasets.
Users outside of Europe can opt out of this practice by going to the “Data privacy” section in account settings and disabling the “Data for Generative AI Improvement” option. However, LinkedIn clarified that opting out will prevent future data use but will not affect data already used for training.
LinkedIn’s move to automatically opt all users into AI model training comes shortly after Meta revealed that it had been using non-private user data for similar purposes since 2007. Meta has since resumed training on U.K. user data.
In August, Zoom also abandoned plans to use customer content for AI training after concerns were raised about data usage under its revised terms of service.
These developments highlight increasing scrutiny over the use of personal data in AI model training, particularly regarding how such data is collected and processed.
This comes as the U.S. Federal Trade Commission (FTC) released a report stating that large social media and streaming platforms have engaged in extensive user surveillance, often with weak privacy controls and insufficient safeguards for children and teens.
The FTC noted that companies collected vast amounts of data, including from data brokers, on both users and non-users of their platforms. This data was often combined with information from AI, tracking pixels, and third-party sources to build detailed consumer profiles, which were then monetized.
“Companies collected and could indefinitely retain large volumes of data, including information from data brokers, on both users and non-users,” the FTC stated, adding that the data handling practices were “woefully inadequate.”
“Many companies engaged in broad data sharing, raising serious concerns about the adequacy of their data handling controls and oversight. Some companies did not delete all user data in response to deletion requests.”