LinkedIn recently faced backlash for training AI models on user data before updating its privacy policy to disclose the practice. U.S. users can opt out of LinkedIn’s “content creation AI models” setting, but the option is not offered to users in the EU, where stricter regulations such as the GDPR apply. While LinkedIn has since updated its privacy policy, concerns remain about the delayed disclosure and the ethics of collecting data for AI training by default.
LinkedIn trains models on member data to power features such as writing suggestions and post recommendations. It also shares this data with third parties, including its parent company, Microsoft, whose models may be trained on it as well. Although LinkedIn says it applies privacy-enhancing techniques to reduce the personal information in these datasets, critics argue that an opt-out mechanism is insufficient. Privacy groups such as the Open Rights Group have called on regulators to investigate LinkedIn and other platforms that use member data without explicit consent.
The issue reflects a broader trend across tech platforms. Reddit and Stack Overflow, for example, have begun licensing user content for AI training, raising similar privacy concerns. Many argue that such practices should require opt-in consent, giving users genuine control over their data.
As demand for AI training data continues to rise, LinkedIn’s practices raise pointed questions about privacy, consent, and how companies should handle sensitive user information. With advocates pressing for regulatory scrutiny and opt-in consent, LinkedIn faces growing pressure to rethink its approach to AI and user data.