LinkedIn faces lawsuit amid claims it shared users’ private messages to train AI models
LinkedIn has been accused in a US lawsuit, filed on behalf of its Premium users, of using members’ private messages to train AI models.

Filed in a California federal court, the lawsuit, on behalf of LinkedIn user Alessandro De La Torre, accuses the company of breaching its contractual promises by disclosing Premium customers’ private messages to third parties to train generative AI models.

“Given its role as a professional social media network, these communications include incredibly sensitive and potentially life-altering information about employment, intellectual property, compensation, and other personal matters,” the filing reads.

“Microsoft is the parent company of LinkedIn, and Defendant claims it disclosed its users’ data to third-party ‘affiliates’ within its corporate structure, and in a separate instance, more cryptically to ‘another provider’. LinkedIn did not have its Premium customers’ permission to do so.”

In a statement given to ITPro, a spokesperson for LinkedIn said: “These are false claims with no merit.”

The story behind the LinkedIn lawsuit

The case hinges on a change to LinkedIn’s privacy practices introduced last year, whereby users were opted in by default to allowing third parties to use their personal data to train AI models.

According to the lawsuit, the change was initially made quietly, only appearing in the company’s privacy policy in September following a backlash from users and privacy campaigners.

The company exempted customers in Canada, the EU, EEA, the UK, Switzerland, Hong Kong, and Mainland China from the data sharing – but not those in the US.

“Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide),” the company said in its FAQs.

The change wasn’t universally welcomed, however, with the UK’s Information Commissioner’s Office (ICO) noting that the opt-out approach wasn’t sufficient to protect user privacy.

Digital rights campaigners Open Rights Group also complained the opt-out model “proves once again to be wholly inadequate” to protect user rights.

The lawsuit seeks compensation of $1,000 per Premium user for alleged violations of the US federal Stored Communications Act, along with an unspecified additional sum for breach of contract and violation of California’s Unfair Competition Law (UCL).

It also calls for the company to delete all AI models trained using improperly collected data.
