Generative AI (genAI) is taking over the tech industry. From Microsoft’s genAI-assistant-turned-therapist Copilot being pinned to the Windows taskbar to Google’s Android operating system being “designed with AI at its core,” you can’t install a software update anymore without getting a new whizz-bang AI feature that promises to boost your productivity.
But when you talk to AI, you’re not just talking to AI. A human may well review your conversations, meaning they aren’t as private as you might expect. That’s a big deal both for businesses working with sensitive information and for individuals asking questions about medical issues, personal problems, or anything else they wouldn’t want someone else to know about.
Some AI companies train their large language models (LLMs) on user conversations. This is a common concern: your business data or personal details could become part of a model and leak out to other people. But there’s a whole other concern beyond that, and it could be an issue even if your AI provider promises never to train its models on the data you feed it.