Meta sued over smart glasses privacy claims — 6 changes you should make right now

Last month, I had a lot of fun watching the Super Bowl and translating the halftime show in real time while wearing my Ray-Ban Meta Display smart glasses. But like others who either own a pair or are thinking about buying one, a new report has understandably set off alarm bells.
A joint investigation by two Swedish newspapers found that human contractors in Nairobi, Kenya, are reviewing footage recorded by the glasses, including some deeply private moments: people undressing, using the bathroom and more.
TL;DR
Some footage from Meta Ray-Ban glasses is reviewed by human contractors as part of AI training. Users have very limited ability to opt out. The risks are real — but they’re also specific. This is not the first time users have raised privacy concerns. Here’s what you can actually do:
- Check your privacy settings
- Disable cloud processing for photos and videos
- Understand the voice recording situation
- Disable “Hey Meta” if you don’t use it
- Be mindful about when you use AI features
- Don’t leave the glasses on or recording unattended
Understanding the privacy concern starts with understanding what the glasses actually capture and send — and what they don’t.
The glasses are not always recording. They only activate when you tap the camera button or trigger a voice interaction with the "Hey Meta" wake word, and whenever recording is active, an indicator light on the glasses turns on so people nearby can see it. Here's what happens once you do activate them.
Photos and videos you take are stored on your phone by default. They are only sent to Meta’s servers if you actively share them with Meta AI, upload them to Facebook or Instagram, or turn on cloud processing in settings.
Voice recordings triggered by “Hey Meta” are a different story. Since a policy update in April 2025, these are stored in Meta’s cloud by default — and you can no longer opt out of that storage. Meta says recordings are kept for up to one year to improve its AI products.
Any visual content you share with the Meta AI assistant — by asking it to analyze what it sees, for example — is also eligible for use in AI training and product improvement.
In short: photos and videos stay local unless you share them. But your voice interactions go to Meta's servers no matter what, and there is currently no way to stop that.
This is the part of the story that has shocked most people, and understandably so.
Meta, like most major AI companies, uses human contractors — called data annotators — to review and label footage as part of training its AI models. It’s a standard industry practice, but it requires real humans to watch real footage, and that footage doesn’t always get filtered before it reaches them.
According to the reports, contractors at Sama, a Kenyan subcontracting firm, say some of the footage they’re asked to review includes:
- People using the bathroom or changing clothes
- Users’ bank card details captured mid-transaction
- Sexual content, either viewed or recorded by the wearer
- Footage of people in their bedrooms, captured after a wearer set down their glasses without turning them off
One contractor told the Swedish papers: “In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording.”
Why does this happen? Because when users interact with Meta AI — saying “Hey Meta, what am I looking at?” or asking it to analyze a scene — that footage can be flagged and sent for human review. The content isn’t being recorded behind users’ backs; it’s footage that users themselves triggered, but often without realizing it would be seen by a human being overseas.
Meta’s own terms of service do allow for this. They state that the company can “review your interactions with AIs” via “automated or manual (human) review.” But that language is buried deep, and most users have never read it.
Perhaps the most disturbing part of the investigation was former Meta employees confirming that anonymization of the footage does not always work: faces sometimes remain visible to the reviewers, particularly in difficult lighting conditions.
Fact vs. fiction: Clearing up the confusion
A lot of misinformation has spread alongside this story. Here’s what’s true and what isn’t.
The claim: Meta is constantly recording everything through the glasses
The reality: Only footage you actively share with Meta AI — by using voice commands or asking the AI to analyze a scene — is sent to Meta’s servers and potentially reviewed. The glasses aren’t always recording. But when you use AI features, that data can reach human reviewers — something most users don’t realize.
The claim: You can fully opt out of data collection
The reality: Voice recordings are stored in Meta’s cloud by default with no opt-out. They can be kept for up to one year.
Since April 2025, Meta removed the ability to opt out of voice recording storage. You can delete recordings manually, but you can’t stop the initial collection.
The claim: Only automated systems review your footage — no humans see it
The reality: Footage shared with Meta AI can be reviewed by human contractors overseas, as explicitly permitted in Meta's AI terms of service. The Swedish investigation confirmed this is happening.
The claim: Any photo or video taken with the glasses automatically goes to Meta.
The reality: Photos and videos you take stay on your phone unless you actively share them with a Meta service; the glasses are not always-on surveillance cameras. The concern centers on voice interactions and on what happens when you actively use the AI assistant features.
What you can do right now
If you own a pair of Meta Ray-Ban smart glasses, here are concrete steps you can take to reduce your exposure.
- Check your privacy settings. Open the Meta View app, go to Settings > Privacy, and review what data sharing options you have enabled. Turn off anything you didn’t intentionally opt into.
- Disable cloud processing for photos and videos. In Settings, you can turn off cloud processing for media. This keeps photos and videos on your device rather than sending them to Meta’s servers.
- Understand the voice recording situation. You cannot opt out of voice recording storage — that option was removed in April 2025. However, you can manually delete your recordings at any time through the Meta AI app. Get in the habit of clearing them regularly.
- Disable “Hey Meta” if you don’t use it. If you’re not using the voice assistant features, disabling the wake word entirely is the most effective way to prevent voice data from being collected. You can still use the glasses for photos and calls without it.
- Be mindful about when you use AI features. Using Meta AI to analyze a scene — asking what something is, or getting real-time assistance — is when footage is most likely to be flagged for review. Think twice before using these features in private settings.
- Don’t leave the glasses on or recording unattended. Several of the most disturbing incidents described by contractors involved footage captured after the wearer set the glasses down without turning them off. Make it a habit to power down the glasses when you take them off.
According to the reports, a Meta spokesperson offered a brief response: “When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy,” and directed reporters to those documents.
Meta has not disputed the findings of the investigation. The company’s privacy page notes that users can manage data sharing in settings, and its terms do acknowledge the possibility of human review — but it does not specify where that review takes place or who carries it out.
Digitpatrox has reached out to Meta for additional comment and will update this article if we receive a response.
The takeaway
Stories like this are not new, and they will probably continue to surface as AI becomes more sophisticated and further integrated into our lives.
But what makes the Meta Ray-Ban situation particularly acute is the combination of factors: wearable cameras that can record without drawing obvious attention, AI features that trigger data sharing, users' tendency to forget to power the device off, inadequate user awareness, and a near-total inability to opt out once you've chosen to use those features.
Taken together, those factors show how easily convenience can outpace caution when AI becomes part of the devices we wear every day.