Google is planning to use AI in Chrome to detect scams as you browse the web.
As spotted by Leo on X, a new flag in Chrome Canary enables a feature that uses an LLM (Large Language Model) to analyze web pages on your device.
The feature determines the brand and purpose (intent) of a webpage, making it easier to identify potential scams. It works on Mac, Windows, and Linux.
It’s unclear exactly how the feature works, but it could issue warnings when you visit an obvious scam website.
For example, if you visit a fake Microsoft tech support page claiming your computer is infected and urging you to call a number, Chrome’s AI could analyze the page’s language, detect scam tactics such as manufactured urgency or a suspicious domain, and display a warning telling you not to interact with the page or share personal information.
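To illustrate the idea, here is a minimal Python sketch of that kind of heuristic check. This is not Chrome’s actual implementation; the phrase list, phone-number pattern, domain list, and scoring are all hypothetical assumptions for demonstration only.

```python
import re

# Hypothetical signals -- Chrome's real model and signal set are not public.
URGENCY_PHRASES = [
    "your computer is infected",
    "call now",
    "do not close this page",
    "immediate action required",
]
PHONE_PATTERN = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")
OFFICIAL_MICROSOFT_DOMAINS = {"microsoft.com", "support.microsoft.com"}


def scam_signals(page_text: str, claimed_brand: str, domain: str) -> list[str]:
    """Return heuristic scam signals found on a page.

    A toy stand-in for the brand/intent analysis described above: the page
    claims to be a brand (e.g., Microsoft support) but shows urgency language,
    an unsolicited phone number, and a domain that doesn't match the brand.
    """
    text = page_text.lower()
    signals = []

    if any(phrase in text for phrase in URGENCY_PHRASES):
        signals.append("urgency language")
    if PHONE_PATTERN.search(page_text):
        signals.append("unsolicited support phone number")
    if claimed_brand.lower() == "microsoft" and domain not in OFFICIAL_MICROSOFT_DOMAINS:
        signals.append(f"brand/domain mismatch ({domain})")

    return signals


if __name__ == "__main__":
    page = "WARNING: Your computer is infected! Call now: (800) 555-0199. Do not close this page."
    found = scam_signals(page, claimed_brand="Microsoft", domain="ms-support-alerts.example")
    if found:
        print("Potential scam page. Signals:", ", ".join(found))
```

An on-device LLM would presumably replace these hand-written rules with learned judgments about a page’s brand and intent, but the output would serve the same purpose: deciding whether to show the user a warning.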
This new tool is being tested in Chrome Canary and could be related to Chrome’s built-in Enhanced Protection feature, which now also uses AI.
As previously reported by BleepingComputer, Google recently updated its Enhanced Protection feature in Chrome to include AI.
Chrome’s Enhanced Protection is now powered by AI.
Google says the updated Enhanced Protection feature uses AI to provide real-time protection against dangerous sites, downloads, and extensions.
Before October, Enhanced Protection did not use AI. Google described it as offering “proactive protection,” but the wording has since been updated to “AI-powered protection.”
Google is likely using a pre-trained model to understand web content and warn users about scams or dangerous sites.
The company is still testing these AI-powered security and privacy features in Chrome. It’s not clear when more details will be shared.