
In a perfect world, we could ignore AI altogether, but it’s not that simple. Whether your job involves AI, you’re worried about your content being used for training, or you’re concerned about your kids using AI inappropriately, learning how to stay safe while using AI is crucial.

Not to mention, AI-driven scams have caused $108 million in damages in the US alone, with incidents doubling from July 2023 to July 2024, according to the FTC. Scammers use increasingly complex AI tactics (e.g., deepfakes and voice cloning) to impersonate and trick people into handing over sensitive data, so it pays to stay cautious.

In this guide, we’ll share 10 tips for using AI securely. We’ll cover everything from verifying apps, managing privacy settings, and spotting scams to ensuring AI content is reliable and accurate, talking to your kids about AI safety, and keeping up with new regulations.

How to stay safe while using AI

AI is becoming part of daily life at work, school, and home, so knowing how to stay safe is essential. Here are 10 useful tips to help you protect your privacy, avoid scams, and use AI responsibly—whether you’re interacting with AI yourself or talking to your kids about it.

1. Vet AI apps before use (fake apps and privacy policies)

With so many AI apps out there, it’s getting harder to tell what’s real anymore—and we’re not just talking about AI-generated images or videos. During the initial AI craze, Facebook and other platforms were bombarded with ads for fake versions of ChatGPT and other popular services like Gemini (formerly Google Bard), Jasper AI, and Midjourney.

Unsurprisingly, these apps were designed to install dangerous malware or steal user data through other means. Hackers bet on curious users wanting to try the popular new thing and use that curiosity to trick them into downloading harmful apps.

What can you do about it?

On an individual level, you’ll have to get better at spotting fake apps. Double-check the app’s name and spelling on social media pages, check user reviews and warnings on Reddit and elsewhere, and look up the official website of any tool you plan to use.

Even if you have the right app, scan its privacy policy to see how it stores and uses your data. Look for companies that are upfront not just about storage policies but also about how their AI works. Transparency is a good sign that they can be trusted.

Related: Is it safe to use ChatGPT?

In a workplace setting, we recommend sticking to company-approved AI apps. This ensures they meet security standards, comply with company policies, and offer the support required to safeguard your data.

2. Follow your workplace’s AI usage rules

Besides using only vetted AI services, it’s also important to follow company guidelines on using AI safely. Many organizations now provide training on safe AI practices, which can help employees understand potential risks and follow the best security protocols.

If you’re a business owner or manager, ensure you and your team know how to spot and report anything unusual when using AI. Regularly check the AI tools you use to ensure they’re still secure and adjust any policies as needed to keep up with new security risks.

3. Stay safe against AI scams

We’ve previously published a comprehensive guide on how to recognize AI scams, which covers AI-powered phishing scams, the malicious use of AI-generated images, deepfakes (or fake videos), and voice cloning.

We recommend checking out that guide for more in-depth tips for each category. But if you’re short on time, here are some good rules of thumb to follow:

  • Use a password manager: Even if hackers use AI to craft the most legit-looking phishing site to steal your logins, the best password managers won’t enter your email, password, or payment data on anything but the real site. They can also create strong, unique passwords that are practically impossible to crack (see the short sketch after this list).
  • Enable two-factor authentication (2FA): On the off-chance that an attacker gets a hold of your passwords (through a data breach, an AI phishing scam, or other methods), 2FA is an excellent safety net to protect your accounts.
  • Update your software and OS: Cybercriminals always look for new ways to exploit vulnerabilities, sometimes even using AI to automate or enhance attacks. Keeping your apps, operating system, and antivirus updated helps patch security holes before they can be used against you.
  • Report phishing scams: If an email seems fishy, it’s worth reporting it to your IT department, email or service provider, or even the authorities. That way, you ensure others won’t fall for the same scam.
  • Use a VPN and avoid public Wi-Fi: Secure VPNs encrypt your traffic, making it unreadable to hackers, your ISP, and other surveillance. Steering clear of unsafe public networks will also prevent your AI chats from being intercepted.
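
Here’s that sketch: a minimal Python example of how a strong random password gets generated, using the standard library’s secrets module (the same basic idea password managers use, greatly simplified):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different output every run
```

A 20-character password drawn from roughly 94 symbols has about 131 bits of entropy, far beyond what AI-assisted guessing can brute-force.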

4. Manage your in-app privacy settings

ChatGPT, DeepSeek, and other AI tools let you opt out of having your data used to train their models. If you’re wondering how to stay safe while using AI, it’s worth digging through the in-app settings for better control over your data.

Each tool has a different way of doing things, so here’s a short guide on how to turn off ChatGPT training functions as an example:

  1. Open ChatGPT and click on your profile icon in the top-right.
  2. Next, click Settings.
  3. From the General tab, you can Delete all chats. Note that these will still be kept up to 30 days after deletion to prevent abuse.
  4. Go to Personalization and toggle off Memory. Alternatively, you can Manage memories and delete anything too personal.
  5. Now, go to Data controls and select Improve the model for everyone.
  6. Toggle off the setting (and the Voice mode options if applicable) and click Done.
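
A side note for developers: at the time of writing, OpenAI says data sent through its API is handled under different data-use terms than the consumer app and isn’t used for training by default (verify against its current policy before relying on this). Here’s a minimal sketch using the official Python SDK, assuming an OPENAI_API_KEY environment variable is set:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever is current
    messages=[{"role": "user", "content": "Summarize the privacy risks of chatbots."}],
)
print(response.choices[0].message.content)
```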

Here’s how to turn off Grok training as a bonus:

  1. Open the desktop version of X/Twitter and click More.
  2. Next, click on Settings and privacy.
  3. Select Privacy and safety to proceed.
  4. Scroll down to Data sharing and personalization and choose Grok & Third-party Collaborators.
  5. Uncheck both boxes under Data sharing to turn off Grok training and personalization.

Read more: The privacy risks of generative AI

5. Be careful what you share with AI

Sharing personal information online is already a bad idea without involving AI, as fraudsters can use your details in identity theft scams. Cybercriminals can also take advantage of unknown security holes to access sensitive data you share with AI chatbots.

Basically, if you wouldn’t post your address, medical history, private conversations, or other sensitive details on Instagram, don’t feel comfortable sharing them with AI just because it acts like a harmless digital assistant.
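
If you want a guardrail before pasting text into a chatbot, a simple pre-flight scrubber can catch the obvious stuff. Here’s a minimal Python sketch; the patterns are illustrative only and will miss plenty of real-world formats:

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# Output: Reach me at [email removed] or [phone removed].
```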

Unfortunately, AI has also changed how creators need to approach content publishing. Companies like OpenAI now advocate for allowing AI training on copyrighted works. Meanwhile, the X Terms of Service flat out say the platform can use anything you post as training data for Grok or any other AI model made by X, with no compensation, by the way.

[Screenshot: the X Terms of Service outlining its AI training policy]

Good thing we turned off Grok, huh?

In any case, treat your posts as if they could be used for more than just sharing with your audience. Whether you’re posting a photo, article, or comment, be mindful of what you publish, as it may well end up scraped for AI training.

6. Double-check AI content for accuracy

AI can be a great way to brainstorm ideas, sift through boring datasets looking for that one key detail, summarize articles into bite-sized info, automate repetitive tasks like data entry or report generation, and just save you time for more important work in general.

It can also be just flat-out wrong. In fact, researchers at the Tow Center for Digital Journalism found that AI search engines are wrong 60% of the time on average. Worse still, the models sometimes make things up instead of admitting they don’t have the data to answer.

Some free and open-source AI tools (such as Jan.ai, which runs models locally on your device) address some of AI’s privacy concerns by not requiring an internet connection. Of course, this means you’re relying exclusively on pre-existing training data for any information, potentially making these models even less accurate than ChatGPT (or others with access to up-to-date web results).

Still, if you’re more concerned about how to stay safe while using AI than about getting the latest info, these models might be worth a shot, as long as you take the time to double-check any key details the AI may “hallucinate.”
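
For example, Jan can expose an OpenAI-compatible API on your own machine, so prompts never leave your device. Here’s a minimal sketch, assuming you’ve enabled Jan’s local server (port 1337 is its default at the time of writing) and downloaded a model; the model name below is a placeholder:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server instead of the cloud.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="llama3.2-3b-instruct",  # placeholder; use whatever model you loaded in Jan
    messages=[{"role": "user", "content": "List three signs of a phishing email."}],
)
print(response.choices[0].message.content)
```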

7. Inspect AI-generated text for plagiarism

Many publications have turned to AI apps to churn out content at a fast rate. The effectiveness of this is still up for debate, though the prevailing opinion is that a human touch and oversight are still necessary.

How so? Well, depending on the model you use, the AI could copy content word-for-word from elsewhere. This may result in:

  • DMCA takedown notices or copyright infringement claims
  • Loss of credibility as readers lose trust in your site
  • Reduced SEO rankings
  • Being fully excluded from search results on Google and other engines

If you’re unsure whether your AI content is plagiarized, passing it through tools like Copyscape and Turnitin can be useful. However, you shouldn’t fully rely on them, as plagiarism and AI detection tools frequently return false positives.

For instance, a study by Stanford University revealed that AI detectors falsely flagged over 60% of essays written by non-native English speakers as AI-generated, with 97% of the essays being flagged by at least one detection tool.

Remember that human oversight we talked about? Double-check and use your best judgment on whether the content is copy-pasted. Or, better yet, limit your use of AI to a tool that enhances your content rather than making it do all the work.
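
Under the hood, the core idea behind these checkers is straightforward: look for long word-for-word phrase overlaps between a draft and known sources. Here’s a toy Python sketch of that idea (real tools compare against massive web indexes rather than a single source):

```python
def ngrams(text: str, n: int = 8) -> set:
    """All consecutive n-word phrases in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(draft: str, source: str, n: int = 8) -> float:
    """Fraction of the draft's n-word phrases that appear verbatim in the source."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

draft = "the quick brown fox jumps over the lazy dog near the river bank today"
source = "yesterday the quick brown fox jumps over the lazy dog near the river"
print(round(verbatim_overlap(draft, source), 2))  # 0.71: most 8-word phrases match
```

Anything well above zero deserves a manual look before publishing.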

8. Keep an eye out for strange behavior or breaches

Watch out for anything unusual from AI apps, whether outputs suddenly seem off, the app acts strangely, or unexpected requests pop up. These can be signs of a compromised system or attempts to manipulate the AI.

AI services aren’t immune to breaches or misuse. If something feels off, double-check your settings, log out, and report it. Staying alert helps you catch potential risks before they cause bigger issues.

9. Teach your kids how to stay safe while using AI

Over half of all teens and young adults say they use or have used generative AI in some form, whether it’s to brainstorm ideas, help with studying or work, generate music and images, write code, or (spoiler alert) cheat on schoolwork.

Of course, the study linked above also shows that teens use AI to ask questions without fear of judgment, whether it’s about dating, conversation advice, or anything else they’re unsure about. Due to its ease of access and supportive responses, many turn to AI as a comfort tool, or simply as a place to vent thoughts they don’t want anyone else to know.

However, this reliance on AI can take a darker turn, with some teens claiming to use deepfakes and voice cloning tools to mislead parents and school staff or even worsen bullying. In more extreme cases, some may struggle to discern fiction from reality, as seen in the tragic case of the AI chatbot that drove a teen to suicide.

Now, more than ever, it’s important to have an honest talk with your kids about how to use AI safely. Here are some key points to bring up:

  • How it works: Use familiar examples (e.g., Siri, Alexa, ChatGPT, image recognition like in Google Photos, autocorrect, etc.) to help your kids understand how AI operates, its benefits, and its limitations.
  • How to use it: Encourage kids to explore AI as a tool for creativity and skill-building. Teach them to question the info AI provides and use it alongside their own thinking.
  • The risks of AI: Explain that AI can be wrong or biased and that cybercriminals can manipulate its data. Most importantly, ensure they understand that AI is not a substitute for real human connection and that they can always turn to you for help.

Once you cover these basics, it’ll be easier to get into conversations about privacy, avoiding oversharing, securing their data, and everything else covered on this list.

10. Stay updated on the latest AI regulations

As AI continues to grow, it’s important to stay informed about any changes in the rules surrounding its use.

The EU’s AI Act established a legal framework to ensure AI systems are trustworthy, prioritizing fundamental rights, safety, and ethical principles like respect for human autonomy, fairness, data privacy under the GDPR, and so on.

In contrast, the US AI Action Plan encouraged public input on shaping policies that will maintain America’s leadership in AI technology by promoting innovation and competitiveness while preventing overly restrictive regulations.

Key players like Google, Microsoft, OpenAI, Anthropic, and more than 16 other organizations chimed in on what they think is the best approach to regulating AI. They highlighted issues like national security, copyright laws, infrastructure development, export controls, and more.

If all this sounds overwhelming, don’t worry. Staying on top of AI regulations doesn’t have to be a headache. AI risk management firms can help keep your business compliant with all the latest laws in a way that keeps both you and your customers safe.

How to stay safe while using AI FAQs

How to avoid the risks of AI?

To avoid the risks of AI, use trusted apps and review privacy settings carefully. Be cautious about what you share online (both in and outside AI services), and watch out for suspicious app behavior. Regularly update your software, use 2FA and unique passwords, and double-check any AI-generated content to ensure it’s accurate.

Is ChatGPT safe for kids?

ChatGPT is not really safe for kids. Not only can it give inaccurate or inappropriate responses, but OpenAI also requires users aged 13 to 18 to get parental consent before using the app. A better approach would be to join in on the learning process, guiding your kids without being overbearing and helping them use the AI safely.

