Chatting with AI is still a relatively new phenomenon.
Though turning to chatbots for recipe ideas, travel planning and quick answers is harmless (for the most part), there are many issues to be wary of when it comes to AI safety.
We often share highly personal information online, but the same confidentiality protections — those you enjoy with human lawyers, therapists and doctors — don’t apply to AI chatbots. Many users employ ChatGPT as a virtual life coach, sharing personal and professional details and problems through the app or program. There’s also a cognitive risk associated with using a large language model, as more studies begin to examine how reliance on chatbots affects memory retention, creativity and writing fluency.
Here’s a guide to being cautious with chatbots. We’ll walk you through why it’s important to avoid handing over sensitive data, how to navigate mental health concerns and what you can do to prevent the long-term cognitive atrophy that comes from not exercising certain parts of your brain.
1. Treat AI chatbots as public environments
Remember that AI chatbots are “public environments,” not private conversations, says Matthew Stern, a cyber investigator and CEO at CNC Intelligence.
“If we keep that in mind, we will be less likely to share sensitive data that may become visible to others,” Stern says.
Some chatbot conversations have already become searchable online, and Stern says you should be wary of your own chats getting indexed by search engines.
Avoid sharing any personally identifiable information, such as your full name, address, financial details, business data and medical results. The more you share, the more personalized your results will be. Sure, that might sound like a good thing on the surface.
But handing over sensitive data to a tech company should give you pause. Even if those details don’t become publicly searchable, you never know what information data brokers will be buying and selling about you.
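If you routinely paste documents or emails into a chatbot, it helps to strip the obvious identifiers first. Below is a minimal, hypothetical sketch in Python; the patterns and the scrub_pii helper are illustrative only and won't catch everything a purpose-built redaction tool would.

```python
import re

# Hypothetical, illustrative patterns. A real redaction tool covers far more cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text with a chatbot."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or 555-867-5309 about card 4111 1111 1111 1111."
    print(scrub_pii(sample))
    # Reach me at [EMAIL REDACTED] or [PHONE REDACTED] about card [CARD REDACTED].
```

A quick pass like this won't catch names, addresses or medical details, so it's a supplement to judgment, not a substitute for it.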
2. Don’t overshare your mental state
Chatbots can be useful assistants, but they aren’t your friends, says Elie Berreby, the head of SEO and AI Search at Adorama. He suggests “guarding your secrets” and never discussing your mental state, fears or health concerns. Such data can be used to identify hidden patterns and subconscious intentions, creating a vulnerability profile.
“Do not overshare. They already know more about you than you could imagine,” says Berreby.
Also, keep in mind that the primary goal of the companies behind AI chatbots is monetization.
“Soon, this personalization will be used to show you ultra-targeted ads,” he says. “This data is priceless for advertisers, but it creates a surveillance profile deeper than anything we’ve seen until now.”
3. Don’t ‘bring your whole self’ to the chatbot
AI chatbots exist within attention economies, where your engagement is the product, says intercultural strategist Annalisa Nash Fernandez.
“If chatbots ultimately monetize through data collection and user retention, memory features become engagement tools disguised as personalization, because attention is upstream of everything, including your privacy,” she says.
Disable memory features to reduce what the systems retain about you. For ChatGPT, navigate to Settings > Personalization > turn off Memory and Record Mode.
Disable memory features on ChatGPT.
Use secondary email addresses, so that chatbots don’t have this type of identifier for you — emails are “the connective tissue linking disparate data points,” Fernandez says.
Opt out of training, so the chatbot won’t train itself on your inputs. In ChatGPT, click on your profile name and select Settings > Improve the model for everyone, then turn it off.
Opt out of training on ChatGPT.
Berreby also advises you to “fragment your data” by switching between different AI chatbots to avoid giving one single entity a complete picture of your life.
4. Export your data
Whichever AI chatbot you’re using, regularly export your data to see what information it has stored about you.
In ChatGPT, go to Settings > Data Controls > Export Data. It’ll email you a link with a ZIP file of text and photos.
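If you'd rather not click through every file in that download, a few lines of Python can list the archive's contents and skim your conversation titles. This is a rough sketch that assumes the export includes a conversations.json file, which ChatGPT exports typically do; adjust the file name and fields to match what your actual download contains.

```python
import json
import zipfile

# Path to the export ZIP that ChatGPT emails you. Adjust to your actual download.
EXPORT_PATH = "chatgpt-export.zip"

with zipfile.ZipFile(EXPORT_PATH) as archive:
    # List every file the export contains (chat logs, images, account metadata).
    for name in archive.namelist():
        print(name)

    # conversations.json typically holds your chat history. Skimming the titles
    # is a quick way to see which topics the service has stored about you.
    if "conversations.json" in archive.namelist():
        conversations = json.loads(archive.read("conversations.json"))
        for convo in conversations:
            print(convo.get("title") or "(untitled)")
```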
Regularly export your data from ChatGPT to see what information the chatbot is storing about you.
5. Fact-check everything
Always err on the side of caution with AI-generated content. Expect errors and approach the information with skepticism. AI chatbots are designed to be helpful — they’re the ultimate people pleasers. That doesn’t mean what they tell you is true or accurate.
Cognitive bias is also an issue with chatbots. If you use one as a thought partner, it will mirror back whatever you put in, essentially becoming the ultimate echo chamber.
Always check a chatbot’s sources and ask where it obtained its information. AI hallucinations also occur, where chatbots fabricate information, either by drawing on unreliable online sources or by reaching incorrect conclusions.
6. Watch out for sneaky scammers
AI chatbots are capable of maintaining multiturn conversations, says Ron Kerbs, CEO of Kidas, a company that protects against scams and online threats. These back-and-forth interactions could be mimicked by bad actors on dodgy websites posing as helpful customer service chatbots.
“While large platforms like ChatGPT are generally secure, the risk lies in users unintentionally sharing access credentials through phishing links or fake login pages, often distributed via email, SMS or cloned websites,” Kerbs says. “Once credentials are compromised, a scammer could misuse the account, especially if it’s linked to saved payment methods.”
Kerbs says you must enable two-factor authentication, monitor account access and avoid logging in through third-party links. That might be less convenient, but it’s a small price to pay.
While there’s no antivirus equivalent for AI chatbots yet, some tools offer scam detection as a layer of everyday protection, especially when embedded within messaging platforms and service providers.
Kerbs says it’s essential not only to scan your hard drive for viruses, but also to monitor your interactions via SMS, email and voice calls for potential scams. Deepfake protection can also analyze audio and video to detect if the person you’re speaking to is an AI clone.
7. Confide in people, not AI
This tip isn’t tactical, but it’s important: While you might see no harm in speaking to ChatGPT, Claude or Gemini about a problem you’re having, it’s a slippery slope to using a chatbot as a diary.
Instead, call up a good friend or plan a catch-up to share what you’re going through with someone who cares about you — not a predictive AI model that’s been trained by strangers.
8. Practice (and protect) critical thinking
Don’t outsource your thinking to AI. An ongoing MIT study (not yet peer-reviewed) offered a preliminary look at how relying on large language models can affect the brain, finding “weaker neural connectivity” in participants who used ChatGPT.
Use AI for low-level tasks, but keep the creating, thinking and strategizing to yourself.
Here are the best things to use AI for, as well as the worst.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)