
Things You Should Never Reveal to an AI Bot

ChatGPT has changed the way many of us work and live our day-to-day lives. According to recent estimates, more than 100 million people use it every day, between them submitting over one billion queries.

But the world-conquering LLM chatbot has also been described as a “privacy black hole.” Concerns about how it handles the data users enter even led to it being briefly banned in Italy.

Its creator, OpenAI, makes no secret of the fact that any data entered may not be secure. As well as being used to further train its models, which could lead to its exposure in output to other users, data can be reviewed by human moderators to check compliance with the rules on how the service can be used. And, of course, any data sent to any cloud service is only as secure as the provider’s security.

What all this means is that any data whatsoever entered into it should be considered public information. With this in mind, there are several things that absolutely should never be shared with it – or any other public cloud-based chatbot. Let’s run through some of them:

Illegal Or Unethical Requests

Most AI chatbots have safeguards designed to prevent them from being used for unethical purposes. And if your question or request touches on activities that could be illegal, it’s possible you could find yourself in hot water. Examples of things that are definitely a bad idea to ask a public chatbot are how to commit crimes, carry out fraudulent activity, or manipulate people into taking action that could be harmful.

Many usage policies make it clear that making illegal requests or seeking to use AI to carry out illegal activities could result in users being reported to authorities. These laws vary widely depending on where you are. For example, China’s AI laws forbid using AI to undermine state authority or social stability, and the EU AI Act states that “deepfake” images or videos that appear to be of real people but are, in fact, AI-generated must be clearly labeled. In the UK, the Online Safety Act makes it a criminal offense to share AI-generated explicit images without consent.

Inputting requests for illegal material or information that could harm others isn’t just morally wrong; it can lead to severe legal consequences and reputational damage.

Logins And Passwords

With the rise of agentic AI, many more of us will find ourselves using AI that’s capable of connecting to and using third-party services. To do this, these agents may need our login credentials; however, handing them over could be a bad idea. Once data has gone into a public chatbot, there’s very little control over what happens to it, and there have been cases of personal data entered by one user being exposed in responses to other users. Clearly, this could be a privacy nightmare, so as things stand, it’s best to avoid any interaction with AI that involves giving it access to usernames and accounts unless you’re entirely sure the system is secure.

Financial Information

For similar reasons, it’s probably not a great idea to start putting data such as bank account or credit card numbers into genAI chatbots. These should only ever be entered into secure systems used for e-commerce or online banking, which have built-in safeguards like encryption and automatic deletion of data once it has been processed. Chatbots have none of these safeguards. In fact, once data goes in, there’s no way to know what will happen to it, and entering this highly sensitive information could leave you exposed to fraud, identity theft, phishing, and ransomware attacks.

Confidential Information

Everyone has a duty of confidentiality to safeguard sensitive information for which they’re responsible. Many of these duties are automatic, such as the confidentiality between professionals (e.g., doctors, lawyers, and accountants) and their clients. But many employees also have an implied duty of confidentiality to their employers. Sharing business documents, such as meeting notes and minutes or transactional records, could well constitute disclosing trade secrets and a breach of confidentiality, as in the case involving Samsung employees in 2023. So, no matter how tempting it may be to cram them all into ChatGPT to see what sort of insights it can dig out, this isn’t a good idea unless you’re totally sure the information is safe to share.

Medical Information

We all know that it can be tempting to ask ChatGPT to be your doctor and diagnose medical issues. But this should always be done with extreme caution, particularly given that recent updates enable it to “remember” and even pull information together from different chats to help it understand users better. None of these functions come with any privacy guarantees, so it’s best to be aware that we really have very little control over what happens to any of the information we enter. Of course, this is doubly true for health-related businesses dealing with patient information, which risk huge fines and reputational damage.

Summing Up

As with anything we put onto the internet, it’s a good idea to assume that there’s no guarantee it will remain private forever. So, it’s best not to disclose anything that you wouldn’t be happy for the world to know. As chatbots and AI agents play an increasingly big role in our lives, this will become a more pressing concern, and educating users on the risks will be an important responsibility for anyone providing this type of service. However, we should remember that we have personal responsibility, too, for taking care of our own data and understanding how to keep it safe.
