Be cautious about what you disclose to chatbots, because your conversations may be used to improve the AI systems behind them. OpenAI, for example, can use what you share with ChatGPT, such as a question about a medical issue, to refine its models. Likewise, sensitive documents you upload to Google’s Gemini for summarization can end up improving its AI models.
The AI models behind popular chatbots were often trained on vast amounts of data scraped from the internet, including blog posts, news articles, and social media comments, frequently without explicit consent, which has raised copyright concerns. And because of how these models are built, it can be difficult to remove data once it has been used in training.
If you don’t want your chatbot conversations used for AI training, some companies offer a way to opt out:
Google Gemini retains conversations with its chatbot for 18 months by default for users 18 or older; you can change this in the Activity tab on the Gemini website. Google itself advises against sharing confidential information with Gemini.
Meta’s AI chatbot on Facebook and Instagram is trained on public information from Meta’s platforms and elsewhere on the web. Users in the EU and UK can object to their information being used to train Meta’s AI systems via a form on the Facebook privacy page.
Users in the US and other countries without strict data privacy laws have no such opt-out. Microsoft’s Copilot likewise offers no training opt-out for personal accounts, though you can delete your interactions on the settings and privacy page of your Microsoft account.
For OpenAI’s ChatGPT, you can turn off the “Improve the model for everyone” setting in the Data controls section of your account settings. Grok, the chatbot from Elon Musk’s xAI, can be trained on users’ posts on the social media platform X; you can opt out by adjusting the settings on the desktop version of X.
Anthropic says its chatbot Claude is not trained on users’ personal data unless they give explicit permission, such as by submitting feedback on a specific response or flagging a conversation for safety review, which allows those exchanges to be used in training.