A few weeks ago, in our post on “Hidden Leaks”, we explored how employees can inadvertently share sensitive company data with AI platforms like ChatGPT. We briefly touched on how AI providers handle that data; now let’s dive deeper into the real risks of ChatGPT and what companies face when employees unknowingly hand over proprietary information.
Risks of ChatGPT: Why AI Companies Access Your Data
Many assume that AI providers like OpenAI collect and process user data solely to improve their models, and that opting out of training (an option evolving privacy regulations increasingly require) settles the matter.
However, platforms like ChatGPT explicitly reserve the right to access user interactions for several other reasons, including:
- Investigating security incidents
- Providing customer support
- Complying with legal obligations
- Improving AI performance (unless opted out)
This means that even if you opt out of training, your data may still be retained, reviewed, and potentially exposed.
What If I Say “No”?
Even if you disable Chat History or opt out of model training, OpenAI’s policies allow:
- Retention of conversations for up to 30 days, during which authorized personnel may review them for security or legal reasons.
- Access by internal teams and trusted providers (under confidentiality agreements) for investigating abuse, supporting users, or addressing legal matters.
As a leader (whether a CEO, CIO, or head of security), are you fully aware of the version and configuration of the ChatGPT platform your employees might be using? In a large organization, this can quickly spiral out of control.
Real-World Examples of AI Data Leaks
Samsung
In early 2023, Samsung lifted a ban on ChatGPT for its semiconductor division to enhance productivity. Within 20 days, three serious data breaches occurred:
- Source Code Leak: An engineer copied proprietary semiconductor measurement software code into ChatGPT for debugging, unintentionally sharing sensitive IP.
- Test Pattern Disclosure: Another employee uploaded confidential test sequences used in chip manufacturing—critical trade secrets.
- Internal Meeting Exposure: A staff member submitted a transcript of a confidential internal meeting to ChatGPT to generate presentation notes, exposing strategic discussions.
These incidents illustrate a core risk of ChatGPT: how quickly control over sensitive data can be lost when employees use public AI platforms.
Amazon
In early 2023, Amazon issued internal warnings after employees shared confidential company data with ChatGPT. Legal reviews revealed that the AI’s outputs resembled internal Amazon data, raising alarms about the potential inclusion of proprietary information in AI training.
Walmart
Walmart distributed memos urging employees not to share any company information with public AI tools like ChatGPT, citing serious concerns about data leaks and the protection of trade secrets.
Dutch Healthcare Incident
A GP practice in the Netherlands reported a data breach after an employee entered patient medical data into an AI chatbot. This breach violated company policy and strict data protection laws, underscoring the particular risks faced by regulated sectors, such as healthcare.
The Consequences of Data Leaks
- Legal & Regulatory Risks: Breaches of data protection laws (such as GDPR and HIPAA) or exposure of trade secrets can result in severe fines, lawsuits, and reputational damage.
- Loss of Data Control: Once proprietary data is submitted to a public AI tool, you can’t track, retrieve, or audit it effectively.
- Security Vulnerabilities: Malicious actors could exploit improperly shared data, increasing cybersecurity threats.
What Can Organizations Do?
Protecting sensitive data while leveraging AI requires a multi-pronged approach:
- Monitoring & Auditing: Implement tools and policies to monitor AI usage, enforce data protection rules, and identify vulnerabilities (a minimal filtering sketch follows this list).
- Employee Training: Just as we train employees to spot phishing attempts, we must train them to recognize the dangers of sharing proprietary data with public AI platforms.
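What might such monitoring look like in practice? Below is a minimal sketch, in Python, of an outbound-prompt filter that an internal AI gateway could apply before a request ever reaches a public platform. Everything here (the patterns, the `scan_prompt` and `gateway` functions) is an illustrative assumption, not a reference to any specific product; a production deployment would hang the same logic off a forward proxy or a dedicated DLP engine.

```python
import re

# Illustrative patterns an organization might flag before a prompt
# leaves the network; a real deployment would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "nine_digit_id": re.compile(r"\b\d{9}\b"),  # crude patient/citizen-ID check
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gateway(prompt: str) -> str:
    """Block the prompt if it matches a pattern; otherwise forward it."""
    findings = scan_prompt(prompt)
    if findings:
        # Log the hit for auditing and refuse to forward the prompt.
        print(f"BLOCKED: prompt matched {findings}")
        return "Request blocked by data-protection policy."
    # Placeholder for the real call to the approved AI provider.
    return f"(forwarded {len(prompt)} characters to the AI provider)"

if __name__ == "__main__":
    print(gateway("Please debug this key handling: sk-abcdefghijklmnop1234"))
    print(gateway("Summarize our phishing-awareness training outline."))
```

The point is not the specific patterns, which will always be incomplete, but the control point: every prompt passes through a place where it can be inspected, logged, and, if necessary, stopped before it leaves the organization.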
Conclusion: Embrace Secure Private AI Agents Like COGNOS
The solution isn’t to abandon AI—it’s to embrace AI responsibly and securely.
COGNOS offers a powerful alternative that mitigates the risks of ChatGPT. Unlike public AI platforms:
- COGNOS runs entirely within your infrastructure, disconnected from the public internet.
- It processes your data locally, so your information never leaves your environment.
- Your proprietary information remains fully under your control, with no third-party access, even for troubleshooting or support.
Leading organizations are already leveraging private AI agents, such as COGNOS, to combine the power of AI with robust data security and compliance. Whether you’re handling trade secrets, legal documents, or sensitive customer data, adopting a private AI agent ensures you maintain both innovation and protection.