Security Flaw in ChatGPT Operator Exposes Private Data to Hackers

OpenAI’s ChatGPT Operator, a tool that helps users browse the web and complete tasks, has a security flaw that could expose private data. Researchers found that hackers can manipulate the AI by sneaking hidden commands into websites or online text.
How the Attack Works
Hackers can trick ChatGPT Operator into opening secure web pages and copying sensitive information—like email addresses or phone numbers—without the user knowing. These hidden instructions can be placed in public platforms like GitHub or forums.
For example, a test showed ChatGPT Operator being fooled into copying a private email address from a user’s YC Hacker News account and pasting it into a hacker-controlled website. The same trick worked on Booking.com and The Guardian, proving the risk applies to many websites.
What OpenAI Is Doing About It
OpenAI has added some safety measures, such as:
- Asking for user approval before certain actions.
- Showing warnings when accessing important websites.
- Letting users monitor what the AI is doing.
However, hackers still find ways to bypass these protections, making the risk very real.
Why This Matters
If this flaw is exploited, hackers could steal private data from secure websites. Since ChatGPT Operator runs on OpenAI’s servers, session cookies and login details could also be at risk.
To fix this, OpenAI might need to improve its security tools and work with websites to block AI access to sensitive pages. Until then, staying alert and limiting what AI can access is the best way to stay safe.