A hacker claimed to have stolen personal details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering prospective buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the complete dataset was being offered for sale "for just a few dollars."
"I have over 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out. This is a goldmine, and Jesus agrees."
If genuine, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence [suggests] this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the purported sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too. https://t.co/yKpmxKQhsP

- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach sparked concern because of OpenAI's enormous user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, business projects, and other sensitive data.

Until there's a final report, some precautionary steps are always advisable:
- Go to the "Settings" tab, log out of all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This makes it much easier to spot and prevent fraud.
- Always monitor the conversations stored in the chatbot's memory, and be alert to any phishing attempts. OpenAI does not ask for personal details, and any payment update is always handled through the official OpenAI.com domain.