
Decrypt's Art, Fashion, And Entertainment Hub


A hacker claimed to have stolen private details from millions of OpenAI accounts, but security researchers are skeptical and the company is investigating.

OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.

A pseudonymous attacker using the handle "emirking" posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered for sale "for just a few dollars."

"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus concurs."

If genuine, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."

Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.

This time, however, security researchers aren't even sure a hack happened. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted too."

No evidence this alleged OpenAI breach is legitimate.

Contacted every email address from the purported sample of login credentials.

At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP

- Mikael Thalen (@MikaelThalen) February 6, 2025

OpenAI takes it 'seriously'

In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.

"We take these claims seriously," the spokesperson said, adding: "We have not seen any proof that this is linked to a compromise of OpenAI systems to date."

The scope of the alleged breach is alarming given OpenAI's enormous user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive information.

Until there's a final report, some preventive measures are always recommended:

  • Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to gain access to the account, even if the login and password are compromised (see the sketch after this list for a quick way to check whether a password already appears in public breach data).

  • If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is easier to detect and prevent fraud.
  • Always keep an eye on the conversations saved in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal details, and any payment update is always handled through the official OpenAI.com site.
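
As a companion to the password advice above, here is a minimal sketch (in Python, not part of the original article) of how one might check whether a password already appears in known breach corpora using the public Have I Been Pwned "range" API. The API uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash leave the machine, so the password itself is never transmitted. The example password at the bottom is purely illustrative.

```python
# Sketch: check whether a password appears in known breach data via the
# Have I Been Pwned range API (k-anonymity: only a 5-char hash prefix is sent).
import hashlib
import urllib.request


def pwned_count(password: str) -> int:
    """Return how many times the password appears in the HIBP corpus (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<remaining 35 hash chars>:<breach count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # Illustration only; never hard-code or log real credentials.
    print(pwned_count("correct horse battery staple"))
```

Note that this only tells you whether a password has shown up in previously published breach data; it says nothing about whether the alleged OpenAI dataset itself is genuine.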