There is no denying that ChatGPT, the AI chatbot from OpenAI, is enormously popular and is largely responsible for bringing chatbots and AI language models into the mainstream.
But that popularity carries downsides. Among them: ChatGPT accounts are now a top target for hackers.
In a recently published report, researchers from the cybersecurity company Group-IB say they have discovered more than 101,000 compromised ChatGPT login credentials for sale on dark web markets over the past year.
Only a few months after its public launch, ChatGPT surpassed 100 million users. As the chatbot's popularity has grown, so has the theft of ChatGPT account credentials. According to Group-IB, more than 26,800 stolen ChatGPT credentials surfaced last month alone, a record high since the firm began monitoring the data.
The majority of these stolen ChatGPT credentials, according to Group-IB analysts, were harvested by the well-known Raccoon infostealer. Raccoon works like most malware of its kind: the victim installs it, often disguised as a useful app or file, and it then collects data from the infected machine. Raccoon remains a popular choice among hackers because it is simple to use and is sold as a maintained subscription service.
A breached ChatGPT account raises security issues specific to the service. One example is the chat history feature that OpenAI introduced a few months ago. Many businesses, including Google, advise their employees against entering confidential material into ChatGPT, since those details might be used to train AI language models. The fact that companies must warn staff about this implies that it does happen. A hacker with access to a user's ChatGPT history can see all the private data that user has previously entered.
"Many enterprises are integrating ChatGPT into their operational flow," said Dmitry Shestakov, Head of Threat Intelligence at Group-IB, in a statement. Employees may use the bot to optimize proprietary code or paste in confidential correspondence. Because ChatGPT's default setting saves all conversations, threat actors who obtain account credentials can gain access to a wealth of sensitive information.
Additionally, if a person reuses the same password across numerous platforms, a hacker with access to their ChatGPT account may soon have access to all of their other accounts as well. And if the target subscribes to ChatGPT Plus, the premium version of the service, they may unknowingly be paying for someone else's use of it.
ChatGPT users should stay alert for unauthorized access to their accounts and avoid reusing the same password across multiple sites.