FraudGPT: AI for the Darknet

07/31/2023 | Darknet News

The development of artificial intelligence has swept the globe, and FraudGPT is one of the latest arrivals. Although AI technologies promise to make life simpler, there is a fine line between what is theoretically possible and what is actually possible. In the past six months, we have seen the seemingly limitless potential of AI as well as its dangers, in the form of misinformation, deepfakes, and the loss of human jobs.

In recent months, stories about ChaosGPT and the dark web's use of AI to wreak havoc have dominated news feeds. The threat posed by AI now appears to have a new facet. Following the notorious cybercriminal tool WormGPT, an even more dangerous AI tool has emerged: according to reports, individuals are marketing FraudGPT, a generative AI built for cybercrime, on dark web marketplaces and in Telegram groups.

According to reports, FraudGPT is a bot employed for crimes such as developing cracking tools and writing phishing emails. It can be used to produce undetectable malware, write malicious code, find security vulnerabilities, and discover leaks. The chatbot has been active on Telegram and dark web forums since July 22. Reported pricing is $200 for a monthly subscription, $1,000 for six months, and $1,700 for a year.

What is FraudGPT?

A screenshot circulating online shows the chatbot's screen bearing the caption “Chat GPT Fraud Bot | Bot without limitations, rules, boundaries.” The text on the screen continues: “If you’re looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone’s individual needs with no boundary, then look no further!”

Because FraudGPT can handle a variety of tasks, from creating phishing pages to developing harmful code, cybercriminals have come to view it as an all-in-one solution. With the aid of a program like FraudGPT, con artists can appear more convincing and realistic while causing more extensive harm. Security professionals have stressed the need for innovation to counter the dangers posed by rogue AI like FraudGPT. Unfortunately, many in the field believe this is just the beginning, and that the power of AI will let bad actors accomplish far more.

Another AI cybercrime tool, WormGPT, surfaced earlier this month. It was promoted on numerous dark web forums as a tool for sophisticated phishing and business email compromise attacks. Experts described it as a blackhat alternative to legitimate GPT models.

In February, researchers discovered that cybercriminals were using ChatGPT’s APIs to get around the software’s built-in restrictions. The fact that FraudGPT and WormGPT both operate with no regard for ethics is proof enough of the dangers that unrestrained generative AI poses.