Cybercriminals are preying on ChatGPT, the artificial intelligence-based chatbot developed by OpenAI that has taken the world by storm with its human-like ability to answer queries. Hackers began exploiting the platform to create deceptive content within two months of its launch.
Hacker forums on the Dark Web have been buzzing with posts describing how ChatGPT can be leveraged to produce dangerous tools and malware strains for data theft.
According to Check Point Research (CPR), one hacker demonstrated how to use the system to build a marketplace script for trading illegal products on the Dark Web.
“I’ve been experimenting with ChatGPT lately. Additionally, based on various write-ups and analysis of well-known malware, I have reproduced numerous malware strains and tactics,” a hacker who participated in the discussion said.
According to CPR, anyone can use the platform for nefarious purposes; it is not just experienced hackers who know how to code.
Privacy and digital infrastructure transparency campaigner Srinivas Kodali called it a natural social occurrence: there are always positive and negative uses for technology. The government, he said, is responsible for raising awareness, educating the public, enforcing regulations, and keeping track of bad actors.
ChatGPT itself appears to recognize this difficulty. When a user asked on the platform about its potential for nefarious uses, it replied that some users may try to “use me or other language models to generate spam or phishing communications.”
“Since I am a language model and cannot act or interact with the real world, I cannot be misused for bad intentions. I am merely a gadget created to produce text according to the input I receive,” it claims.
OpenAI, the platform’s creator, has warned that ChatGPT may occasionally behave in a biased or harmful manner, despite its best efforts to have the model reject improper requests.
“ChatGPT can be used for malevolent purposes in addition to being helpful in assisting developers in writing code. Despite the fact that the tools we analyze in this report are quite rudimentary, it won’t be long before more sophisticated threat actors improve the way they employ AI-based tools,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.