Now ChatGPT has a dark side twin: FraudGPT

ChatGPT has attracted enormous attention because of how quickly it caught on, changing how people work and what they find online. Even people who haven’t tried it themselves are curious about AI chatbots. But there’s a downside to the AI excitement. On the dark web, the hidden part of the internet, criminals are talking about AI too, and they aren’t interested in using it for good. They’re working out how to use it for harm.

That’s where things get concerning. Researchers at the security firm Netenrich discovered an AI tool built for exactly that purpose: “FraudGPT.” Unlike helpful chatbots, this one is designed for harmful activity, such as writing deceptive phishing emails to steal personal information, building tools to break into secure systems, and supporting credit card fraud. Just as alarming, the tool is openly for sale on the dark web, a digital black market where people buy and sell things they shouldn’t.

What’s more, FraudGPT is also available through the messaging app Telegram. The sellers are making it easy for almost anyone to get their hands on this harmful AI, which is a real cause for concern. It’s a reminder that while AI can do amazing things, some people want to use it for anything but.

What is FraudGPT?

Just as ChatGPT holds conversations and provides information, FraudGPT is a similar kind of AI with a darker purpose: it generates content for cyberattacks. The tool is sold on the dark web and, as noted above, on Telegram.

The Netenrich threat research team first spotted the tool in July 2023. One of FraudGPT’s selling points is that it lacks the safeguards that make ChatGPT refuse suspicious or questionable requests, so it will answer prompts a legitimate chatbot would turn down.

What’s really concerning is that the people behind FraudGPT are actively updating it every week or two, and they claim it draws on several different AI models. Access is sold by subscription: $200 a month, or $1,700 for a full year. In other words, the sellers are running this dangerous tool like a business, which is unsettling in itself.

How does it work?

The Netenrich team paid for access so they could try FraudGPT for themselves. The interface looks much like ChatGPT’s, with the user’s past requests listed in a column on the left and the chat window taking up most of the screen. To get a response, you simply type a question into the box and press “Enter.”

In one test, they asked FraudGPT for a phishing email impersonating a bank. All they had to supply was the bank’s name; FraudGPT did the rest, even marking where a malicious link should be inserted in the text. It can go further, too, generating scam websites designed to trick visitors into handing over personal information.
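For defenders, even a crude scanner can catch the most common giveaway in messages like this: a link whose real destination has nothing to do with the brand the email claims to represent. Here is a minimal sketch in Python using only the standard library; the allowlist, function name, and sample text are invented for illustration and are not taken from Netenrich’s research.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the real bank actually uses.
KNOWN_GOOD_DOMAINS = {"examplebank.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def flag_suspicious_links(email_body: str) -> list[str]:
    """Return every URL in the email whose host is not an allowlisted domain."""
    suspicious = []
    for url in URL_PATTERN.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        # Accept exact matches and real subdomains of allowlisted domains only.
        if not any(host == good or host.endswith("." + good)
                   for good in KNOWN_GOOD_DOMAINS):
            suspicious.append(url)
    return suspicious

# A lookalike domain of the kind a phishing email would carry (invented).
body = "Your account is locked. Verify now: https://examplebank-login.com/verify"
print(flag_suspicious_links(body))
# -> ['https://examplebank-login.com/verify']
```

Real mail filters do far more than this, but the principle is the same: the text of a message can say anything, so defenses key on where the links actually go.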

In another test, they asked FraudGPT to list websites that are frequently visited or abused for attacks, information that could help a hacker plan the next move. An online ad for the tool also claimed it could produce malicious code for building undetectable malware that hunts for vulnerabilities and targets.

The people behind FraudGPT appear to have offered hacking-for-hire services in the past, and they have also been linked to a similar program called WormGPT.

The investigation into FraudGPT is a reminder to stay cautious. It isn’t clear whether hackers have already used these tools to build new kinds of threats, but programs like this could certainly help them work faster, churning out fake emails and websites in seconds.

So individuals need to stay careful about giving away personal information and follow good online-safety habits, and people who work in cybersecurity should keep their tools up to date, since attackers may use programs like FraudGPT to target important computer systems directly.
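One of those habits is to distrust domains that merely resemble a legitimate one, the typosquatting trick that phishing pages lean on. As a rough illustration only (the trusted-domain list, function name, and 0.85 threshold below are arbitrary assumptions, not a vetted detection rule), a few lines of Python can flag a domain that is suspiciously close to, but not exactly, a brand’s real domain.

```python
from difflib import SequenceMatcher

# Hypothetical list of brand domains a user or mail filter trusts.
TRUSTED_DOMAINS = ["examplebank.com", "paypal.com", "microsoft.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85):
    """Return the trusted domain this one imitates, or None if no near-match."""
    domain = domain.lower()
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        # Near-identical but not equal is the classic typosquat signature.
        if domain != trusted and similarity >= threshold:
            return trusted
    return None

print(looks_like_typosquat("paypa1.com"))   # -> paypal.com
print(looks_like_typosquat("paypal.com"))   # -> None (exact match is fine)
```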

The whole FraudGPT episode shows how hackers change their methods over time. Open-source software can be enormously helpful, but it can also carry security problems. Anyone who uses the internet, or is responsible for keeping online systems safe, has to keep up with new technologies and the dangers they bring. The key is to remember the risks, even when using seemingly harmless programs like ChatGPT.
