ChatGPT is being used to create malware.

newsmeki Team

Today's most famous AI chatbot makes creating malware easier than ever.

Cybersecurity firm WithSecure has confirmed that it has found several examples of malware created with OpenAI's chatbot already in use. ChatGPT is especially dangerous because it can generate countless malware variants, making them very difficult to detect.

Attackers only need to feed ChatGPT examples of existing malware source code and instruct the chatbot to generate new variants based on them. This lets malware evade detection for longer, without the time, effort, and expertise such work once required.

This news comes amid growing calls to regulate AI and prevent its use for malicious purposes. No regulations have governed the use of ChatGPT since the service launched in late 2022.

While OpenAI has built certain safeguards into ChatGPT to stop it from carrying out nefarious requests, bad actors have found ways to bypass these measures.

WithSecure CEO Juhani Hintikka told Infosecurity that cybersecurity defenders often use AI to find and remove malware. However, this is changing now that powerful AI tools like ChatGPT are freely available. Just as remote access tools have been turned to illegal ends, so too is AI.

Furthermore, ransomware attacks are increasing at an alarming rate. Threat actors are reinvesting their profits and becoming more organized, expanding operations through outsourcing and deepening their understanding of AI, which allows them to carry out attacks at greater scale and with higher success rates.

Ultimately, Hintikka concluded that the future landscape of cybersecurity will be a contest between good AI and bad AI.

