Dark AI: Cybercriminals' New Weapon Threatens Internet Users

25/08/2025

In recent years, artificial intelligence (AI) has surged forward to become one of the most transformative technologies of our time. It now touches almost every field, from business, healthcare, and education to the arts and entertainment. Large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot have changed how people access knowledge, streamline work, and create content. Alongside these benefits, however, a darker side of AI is emerging. Experts call it Dark AI: artificial intelligence that is abused and repurposed to serve illegal activity, cybercrime, and information warfare.

At the Cyber Security Week held in early August in Da Nang, Sergey Lozhkin, Head of Kaspersky’s Global Research and Analysis Team (GReAT), issued a sobering warning. He stressed: “We are entering a new era of cybersecurity and society, where AI acts as a shield for defense, and Dark AI is turned into a dangerous weapon in cyber attacks.”

The statement is more than an alarm bell; it shows that the global cybersecurity picture is entering an extremely complex phase. Dark AI is no longer a vague concept but a real threat, marked by the appearance of a series of tools such as WormGPT, FraudGPT, DarkBard, and Xanthorox.

1. What is Dark AI and why is it dangerous?

Dark AI is a term used to describe large language models or AI tools that are exploited outside the control of legitimate development organizations. Unlike mainstream platforms such as OpenAI’s ChatGPT or Google’s Gemini, which have layers of security, content filtering, and strict usage rules, Dark AI tools are built to break through those ethical barriers. They can be programmed or customized to create malware, write phishing emails more convincing than any human scammer’s, produce deepfakes to manipulate public opinion, and even coach attackers on breaking into systems.

The danger of Dark AI lies in the fact that it has no ethical or technical “checkpoints”. AI is just a tool; it will carry out whatever its controller asks. A bad actor armed with Dark AI effectively holds an “all-powerful virtual assistant” ready to serve any illegal purpose. Where building sophisticated malware or running a large-scale phishing campaign once demanded programming skills and long hours of research, Dark AI can now do it in a matter of minutes.

2. The Rise of Dark AI and Its Dangerous Variants

The emergence of Dark AI is closely linked to the Black Hat GPT wave that began in mid-2023: models that have been “cracked”, modified, or retrained on malicious data to serve cybercriminals. Notable variants include:

WormGPT: One of the first models to make a splash, designed specifically for phishing emails, spam content creation, and malware generation. WormGPT has no safety filters, allowing attackers to command it to write destructive code or produce detailed phishing scripts.

FraudGPT: As the name suggests, FraudGPT focuses on supporting financial fraud. It can write fake bank emails, generate instructions for building phishing websites, or produce counterfeit documents that look exactly like the real thing.

DarkBard: A modified version of Google Bard, customized into a tool for spreading false content and deepfakes.

Xanthorox: A model promoted on dark web forums, specializing in generating malicious scripts and supporting simulated attacks (red teaming) turned to malicious ends.

These tools spread rapidly through the hacker community, especially on the dark web, where they were sold as “Malware-as-a-Service.” For a monthly fee or a package deal, anyone, even those without technical skills, could access and exploit Dark AI.

3. Dark AI and the global threat

According to Sergey Lozhkin, Dark AI is not just a tool for individual cybercriminals. Organized hacking groups, including some backed by governments, are also starting to exploit the technology for complex campaigns. This elevates Dark AI from ordinary cybercrime to a matter of national and international security.

OpenAI has previously reported disrupting more than 20 campaigns that used its AI tools to conduct cyberattacks and spread misinformation. In these campaigns, bad actors mass-produced fake articles in multiple languages, bypassing conventional moderation systems and sowing confusion and division in the community.

From a cybersecurity perspective, this creates a worrying scenario: attackers now target not only systems with technical exploits, but also people with fake information produced and disseminated at an unprecedented rate.

4. Current situation in Vietnam

Vietnam is not outside this global picture. According to Cisco’s 2025 cybersecurity report, up to 62% of organizations in Vietnam admit they lack confidence in their ability to detect employees using AI beyond their control. This is the Shadow AI phenomenon: employees using AI tools in their work without oversight or clear policies from the IT department.

This inadvertently opens the door to risk. An employee could copy critical data and feed it into a public AI model for processing without knowing where that data is stored or how it could be exploited. The same survey found that while 44% of employees in Vietnamese organizations use approved GenAI tools, up to 40% of IT departments admit they do not know how their employees interact with AI.
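As a purely illustrative sketch of how an IT team might begin surfacing Shadow AI, the Python snippet below scans outbound proxy log lines for requests to well-known public GenAI hosts that are not on an internally approved list. The log format, field order, and both host lists are assumptions made for this example, not details taken from the Cisco survey or any particular product.

```python
# Minimal Shadow AI discovery sketch (illustrative only).
# Assumes a plain-text proxy log where each line holds a timestamp, a username,
# a destination host, and a port, e.g. "2025-08-25T09:14:02 alice api.openai.com 443".
# This layout and both host lists are assumptions for the example.

KNOWN_GENAI_HOSTS = {
    "api.openai.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

# Hosts the organization has explicitly approved (hypothetical policy list).
APPROVED_HOSTS = {
    "copilot.microsoft.com",
}

def find_shadow_ai(log_lines):
    """Return (user, host) pairs for GenAI traffic outside the approved list."""
    findings = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, host = parts[1], parts[2]
        if host in KNOWN_GENAI_HOSTS and host not in APPROVED_HOSTS:
            findings.append((user, host))
    return findings

if __name__ == "__main__":
    sample_log = [
        "2025-08-25T09:14:02 alice api.openai.com 443",
        "2025-08-25T09:15:10 bob copilot.microsoft.com 443",
    ]
    for user, host in find_shadow_ai(sample_log):
        print(f"Unapproved GenAI use: {user} -> {host}")
```

A simple report like this will not stop data leakage on its own, but it gives the IT department a starting point for the visibility that 40% of surveyed teams say they lack.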

If Dark AI is a threat from the outside, Shadow AI is a silent threat from within. Together, the two make the picture of Vietnam’s cybersecurity in the AI era complex and full of challenges.

5. AI cannot distinguish right from wrong

One point bears emphasizing. As Mr. Lozhkin put it: “AI does not inherently distinguish right from wrong, it only follows instructions.” This is the core reason AI can be both a useful tool and a dangerous weapon.

Artificial intelligence has no concept of morality; it simply processes information and produces the results it is asked for. What determines its character is the intent of its users. Trained and operated in a healthy environment, AI brings great benefits. In the wrong hands, it becomes a “powerful assistant” for illegal activity.

This places a huge responsibility on the technology development community: designing control mechanisms strong enough to hold, while raising public awareness so that users are not easily drawn to Dark AI tools.

6. Solutions to deal with Dark AI

In the face of increasing risks, cybersecurity experts such as those at Kaspersky have made many recommendations. One of the most important measures is to tighten access controls and train employees. Staff need the knowledge to recognize the risks of using AI, as well as a clear understanding of data security policies.

Organizations also need to invest in a security operations center (SOC) that can monitor and detect threats in real time, and deploying next-generation security solutions with built-in AI-powered attack detection is becoming unavoidable.
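To make “real-time detection” a little more concrete, the snippet below is a deliberately simplified, rule-based sketch of the kind of signal a SOC pipeline might raise on inbound email: it flags messages that pair urgency wording with links whose domains do not match the claimed sender. The field names and word list are assumptions for the example; this is a toy heuristic, not an AI-powered detector or any vendor’s actual method.

```python
import re
from urllib.parse import urlparse

# Toy phishing-signal heuristic (illustrative only, not a real SOC rule set).
URGENCY_WORDS = {"urgent", "immediately", "verify your account", "suspended"}

def phishing_score(sender_domain: str, subject: str, body: str) -> int:
    """Return a simple integer score; higher means more suspicious."""
    score = 0
    text = (subject + " " + body).lower()
    # Signal 1: pressure language typical of phishing lures.
    if any(word in text for word in URGENCY_WORDS):
        score += 1
    # Signal 2: links whose host does not belong to the claimed sender's domain.
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).netloc.lower()
        if sender_domain.lower() not in host:
            score += 2
    return score

if __name__ == "__main__":
    score = phishing_score(
        "examplebank.com",
        "Urgent: verify your account",
        "Click https://examplebank.security-check.xyz/login immediately",
    )
    print(score)  # 3: urgency wording plus a mismatched link domain
```

In a real SOC, a signal like this would be only one of many indicators correlated with sender reputation, attachment analysis, and behavioral telemetry before an alert is raised.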

For individuals, awareness is key: the emails, messages, and videos they see online may not be human-made at all, but the work of Dark AI. Fact-checking skills, critical thinking, and caution in sharing personal data will be essential “shields.”

7. The Future of Dark AI 

Many experts believe that Dark AI will continue to develop alongside mainstream AI. Any technology has its good and bad sides, and history has proven that it is impossible to completely eliminate the dark side. The problem is how society will manage and deal with it.

Some possible future scenarios:

Dark AI as a cyberwarfare tool: Nations could develop AI for military purposes, including attacking enemy critical infrastructure.

Large-scale deepfake explosion: Fake videos and images become so sophisticated that real and fake can no longer be told apart, triggering a crisis of trust in society.

AI support for individual crime: Small-time scammers can also use Dark AI to reach thousands of victims at once, multiplying the damage.

Competition to control AI: Tech companies and governments may race to build regulatory systems, leading to conflicts of interest.

In all these scenarios, the common thread is the need for an international legal system and global standards for AI ethics.

8. Conclusion

Dark AI is not just a technical term but a symbol of the dark side of technological progress. It shows that every step forward for humanity comes with new challenges. As AI becomes an indispensable part of life, preventing it from being turned against us becomes ever more urgent. Responding is not only the job of cybersecurity organizations and governments; each individual must also take responsibility for using AI safely and ethically. Like a double-edged sword, AI can be a life-saving tool or a destructive weapon; it all depends on how we choose to use it. In a digital age where Dark AI is rising fast, sobriety and global cooperation will be the key for humanity to turn AI into a protective shield instead of a weapon of self-destruction.
