WormGPT Makes a Comeback Using Jailbroken Grok and Mixtral Models


Despite its reported shutdown in 2023, WormGPT, an uncensored artificial intelligence (AI) tool built for illegal acts, is making a comeback. New research from Cato CTRL, the threat intelligence team at Cato Networks, reveals that WormGPT is now exploiting powerful large language models (LLMs) from well-known AI companies, including xAI’s Grok and Mistral AI’s Mixtral.

This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of these advanced LLMs (AI systems that generate human-like text, like OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.

WormGPT first appeared in March 2023 on an underground online forum called Hack Forums, with its public release following later in mid-2023, as reported by Hackread.com. The creator, known by the alias Last, reportedly started developing the tool in February 2023.

WormGPT was initially based on GPT-J, an open-source LLM developed in 2021. It was offered for a subscription fee, typically between €60 and €100 per month, or €550 annually, with a private setup costing around €5,000.

However, the original WormGPT was shut down on August 8, 2023, after investigative reporter Brian Krebs published a story identifying the person behind the Last alias as Rafael Morais, leading to widespread media attention.

Despite this, WormGPT has now become a recognized brand for a new group of such tools. Security researcher Vitaly Simonovich from Cato Networks stated, “WormGPT now serves as a recognizable brand for a new class of uncensored LLMs.”

He added that these new versions aren’t entirely new creations; instead, criminals build them by modifying existing LLMs, altering the hidden instructions known as system prompts and possibly training the models on illegal data.

Cato CTRL’s research found previously unreported WormGPT variants advertised on other cybercrime forums like BreachForums. For example, a variant named “xzin0vich-WormGPT” was posted on October 26, 2024, and “keanu-WormGPT” appeared on February 25, 2025. Access to these new versions is via Telegram chatbots, also on a subscription basis.

WormGPT Advert (Source: Cato CTRL)

Through their testing, Cato CTRL confirmed that keanu-WormGPT is powered by xAI’s Grok, while xzin0vich-WormGPT is based on Mistral AI’s Mixtral. This means criminals are successfully using top-tier commercial LLMs to generate malicious content like phishing emails and scripts for stealing information.

keanu-WormGPT reveals the malicious chatbot has been powered by Grok (Screenshot: Cato Networks)

The emergence of these tools, alongside other uncensored LLMs like FraudGPT and DarkBERT, shows a growing market for AI-powered crime tools and highlights the constant challenge of securing AI systems.

J Stephen Kowski, Field CTO at SlashNext Email Security+, commented on the latest development, stating, “The WormGPT evolution shows how criminals are getting smarter about using AI tools – but let’s be honest, these are general-purpose tools and anyone building these tools without expecting malicious use in the long term was pretty naive.”

“What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,” he warned. “This means organizations need to think beyond just blocking known bad tools and start looking at how AI-generated content behaves, regardless of which platform created it.”
