Craft CMS Under Fire: Hackers Exploit Serious Security Holes, Hundreds of Sites Likely Hit
Hold on to your hats, folks, because there's some serious stuff going down in the world of web security. Hackers have found a way to sneak into websites running on Craft CMS, and it's not pretty. We're talking about a chain reaction of security flaws, specifically CVE-2024-58136 and CVE-2025-32432, that, when combined, act like a skeleton key for bad actors.
Think of it this way: there's a back door in Yii, the PHP framework Craft CMS is built on (CVE-2024-58136), that on its own might not be a huge deal. But pair it with a remote code execution weakness in how Craft CMS transforms images (CVE-2025-32432), and suddenly these cyber crooks can waltz right in and start messing with things.
The really sneaky part? The attack goes through Craft CMS's image transformation feature. Attackers send out a flood of automated probes, trying to guess valid image asset IDs. Once they hit the jackpot, they run a Python script that downloads a malicious PHP file manager from a corner of the internet (GitHub, to be exact) and drops it onto the targeted website's server.
Fast forward to April 18, 2025, and the numbers look grim: security researchers have identified around 13,000 potentially vulnerable Craft CMS sites, roughly 300 of which already show signs of compromise.
Now, if you’re a Craft CMS user and this news has you sweating a bit, here’s the lowdown on what you should be doing right now if you suspect trouble:
Change your digital locks: Refresh those security keys.
Mix up your secret sauce: Rotate your database credentials.
Whisper new passwords: Reset all user passwords, just to be safe.
Build a digital wall: Tell your firewall to block any suspicious traffic.
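On the first of those steps: Craft keeps its security key in the project's .env file, and recent versions also ship a console command for regenerating it (`php craft setup/security-key`). As a minimal sketch of doing it by hand, assuming the variable is named CRAFT_SECURITY_KEY (older versions use SECURITY_KEY, so check your own .env first; the helper name here is illustrative):

```python
import re
import secrets
from pathlib import Path

def rotate_security_key(env_path: str, var_name: str = "CRAFT_SECURITY_KEY") -> str:
    """Replace the security key in a .env file with a fresh random value."""
    new_key = secrets.token_hex(32)  # 64 hex chars from a CSPRNG
    env_file = Path(env_path)
    text = env_file.read_text()
    pattern = rf"^{re.escape(var_name)}=.*$"
    if re.search(pattern, text, flags=re.MULTILINE):
        # Replace the existing key in place, leaving other settings untouched.
        text = re.sub(pattern, f"{var_name}={new_key}", text, flags=re.MULTILINE)
    else:
        # No key line found: append one.
        text += f"\n{var_name}={new_key}\n"
    env_file.write_text(text)
    return new_key
```

Rotating the key invalidates data Craft encrypted or signed with the old one (such as cookies and tokens), which is exactly what you want after a suspected compromise.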
Let’s break down how this all unfolded:
Early February (February 10th-11th): The attackers were busy tweaking their tools, repeatedly testing whether they could sneak a file called filemanager.php onto servers.
Mid-February (February 12th): They got a bit smarter, renaming their malicious file to the less obvious autoload_classmap.php.
Valentine's Day (February 14th): That renamed file started causing trouble, and the folks at Orange Cyberdefense SensePost were the first to raise the alarm, spotting the attacks in the wild.
The good news (eventually): The Craft CMS team rolled out fixes for the main vulnerability (CVE-2025-32432) in versions 3.9.15, 4.14.15, and 5.6.17.
As of mid-April (April 18, 2025): The scale of the problem became clearer, with thousands of vulnerable sites identified and hundreds potentially breached.
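Checking whether a given install carries the fix is a simple version comparison against the patched releases the timeline mentions (3.9.15, 4.14.15, 5.6.17). A minimal sketch; the `is_patched` helper is my own illustration, not a Craft API:

```python
# Patched Craft CMS versions for CVE-2025-32432, keyed by major version line.
PATCHED = {3: (3, 9, 15), 4: (4, 14, 15), 5: (5, 6, 17)}

def parse_version(v: str) -> tuple:
    """Turn '4.14.15' into (4, 14, 15) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def is_patched(version: str) -> bool:
    """True if this Craft version includes the CVE-2025-32432 fix."""
    parsed = parse_version(version)
    fixed = PATCHED.get(parsed[0])
    if fixed is None:
        # Unknown major line: assume unpatched and investigate manually.
        return False
    return parsed >= fixed
```

Tuple comparison handles the per-line cutoffs correctly, so 3.10.0 counts as patched while 4.13.x does not.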
What’s the big deal if your site gets hacked? Well, imagine someone breaking into your house. They could steal your valuables (data breaches), spray-paint graffiti all over the walls (website defacement), or even use your place as a base for more shady activities (denial-of-service, launching other attacks). For website owners, this can mean losing trust, facing legal issues, and a whole lot of headaches. And for regular visitors, it could mean their personal info is at risk or they end up on dangerous websites.
While we don’t have a clear picture of who’s behind these attacks and why, the way they’re going about it, chaining vulnerabilities and using scripts to automate the process, suggests they know what they’re doing. Their aim is likely to gain control of web servers for all sorts of malicious purposes, from stealing data to planting malware or even using the compromised sites as part of a larger network of infected computers.
The Craft CMS team has been on the ball, pushing out those crucial updates and offering advice. But the sheer number of vulnerable sites out there highlights a common problem: not everyone updates their software as quickly as they should. Security experts are constantly reminding us that keeping our website software up to date is one of the most important things we can do to stay safe online.
So, what’s the takeaway? This whole Craft CMS situation is a wake-up call. It screams the importance of staying on top of security updates and having good security habits. If you’re running a Craft CMS site, please, please make sure you’ve installed the latest patches and take a good look under the hood for anything suspicious. The web can be a dangerous place, and staying vigilant is our best defence against the bad guys.
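For that "look under the hood", one concrete starting point is scanning the webroot for the filenames used in this campaign. A hedged sketch: the name list is only a starting point, not a complete indicator-of-compromise set, and note that Composer legitimately generates vendor/composer/autoload_classmap.php, so only copies outside the vendor tree are suspicious here.

```python
from pathlib import Path

# Filenames reported in this campaign; extend with your own IoC feed.
SUSPECT_NAMES = {"filemanager.php", "autoload_classmap.php"}

def find_suspect_files(webroot: str) -> list:
    """Walk the webroot and return PHP files whose name matches a known IoC."""
    hits = []
    for path in Path(webroot).rglob("*.php"):
        # Composer legitimately ships vendor/composer/autoload_classmap.php,
        # so skip anything inside the vendor tree.
        if path.name in SUSPECT_NAMES and "vendor" not in path.parts:
            hits.append(str(path))
    return sorted(hits)
```

A clean scan does not prove a clean site; treat any hit as a reason for a full forensic review, not the end of one.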
Darcula Phishing-as-a-Service Kit Adds Generative AI to its Suite
Darcula is a phishing-as-a-service (PhaaS) platform that gives threat actors another income stream by letting subscribers create and deploy phishing campaigns with ease. Available for a paid monthly subscription, the platform offers easy deployment of phishing sites: think Netflix for cybercriminals. Darcula claims to support around 200 phishing templates impersonating well-known brands based in over 100 different countries.
However, recent updates to the Darcula platform have introduced generative artificial intelligence (GenAI) capabilities. The implementation of GenAI within Darcula now allows less tech-savvy cybercriminals to deploy customised phishing scams within minutes, significantly lowering the barrier to entry.
The GenAI update adds features including multi-language support and form generation, meaning a user can generate phishing scams in multiple languages with minimal input and no prior knowledge of any of them.
The GenAI update comes shortly after Darcula operators were seen testing a major update for the PhaaS platform which would allow users to clone any legitimate website and create a phishing version of it, further adding to the capabilities on offer from the subscription service.
With the rise of phishing-as-a-service platforms and their increasing capabilities, it is important to stay safe online. The Cyber Safety and Phishing module from Norm Cyber can educate users on how to spot a likely malicious link. With this education, users will not only be more aware of the tactics attackers use, but will also be better placed to exercise caution before clicking suspicious links. Take a minute to assess a link before clicking, and never grant remote access to your device.
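Part of that assessment can be automated. The sketch below applies a few common red-flag heuristics (plain HTTP, punycode lookalike domains, raw IP hosts, hidden userinfo); these are illustrative only, and a link that passes them is not necessarily safe.

```python
import ipaddress
from urllib.parse import urlparse

def link_warnings(url: str) -> list:
    """Return reasons a link deserves a closer look before clicking.
    Heuristics only: an empty result does not mean the link is safe."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        warnings.append("not using HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode hostname (possible lookalike domain)")
    try:
        ipaddress.ip_address(host)
        warnings.append("raw IP address instead of a domain name")
    except ValueError:
        pass  # host is a normal domain name
    if "@" in parsed.netloc:
        warnings.append("userinfo in URL (real destination may be hidden)")
    if host.count(".") >= 4:
        warnings.append("deeply nested subdomains")
    return warnings
```

Tools like this belong alongside, not instead of, the user education described above.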
Prompt Injection Attacks: A Cyber Security Imperative for Large Language Models
The unstoppable rise of large language models (LLMs) across cyber security, healthcare, finance, and retail brings transformative potential but also grave risks. Prompt injection attacks, ranked in the OWASP Top 10 for LLM applications, exploit LLMs' natural language processing to bypass safety protocols, extract sensitive data, or trigger malicious code execution. Prompt injection has been documented since 2022, and newer, easily accessible techniques like Policy Puppetry now threaten LLMs used in threat detection, SOC operations, and enterprise systems. Cyber security professionals must lead the charge to safeguard these systems. This bulletin outlines the attack types, risks, real-world examples, and mitigation strategies.
Types of Prompt Injection Attacks
Attackers trick LLMs by embedding malicious instructions, exploiting the models' inability to distinguish trusted directives from untrusted input:
Policy Puppetry: Refers to advanced attack techniques where malicious prompts are disguised as legitimate policy structures, such as XML or JSON formats. Adversaries employ methods including leetspeak, role simulation, and token smuggling to obscure their intent further. These techniques allow attackers to embed hidden payloads within prompts, facilitating cross-model attacks and enabling multi-stage exploitation strategies aimed at extracting or manipulating system-level prompts.
Direct Prompt Injection: Direct prompt injection occurs when an attacker submits input that deliberately manipulates a language model's behaviour by providing malicious instructions directly. For example, by telling the model to ignore prior commands or perform unauthorised actions, the attacker can override the model's original purpose.
Indirect Prompt Injection: Indirect prompt injection, on the other hand, involves embedding malicious instructions within external content that the model is later asked to process. Instead of interacting with the model directly, the attacker places hidden prompts inside web pages, documents, or other data sources. When the model ingests or summarises this compromised content, it unknowingly follows the attacker's hidden instructions, potentially spreading false information or directing users to malicious sites. This form of attack exploits the model's inability to differentiate between safe content and embedded commands within the material it analyses.
Jailbreaking: Refers to the deliberate creation of prompts designed to bypass safety protocols, enabling the generation of harmful content, such as CBRN (Chemical, Biological, Radiological, and Nuclear) materials or disinformation. This exploitation undermines ethical safeguards and poses significant risks to both system integrity and public safety.
Prompt Leaking: Attacks in which adversaries extract system prompts, potentially revealing proprietary logic, operational instructions, and sensitive enterprise information. This exposure can undermine security controls, compromise intellectual property, and enable further targeted attacks against the organisation’s systems.
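One mitigation layer against the attack types above is screening untrusted content for obvious injection phrasing before it reaches the model. The pattern list below is a naive, illustrative sketch; real attackers evade keyword filters, so this complements, rather than replaces, architectural defences such as keeping system instructions and untrusted data in separate channels.

```python
import re
import unicodedata

# Phrases commonly seen in injection attempts; illustrative and easily evaded.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) (instructions|commands)",
    r"you are now",
    r"system prompt",
    r"disregard .{0,30}(rules|guidelines)",
]

def screen_untrusted_input(text: str) -> list:
    """Flag obvious injection phrasing in untrusted content before it reaches an LLM.
    One heuristic layer, not a complete defence."""
    # NFKC normalisation collapses some lookalike-character evasion tricks.
    normalised = unicodedata.normalize("NFKC", text).lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, normalised)]
```

Flagged content can then be dropped, sandboxed, or routed for human review rather than passed straight into the prompt.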
Key Risks
Prompt injection attacks threaten multiple sectors. Data breaches expose patient records, financial transactions, or customer data, breaching compliance. Equally critical, malicious content generation produces phishing scripts or misinformation, enabling fraud. System compromise via malicious models grants backend access. Supply chain vulnerabilities, such as malicious models on platforms like Hugging Face, introduce hidden threats across industries. Operational disruption from compromised LLMs yields false threat detections or business failures, eroding trust.
Real-World Examples
Hugging Face Breach: Malicious AI models uploaded to the platform carried pickled payloads that executed code when loaded, targeting critical cyber security tools.
Bing Chat Exploit: In 2023, hidden webpage prompts tricked Bing Chat into leaking user data.
Remoteli.io Bot Sabotage: Injected instructions in tweets forced the remoteli.io Twitter bot to produce rogue responses, exposing its vulnerability.
Conclusion
The relentless growth of LLMs across cyber security, healthcare, finance, and retail amplifies the dangers of prompt injection attacks, which threaten data breaches and system failures. As Policy Puppetry exploits these vulnerabilities, cyber security professionals must lead with advanced threat monitoring, AI supply chain vetting, and zero-trust frameworks. Collaboration across industries is vital to forge resilient defences, ensuring LLMs' transformative power is harnessed securely in this dynamic landscape.