Model Namespace Reuse Flaw Hijacks AI Models on Google and Microsoft Platforms

A new security vulnerability called ‘Model Namespace Reuse’ allows attackers to hijack AI models on Google, Microsoft, and open-source platforms. Discover how attackers can secretly replace trusted models and what can be done to stop it.
A new security vulnerability has been discovered that could allow attackers to hijack popular AI models and infect systems on major platforms like Google’s Vertex AI and Microsoft’s Azure AI Foundry. The research, conducted by the Unit 42 team at Palo Alto Networks, revealed a critical flaw they call “Model Namespace Reuse.”
AI models are commonly identified by a simple naming convention, Author/ModelName. This name, or "namespace," is how developers reference models, much like a website address. The convention is convenient, but it can be exploited: the research shows that when a developer deletes their account or transfers ownership of a model on the popular platform Hugging Face, that model's name becomes available for anyone to claim.
This simple yet highly effective attack involves a malicious actor registering a now-available model name and uploading a new, harmful version of the model in its place. For example, if a model named DentalAI/toothfAIry was deleted, an attacker could recreate the name and insert a malicious version.
Because many developers' programs are configured to pull models by name alone, their systems would unknowingly download the malicious version instead of the original, trusted one. That gives the attacker a backdoor into the system and, potentially, control over the affected device.
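As a minimal sketch of that risky pattern, the Python snippet below uses the Hugging Face transformers library and the hypothetical DentalAI/toothfAIry name from the example above; it simply fetches whatever model currently lives under that namespace, which is exactly the behaviour the attack abuses.

```python
# Sketch of the risky "pull by name alone" pattern.
# "DentalAI/toothfAIry" is the hypothetical namespace from the article;
# whichever account currently controls that name decides what gets downloaded.
from transformers import AutoModel, AutoTokenizer

model_name = "DentalAI/toothfAIry"

# No revision or commit hash is pinned, so the latest contents published
# under this namespace are fetched and loaded.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# The exposure is even larger if trust_remote_code=True is passed, since
# code shipped inside the downloaded repository is then executed on load.
```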
The Unit 42 team demonstrated this by taking over a model name on Hugging Face that was still being referenced by Google's Vertex AI and Microsoft's Azure AI Foundry. Through this method, they were able to gain remote access to the platforms. The team responsibly disclosed their findings to both Google and Microsoft, who have since taken steps to address the issue.
This discovery shows that trusting AI models based solely on their names is not enough to guarantee their security, and it highlights a widespread problem in the AI community. The flaw affects not only large platforms but also thousands of open-source projects that rely on the same naming system.
To stay safe, the researchers recommend that developers "pin" a model to a specific, verified version so their code does not automatically pull whatever is published under that name. Another option is to download models, check them thoroughly, and store them in a trusted internal location, which removes the risk of upstream changes. Ultimately, securing the AI supply chain requires everyone from platform providers to individual developers to be more vigilant about verifying the models they use.
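The sketch below illustrates both mitigations using the Hugging Face transformers and huggingface_hub libraries. The repository name is the hypothetical example used earlier, and the commit hash is a placeholder, not a real value.

```python
# Hedged sketch of the two mitigations: pinning to an immutable revision,
# and mirroring a vetted copy to internal storage.
from huggingface_hub import snapshot_download
from transformers import AutoModel

REPO_ID = "DentalAI/toothfAIry"                 # hypothetical example namespace
PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit SHA

# Mitigation 1: load an exact, immutable revision instead of "latest".
# If the namespace is deleted and re-registered, this commit will not exist
# in the attacker's repository, so the download fails instead of succeeding silently.
model = AutoModel.from_pretrained(REPO_ID, revision=PINNED_REVISION)

# Mitigation 2: download the vetted revision once, keep it in a trusted
# internal location (artifact store or internal mirror), and load from disk.
local_path = snapshot_download(
    repo_id=REPO_ID,
    revision=PINNED_REVISION,
    local_dir="/models/toothfAIry",
)
model = AutoModel.from_pretrained(local_path)
```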
Adding to the conversation, Garrett Calpouzos, Principal Security Researcher at Sonatype, shared his perspective exclusively with Hackread.com regarding this discovery.
Calpouzos explains that “Model Namespace Reuse isn’t a net-new risk, it’s essentially repo-jacking by another name.” He notes that this is a known attack vector in other software ecosystems, which is why some platforms have introduced “security-holding” packages to prevent attackers from reclaiming deleted names.
For businesses, he advises that “names aren’t provenance,” meaning that a model’s name alone doesn’t prove its origin or safety. He recommends that organisations “pin to an immutable revision,” which means locking a model to a specific, unchangeable version. By verifying these unique identifiers during a build, you can either “block the attack outright or detect it immediately.”
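As an illustration of that kind of build-time check, the sketch below (again with the hypothetical repository name and a placeholder hash) asks the Hugging Face Hub which commit is currently served under the namespace and fails the build if it no longer matches the pinned revision.

```python
# Hedged sketch of a build-time identity check: compare the commit currently
# resolved for a namespace against the immutable revision the build expects.
from huggingface_hub import HfApi

REPO_ID = "DentalAI/toothfAIry"                                   # hypothetical namespace
EXPECTED_SHA = "0123456789abcdef0123456789abcdef01234567"         # placeholder pinned SHA

info = HfApi().model_info(REPO_ID)
if info.sha != EXPECTED_SHA:
    raise RuntimeError(
        f"Refusing to build: {REPO_ID} now resolves to {info.sha}, "
        f"not the pinned revision {EXPECTED_SHA}."
    )
```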