The rise of large language models (LLMs) in code generation tools has introduced new risks to the software supply chain. These AI-powered assistants, like LLMs in general, have a tendency to “hallucinate,” suggesting code that incorporates non-existent software packages. Researchers have found that about 5.2% of package suggestions from commercial models don’t exist, compared to 21.7% from open-source or openly available models.
Miscreants are registering these hallucinated package names on registries like PyPI or npm and uploading malicious packages under them. When an AI code assistant re-hallucinates a co-opted name, developers who follow the suggestion can end up installing malware. Security firm Socket has found that these malicious packages often ship with realistic-looking READMEs, fake GitHub repositories, and even dubious blog posts to make them appear authentic.
The problem is exacerbated by the fact that AI-generated summaries from search engines like Google sometimes praise these malicious packages, lending them a false sense of legitimacy. The technique, dubbed “slopsquatting,” is a twist on typosquatting, in which attackers register variations or misspellings of popular package names to dupe careless installers; here the bait is a name an LLM invented.
To mitigate this risk, developers should check LLM-suggested dependencies against reality before installing them (see the sketch below). The Python Software Foundation is working on making this kind of package abuse harder, and organizations can run an internal PyPI mirror to control which packages their developers can install. Ultimately, users must verify the authenticity of the software packages they install to avoid falling victim to these AI-driven supply chain attacks.
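As a concrete illustration of that “check against reality” advice, the sketch below (not from the article) queries PyPI’s public JSON API (https://pypi.org/pypi/<name>/json) to confirm that a suggested package name is at least registered before you run pip install. Existence alone is not proof of legitimacy, since a slopsquatted name may already be occupied by a malicious upload, but a 404 is a strong hint that the assistant hallucinated the dependency.

```python
# Sketch: verify that LLM-suggested package names exist on PyPI before installing.
# Caveat: a registered name can still be malicious; review the project page,
# maintainers, and release history before trusting it.

import json
import sys
import urllib.error
import urllib.request


def package_exists(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp).get("info", {})
            # Print a little provenance to aid manual review.
            print(f"{name}: found (author={info.get('author')!r}, "
                  f"home page={info.get('home_page')!r})")
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: NOT found on PyPI -- possibly a hallucinated name")
            return False
        raise


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        package_exists(pkg)
```

Run it as, for example, `python check_pkg.py requests some-suggested-name`. Organizations that mirror PyPI internally can go further and point pip at the mirror (for instance via pip’s `--index-url` option or a pip configuration file), so that only vetted packages are installable in the first place.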
Source: https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain