

AI researchers warn that hallucinated package names can be hijacked in "slopsquatting" attacks

AI-driven hallucinations pose a looming software supply chain risk, dubbed "slopsquatting," in which attackers register the package names that code-generation models invent.


In a recent study, researchers from Virginia Tech and the universities of Oklahoma and Texas revealed a concerning new type of supply chain attack called "slopsquatting." The name is a play on "typosquatting," a tactic long used in phishing campaigns in which threat actors register slightly misspelled versions of legitimate domains.

The study tested 16 code-generation Large Language Models (LLMs), prompting them to generate 576,000 Python and JavaScript code samples. The results were startling: the models produced roughly 205,000 unique hallucinated package names. These hallucinated packages were semantically convincing, making them difficult for developers to spot by sight.
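Developers can run the same kind of existence check that counting hallucinations implies. Below is a minimal Python sketch that queries PyPI's public JSON API to flag package names that are not registered; the candidate names are illustrative examples, not data from the study.

    # Flag package names that do not exist on PyPI via its public JSON API.
    # The candidate names below are illustrative, not taken from the study.
    import requests

    def exists_on_pypi(package: str) -> bool:
        """Return True if `package` is a registered PyPI project."""
        resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
        return resp.status_code == 200

    # Names extracted from LLM-generated code (illustrative values).
    candidates = ["requests", "flask", "totally-made-up-http-lib"]
    hallucinated = [name for name in candidates if not exists_on_pypi(name)]
    print(hallucinated)  # unregistered names are exactly what a slopsquatter could claim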

The study found that 58% of hallucinated packages were repeated more than once across re-runs of the same prompts, while 39% never reappeared, meaning many hallucinations are persistent rather than one-off noise. The study also found that the likelihood of a package name being hallucinated increases with the "temperature" of the LLM, that is, when the model is configured to produce more random responses.
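For readers unfamiliar with the setting, "temperature" is a standard sampling parameter exposed by most LLM APIs. Here is a minimal sketch using the openai Python client; the model name is illustrative, and any comparable API exposes the same knob.

    # Higher temperature means more random sampling; per the study, that also
    # means more hallucinated package names. The model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Write Python code to parse a TOML file."}],
        temperature=1.5,  # compare the imports produced at 0.0 vs. 1.5
    )
    print(response.choices[0].message.content)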

In a slopsquatting attack, a threat actor prompts an LLM to generate code, notes any open source packages it references that don't actually exist, and then publishes a fake package under the hallucinated name to an official repository, inserting malicious code into it. The technique poses a particular risk for developers who practice "vibe coding" and are more likely to blindly trust AI output. If a single hallucinated package becomes widely recommended by AI tools, the potential for widespread compromise is real.

The best way to mitigate slopsquatting is for developers to proactively monitor every dependency and to vet packages with dedicated tools before adding them to a project. Many developers trust the output of AI tools without rigorous validation, leaving a wide-open window of opportunity for attackers. The study's findings point to rising security risks in AI-generated code.
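One way to operationalize that vetting is a pre-install gate. The sketch below, again using PyPI's JSON API, rejects names that don't exist and warns on very new projects; the 90-day threshold is an assumption for illustration, not a rule from the study.

    # Pre-install vetting sketch: reject hallucinated names, warn on very new
    # packages. The age threshold is an illustrative assumption.
    from datetime import datetime, timezone
    import requests

    MIN_AGE_DAYS = 90  # assumed cutoff: brand-new packages deserve extra scrutiny

    def vet_package(package: str) -> str:
        resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
        if resp.status_code != 200:
            return "REJECT: not on PyPI (possibly a hallucinated name)"
        releases = resp.json().get("releases", {})
        uploads = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in releases.values()
            for f in files
        ]
        if not uploads:
            return "REJECT: no published releases"
        age_days = (datetime.now(timezone.utc) - min(uploads)).days
        if age_days < MIN_AGE_DAYS:
            return f"WARN: first release is only {age_days} days old"
        return "OK: registered with an established release history"

    print(vet_package("requests"))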

The term "slopsquatting" is credited to Python Software Foundation (PSF) developer in residence Seth Larson. The study's findings underscore the importance of vigilance in the face of increasingly sophisticated attacks: as AI tools become more prevalent in software development, it's crucial for developers to stay informed and take proactive measures to protect their projects from potential threats.
