How the new dirty word in AI cybercrime, Slopsquatting, threatens your business

We all know AI hallucinates. I once spent fifteen minutes trying to find more information on a specific date-related topic, only to discover the AI had made it up. My trust in AI hit rock bottom that day.

As if the hallucinations weren’t bad enough, cybercriminals are now exploiting them to trick coders. It’s called slopsquatting. How did it get that name?

Slop refers to the inferior content AI generates, and squatting comes from typosquatting, the established scam in which cybercriminals register misspelled versions of domain names to take advantage of users’ typing errors.

Slopsquatting has been around for a while, but it’s growing more frequent because the use of AI is exploding. Major AI companies, such as OpenAI with ChatGPT, offer coding assistants. Your coding assistant speeds things up, but it can go off the rails.

When a coding assistant hallucinates a package name that sounds genuine, the build grinds to a halt at install time because the package isn’t real. Cybercriminals exploit this by registering those hallucinated names themselves and publishing malware-laden packages under them, so the next install quietly succeeds.

Languages like Python rely on public repositories, such as PyPI, for pre-written packages. Now, if you’re not paying attention, that malware-loaded package gets pulled into your project, because the installer treats any registered package as legitimate.

To reduce the security risk, ask your assistant to verify that the package names it suggests are valid; that check catches the problem more than half the time, but not always. Then test the code in a secure, isolated environment before it touches anything that matters.
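
If you want more than the assistant’s word for it, the check is easy to script. The Python Package Index exposes a JSON endpoint at pypi.org/pypi/&lt;name&gt;/json that returns a 404 for names nobody has registered. Here is a minimal sketch of that lookup; the second package name in the example is a hypothetical hallucination I made up for illustration.

```python
# Minimal sketch: check whether a package name is actually registered on PyPI.
# A miss means the assistant probably hallucinated the name. A hit is NOT
# proof of safety, since slopsquatters register real packages under fake
# names, so treat this as a first filter, not a verdict.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has any package registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # the name was never registered on PyPI
        raise  # other HTTP errors (rate limits, outages) need a human look

# "turbo-json-hyperparse" is a hypothetical stand-in for a hallucinated name.
for candidate in ("requests", "turbo-json-hyperparse"):
    status = "registered" if exists_on_pypi(candidate) else "NOT registered"
    print(f"{candidate}: {status}")
```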

Automate scanning for weaknesses, and assign a human to review new or unfamiliar packages, especially in workflows that rely heavily on AI.
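
Tools such as pip-audit, a Python Packaging Authority project, can handle the automated scan for known-vulnerable dependencies. For the human-review step, a short script can at least decide which dependencies deserve that review. The sketch below leans on the same PyPI endpoint and one crude heuristic: a package whose first release is only weeks old is exactly the kind of new, unfamiliar dependency a person should inspect before it ships. The 90-day cutoff is my own illustrative threshold, not an industry standard.

```python
# Minimal sketch: flag dependencies whose first PyPI release is suspiciously
# recent, so a human reviews them before they are trusted. The 90-day
# threshold is an illustrative assumption, not an industry standard.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def first_release(name: str) -> datetime | None:
    """Return the upload time of the oldest file ever released for a package."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    uploads = [
        datetime.fromisoformat(item["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for item in files
    ]
    return min(uploads) if uploads else None

def needs_human_review(name: str, max_age_days: int = 90) -> bool:
    """True if the package is newer than max_age_days or has no releases."""
    born = first_release(name)
    if born is None:
        return True  # registered but empty: worth a look on its own
    return datetime.now(timezone.utc) - born < timedelta(days=max_age_days)

# A long-established package should come back False.
print("requests:", needs_human_review("requests"))
```

Wire a check like this into your build pipeline, and every new name an AI assistant introduces gets a second pair of eyes before it gets a foothold.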

Author: Kris Keppeler, a curious writer who finds technology fascinating. Follow her on X (Twitter) @KrisNarrates, on Medium.com @kriskeppeler, and LinkedIn.
