The bad news about AI is that the “good guys” aren’t the only ones using it; cybercriminals use it too. Criminals have built doppelgangers of ChatGPT, such as WormGPT and GhostGPT, that can write convincing fake emails and conjure up chatbots that seem real.
How do they do it? Large language models let them craft messages that feel genuine to the recipient. An attacker doesn’t even have to speak the target’s language to sound authentic; the AI does it for them.
The abundance of public data about people, including their pictures and voices, makes it easy to clone voices and personalize messages. With AI-generated deepfakes, attackers can trick employees into handing over company money or information without a second thought.
AI also makes it easier to impersonate trusted services like DocuSign. Cybercriminals upload malicious content to a trusted site and then email a link to that content, so the message appears to come from the trusted site itself.
The popularity of QR codes has led to “quishing,” or QR code phishing. A malicious QR code disguised as a login request or delivery notification in an email bypasses the usual spam filters. Scanning it leads to a site that appears legitimate but instead harvests credentials.
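One simple defensive habit is to inspect the URL a QR code actually encodes before visiting it. The sketch below is a minimal illustration, assuming the URL has already been extracted from the QR image by a decoder; the `TRUSTED_DOMAINS` allowlist and the function name are hypothetical examples, not part of any real product.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains your organization trusts (illustrative only).
TRUSTED_DOMAINS = {"docusign.com", "docusign.net"}

def qr_url_is_trusted(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A URL that merely mentions a trusted brand in its path is NOT trusted:
print(qr_url_is_trusted("https://evil.example/docusign/login"))  # False
print(qr_url_is_trusted("https://app.docusign.com/view"))        # True
```

Note the check matches the host, not the URL text: a phishing link can freely include a trusted brand name in its path or query string, so only the registered domain is a meaningful signal.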
Use AI to beef up your own security systems and stay ahead of AI-enhanced threats. Require MFA or biometric authentication, and train your employees on the newest threats so they don’t fall victim to attacks. And the low-tech check still works: walk over to a colleague’s desk and ask whether that email is legit.
Author: Kris Keppeler, a curious writer who finds technology fascinating. Follow her on X (Twitter) @KrisNarrates, on Medium.com @kriskeppeler, and on LinkedIn.