The word “free” has always tempted employees who are looking for an app or template to make their work easier. These days, combine “free” with “AI” and the lure is almost irresistible.
Since the release of ChatGPT in late 2022, free AI-themed apps have exploded. Unfortunately, some are created by threat actors. One of the latest examples, reported this week by Malwarebytes, claims to be an AI video editor but actually installs the Lumma Stealer malware.
Victims were lured by promises like “Create breathtaking videos in minutes,” “No special skills required – anyone can do it,” and “On September 1 we’re giving away 50 lifetime licenses to our AI editor!”
According to a report released last month by Slack, AI use in the enterprise is growing. Among those employees who are using AI applications, 81% said it has improved their productivity. That’s why some may be curious – or eager – to try a free AI app.
However, that same report notes that nearly 40% of respondents said their company has no AI usage guidelines. One result: Shadow AI, defined as the unapproved use of artificial intelligence-based applications.
CISOs need a strategy to cope. It starts with management deciding if it wants to allow the use of AI in the workplace at all.
No magic tricks
To stop employees from falling for phony AI apps, there are no magic tricks – it’s just standard awareness training for preventing the installation of any unwanted application. Tell staff, “There’s a company rule: don’t download unapproved applications” (or the reverse: “only download approved apps”).
If there isn’t a list of approved apps, there should be a rule that IT must approve anything added to an employee’s computer beyond what the company has already installed.
If it hasn’t already done so, IT also needs to configure whatever operating system the organization uses so only those with administrator accounts — and there should be very few employees with that access — can install applications.
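For organizations that want a quick read on how widely admin rights have spread, a script along these lines can enumerate local administrators on a Windows endpoint. This is a minimal sketch, not an official tool: it assumes the built-in net localgroup command and the default English “Administrators” group name, which differs on localized Windows builds.

```python
# A minimal sketch (Windows only): list the accounts in the local
# Administrators group -- i.e., who can install software. Assumes the
# built-in "net localgroup" command and the default English group name,
# which differs on localized Windows builds.
import subprocess

def local_admins() -> list[str]:
    out = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    # Members appear between the dashed separator and the footer line.
    start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
    return [
        line.strip()
        for line in lines[start:]
        if line.strip() and not line.startswith("The command completed")
    ]

if __name__ == "__main__":
    for account in local_admins():
        print(account)
```

In a managed environment the same audit is better done centrally via Group Policy or MDM reporting, but a spot check like this can reveal how far from “very few” the real number is.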
“AI has spurred broad interest across all audiences, from cybercriminals looking to perfect their scams to everyday consumers interested in learning more and hacking their productivity with new AI-powered tools,” Pieter Arntz, a Malwarebytes intelligence researcher, told CSO in an email. “This onslaught of interest has sparked a flurry of AI-related scams, and I don’t see them stopping anytime soon.
“Most cybercriminals are focused on making money, and they’ll take advantage of any new cultural moment to dupe users. I’ve seen scams ranging from a free trial with a very shoddy product to straight-out malware downloads. I caution people to be wary of new, free tools and to use a browser extension that blocks malware and phishing.”
According to Malpedia, Lumma Stealer (also known as LummaC2 Stealer) is an information stealer available through a malware-as-a-service model on Russian-speaking criminal forums since at least August 2022. It primarily targets cryptocurrency wallets and two-factor authentication browser extensions before ultimately stealing sensitive information from the victim’s machine. Once the targeted data is obtained, Malpedia notes, it is exfiltrated to a C2 (command and control) server via HTTP POST requests using the user agent “TeslaBrowser/5.5”. The stealer also features a non-resident loader that is capable of delivering additional payloads via EXE, DLL, and PowerShell.
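That user agent string is itself a usable network indicator. As a rough illustration (not a production detection), the Python sketch below flags proxy or firewall log lines that contain it; the log paths and plain-text line format are assumptions, so the parsing would need to be adapted to whatever schema a given proxy actually emits.

```python
# A rough illustration, not a production detection: flag log lines that
# contain the "TeslaBrowser/5.5" user agent Malpedia associates with
# Lumma's HTTP POST exfiltration. Log paths and the plain-text line
# format are assumptions; adapt the parsing to your proxy's schema.
import sys

IOC_USER_AGENT = "TeslaBrowser/5.5"

def scan(log_path: str) -> None:
    with open(log_path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if IOC_USER_AGENT in line:
                print(f"{log_path}:{lineno}: possible Lumma C2 traffic: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:  # e.g., python scan_logs.py proxy.log
        scan(path)
```

In practice this check belongs in a SIEM query or IDS rule rather than a standalone script, but the indicator is the same.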
Lumma is often distributed via email campaigns, the Malwarebytes report says, but nothing stops threat actors from spreading it as a download for an AI editor, as they did in this example.
To stop infections, CISOs should implement Cybersecurity 101. That means not only security awareness training but also making phishing-resistant multifactor authentication mandatory for all employees and monitoring IT networks for suspicious behavior.
Infosec pros looking for signs of infection from this particular app should hunt for files named “Edit-ProAI-Setup-newest_release.exe” on Windows and “EditProAi_v.4.36.dmg” on macOS.
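For a quick endpoint sweep, those filenames can be fed to a hypothetical script like the Python sketch below. The search roots here (Downloads and Desktop) are assumptions about where a lured user would save an installer; an EDR query or full-disk scan is the more thorough option.

```python
# A minimal hunting sketch using the installer names from the
# Malwarebytes report. The search roots are assumptions (typical
# download locations); widen them for a full-disk sweep, or feed the
# names to your EDR instead.
import hashlib
from pathlib import Path

IOC_NAMES = {
    "Edit-ProAI-Setup-newest_release.exe",  # Windows installer
    "EditProAi_v.4.36.dmg",                 # macOS installer
}
SEARCH_ROOTS = [Path.home() / "Downloads", Path.home() / "Desktop"]

def hunt() -> None:
    for root in SEARCH_ROOTS:
        for path in root.rglob("*"):
            if path.is_file() and path.name in IOC_NAMES:
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                print(f"HIT: {path} (sha256={digest})")

if __name__ == "__main__":
    hunt()
```

Any hit should be treated as the start of a fuller investigation, since the installer’s presence suggests the stealer may already have run.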