Vibe Coded Ransomware | AI companies breached | Public exposed AI servers
#201 - Prompting AI to generate ransomware 🧱| Breach at AI Chatbot 💬 maker | Exposed Ollama 🦙 servers
There’s an AI 🤖 for that cyber attack
From real-time, prompt-coded ransomware to exposed LLM servers, we have it all.
You’ll be excused for thinking this is an AI newsletter and not a cybersecurity one, given the number of times I talk about AI. There’s just so much happening in the world of AI and cybersecurity that it seems the other big news items get lost.
For example, I am not writing about the TransUnion breach that exposed the data of 4.4 million customers. Yawn! TransUnion is a credit bureau. Since it computes your credit score, it has full access to your personal financial information. But we will stick to the flavor of the season - AI.
1️⃣ There’s a new type of ransomware out there. It’s called PromptLock. It’s not your regular, run-of-the-mill ransomware. It’s most likely the first ransomware to use an OpenAI LLM as its engine, creating new scripts on the fly.
The ransomware carries pre-defined prompt templates that it feeds to the model to generate Lua scripts. Those scripts let it hunt for files, exfiltrate them, and then encrypt them. The security researcher who discovered it has a LinkedIn post and a BlueSky post too.
I’m wondering if the ransomware developer had a CEO 👩🏻💼 who sat up one day and said, “Let’s put AI in that..” 😀
2️⃣ Moving on from vibe-coded ransomware. In the rush to deploy AI, people are forgetting to implement basic security controls. Cisco’s security research team has discovered over 1,000 Ollama servers exposed to the internet without a care in the world.
Ollama is a piece of software that allows you to download and run LLMs locally. You install Ollama on your machine, pull a model, and start chatting with it. Easy, really! If you have not tried it, you should. Here is a link to the Ollama site. The catch is that Ollama is meant to run on your local system or local network. When you want to run large models (> 20bn parameters), you need a server-class machine. This is where, I am guessing, people end up hosting their Ollama servers in the cloud.
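How do researchers find these? By default Ollama listens on port 11434 and answers its `/api/tags` endpoint (which lists installed models) with no authentication at all. A minimal sketch of that check, assuming you are probing hosts you own:

```python
import json
import urllib.request
import urllib.error

def ollama_is_exposed(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Probe Ollama's unauthenticated /api/tags endpoint on a host.

    A reachable Ollama instance answers with a JSON object containing
    a "models" list; anything else (refused, timeout, non-Ollama
    service) is treated as not exposed.
    """
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
            return "models" in data
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

Only probe machines you are responsible for. The fix is equally simple: leave Ollama bound to localhost (its default), or put it behind a reverse proxy that enforces authentication before you ever point it at a cloud interface.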
Here is a paragraph from the post that hits home:
“Tziakouris concluded the findings of the Cisco study “highlight a widespread neglect of fundamental security practices such as access control, authentication, and network isolation in the deployment of AI systems.” As is often the case when organizations rush to adopt the new hotness, frequently without informing IT departments because they don’t want to be told to slow down and do security right.”
In a rush to deploy AI systems because some CEO 👩🏻💼 said “Let’s put AI in that”, security gets neglected.
3️⃣ AI-first companies are mushrooming around us. They promise everything in AI. They want to access your data, create embeddings, and do all things AI with it. This company wanted to convert Salesforce customer interactions into leads. However, they had a breach that not only compromised customer data but also compromised API tokens for various other services. Brian Krebs has a detailed post about this.
So, you see, it’s AI all the way in cybersecurity these days.
Take Action:
Check whether the ransomware detectors in your EDRs and XDRs can identify prompt templates as malicious files that can generate code on the fly.
If you are a developer (vibe coders included) and want to succumb to the pressure of pushing AI apps into production, remember that there is a big gap between “it works on my system” and “it works securely on the server”.
Remember that some of the open source AI aggregation frameworks are not production-ready.
If you are an AI startup, remember to build security from the beginning or you will run into all sorts of sticky situations.
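To make the first action item concrete, here is a crude heuristic sketch of what "treating prompts as malicious files" could look like: scan files for clusters of prompt-template phrases instead of binary signatures. The indicator patterns below are hypothetical examples I made up for illustration; a real EDR/XDR would use far richer rules (YARA, behavioral telemetry, model-API call tracing).

```python
import re
from pathlib import Path

# Hypothetical indicator patterns, for illustration only. Real
# detection rules would be curated from actual samples.
PROMPT_INDICATORS = [
    re.compile(r"(?i)you are a .{0,40}(assistant|script|generator)"),
    re.compile(r"(?i)generate (a )?lua (code|script)"),
    re.compile(r"(?i)encrypt (all|the) files"),
    re.compile(r"(?i)ignore (all )?previous instructions"),
]

def looks_like_malicious_prompt(text: str, threshold: int = 2) -> bool:
    """Flag text that matches several prompt-template indicators.

    Requiring multiple hits keeps a single innocent phrase from
    triggering the detector.
    """
    hits = sum(1 for pattern in PROMPT_INDICATORS if pattern.search(text))
    return hits >= threshold

def scan_directory(root: str) -> list:
    """Return paths of files whose contents resemble embedded prompts."""
    flagged = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if looks_like_malicious_prompt(text):
            flagged.append(str(path))
    return flagged
```

String matching like this is trivially evaded, which is exactly the point of the action item: ask your EDR/XDR vendor what they do beyond it.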