Supply chain attack on BBC, BA & Boots || Your AI is lying to you
CyberInsights #97 - Payroll services provider compromise leads to breach || When your AI hallucinates
When your payroll service provider gets compromised
Securing the supply chain remains a key challenge.
Three big B’s in the UK - the BBC, British Airways & Boots - had their payroll data stolen.
This is how it worked. Payroll for these corporations is outsourced to a company called ‘Zellis’ that claims to be the UK’s leading provider of payroll and HR solutions. Zellis uses software called MOVEit for large file transfers.
A bug was discovered in the MOVEit software earlier this week, and it was exploited. Progress Software, the vendor behind MOVEit, has been tracking and reporting the incident on its website. The bug is a SQL injection vulnerability. Here is the CVE.
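To show what this class of bug looks like in general - a generic illustration, not MOVEit’s actual code, which is closed source - here is a minimal sketch in Python using a throwaway SQLite table. The table, columns and payload are all made up:

```python
import sqlite3

# Throwaway in-memory database with a made-up payroll-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (username TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES ('alice', 52000), ('bob', 61000)")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL string, so the OR clause
# makes the WHERE condition always true and every row comes back.
rows = conn.execute(
    f"SELECT username, salary FROM employees WHERE username = '{user_input}'"
).fetchall()
print("injectable query returned:", rows)

# Safer: the driver binds the value as a parameter, so the payload is treated
# as a literal username and matches nothing.
rows = conn.execute(
    "SELECT username, salary FROM employees WHERE username = ?", (user_input,)
).fetchall()
print("parameterised query returned:", rows)
```

The fix is the same everywhere: never concatenate untrusted input into a query string; let the driver bind parameters.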
Since Zellis uses MOVEit, Zellis’ data got compromised. They have a rather terse press release on their website. Since the three B’s use Zellis, their data got compromised.
There could be a lot more to come. After all, the UK’s leading provider would not have just three customers, right? We should wait and watch for more press releases from Zellis.
Take Action:
If you are a Zellis or MOVEit customer and have not received any communication from either party, reach out to them and find out whether your data has been compromised. If it has, take appropriate action, including disclosing the breach to your employees. Payroll data is PII and a tad sensitive to employees.
If you do not have a vendor risk assessment process in place, set one up. Identify the vendors to whom you send sensitive data and classify them as critical. Connect with those vendors and identify the risks. (You can use Raven to do this. Plugging in some company promotion in a teachable moment 👺)
Plausible false information confidently given and trusted
AI hallucination is a real threat.
A Large Language Model (LLM), the kind of model on which ChatGPT is based, runs on probability. It gives the most probable answer - not necessarily the truth. When an AI model does this, i.e. gives you a plausible but false answer, we say that the model is hallucinating 👻. A fancy term for saying that it is lying.
The bigger problem is that attackers can get your AI model to recommend non-existent libraries while it is hallucinating. Read this article to find out how this can be exploited. Or, if you are the TL;DR kind, here is the crux:
To prove their concept, the researchers created a scenario using ChatGPT 3.5 in which an attacker asked the platform a question to solve a coding problem and ChatGPT responded with multiple packages, some of which did not exist — i.e., are not published in a legitimate package repository.
"When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place," the researchers wrote. "The next time a user asks a similar question they may receive a recommendation from ChatGPT to use the now-existing malicious package."
If ChatGPT is fabricating code packages, attackers can use these hallucinations to spread malicious ones without using familiar techniques like typosquatting or masquerading, creating a "real" package that a developer might use if ChatGPT recommends it, the researchers said. In this way, that malicious code can find its way into a legitimate application or in a legitimate code repository, creating a major risk for the software supply chain.
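A lightweight guardrail worth considering (my own sketch, not something prescribed in the article): before installing a package that an LLM suggests, check that it actually exists on the public registry and look at its metadata. The PyPI JSON endpoint used below is real; the heuristic itself is deliberately simple.

```python
"""Sanity-check LLM-suggested Python packages against PyPI before installing.

This is a rough heuristic, not a security scanner: a package that exists with
many releases can still be malicious, and a brand-new package can be fine.
"""
import sys

import requests  # pip install requests


def vet_package(name: str) -> bool:
    """Return True if the package exists on PyPI, printing basic metadata."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"[!] {name}: not on PyPI - possibly hallucinated, do not install blindly")
        return False
    resp.raise_for_status()
    data = resp.json()
    info = data["info"]
    releases = data["releases"]
    print(f"[+] {name}: {len(releases)} release(s), home page: {info.get('home_page') or 'n/a'}")
    return True


if __name__ == "__main__":
    # Usage: python vet_packages.py <package> [<package> ...]
    for pkg in sys.argv[1:]:
        vet_package(pkg)
```

Run it as `python vet_packages.py <package> ...` before adding anything an LLM suggested to requirements.txt. Existence and release history are weak signals on their own - pair this with your normal dependency review.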
Take Action:
Let me begin with the obvious: it is foolish to ask your dev teams not to use ChatGPT (or similar LLMs) to make coding easier. So don’t attempt that.
Instead, teach the dev teams to use these resources properly - like any good dual-use technology.
Set up a process to identify and manage AI risks. Use the AI RMF that I mentioned in one of my previous newsletters: