Reducing Ransom Payments | No-holds-barred Gen AI?
CyberInsights #139 - UK Cyber Insurers pledge to reduce ransom payments | Criminals are jailbreaking LLM models to get the data they want
Consider not paying ransom: UK NCSC
UK Cyber insurers to help customers NOT pay ransom
UK’s NCSC (National Cyber Security Centre) together with cyber insurance companies aim to reduce the profitability of ransomware operators.
Ever since I’ve learnt the ropes of cyber insurance, I’ve advocated cyber insurers to not cover ransomware as a part of their cyber policy. I’ve always been given a sympathetic ear and nods of understanding, but no insurer has been willing to bite the bullet and remove ransomware coverage from their product - the business might move to the competitor.
Ransomware attacks in the UK have grown phenomenally. Grown enough for the NCSC to work together with insurers and ‘encourage’ victims to not pay ransom. [LINK]. They have also released a “Guidance for organisations considering payment in ransomware incidents” [LINK]
There are agencies that provide resources and support to ransomware victims. The two most popular ones are:
Take Action:
Insurers and Underwriters: It’s a tough business call, but you can probably differentiate your product by removing ransomware coverage from your products. If you cannot do that, be clear in your policy wordings that you will consider ransomware payments as the last resort. Setup your processes with your incident response partner that take all possible options into consideration before reaching the point of ransomware payments.
Cybersecurity professionals: First, remember to not victim shame. Anyone can be a victim of ransomware. Educate the leadership in your organisation about ransomware and have a playbook for restoration and recovery without resorting to ransom payment.
Gen AI hacking is more than just prompt engineering
Welcome to the brave new world of jailbreaking-as-a-service
LLMs will not provide certain information. The above screen grab from ChatGPT lists the topics it won’t provide information on.
The hacker in you might already be bursting with a thousand prompts that you might want to try, to see if ChatGPT can come up with something interesting.
You don’t have to work on it yourself. Welcome to jailbroken Gen AI services.
This article [LINK] provides details of how illegitimate Gen AI is being used by criminals.
From offering custom GPTs of OpenAI’s ChatGPT with some evil prompt engineering to providing a dataset to train open source LLMs to write phishing mails, criminals are using Gen AI in various ways.
The article has the screenshot of an advertisement that proudly announces - “8.5 TB dataset with millions of sample phishing mails”.
NIST has come up with a draft standard on risks specific to Gen AI. Read the draft standard here [LINK].
Take Action:
Gen AI based attack vectors are on the rise. Know more about how they can be hacked. NIST has identified specific risks to Gen AI LLMs in the draft 600-1. It makes for an interesting read.
As a cybersecurity professional, understand this attack vector. If your organisation is building or has already built an LLM based Gen AI, assess the risks and find ways to mitigate them.