Too much security? || The good and bad of ML
CyberInsights #80 - Cybersecurity pitfalls and their mitigation || Two sides of ML
NIST handout overturning 6 cybersecurity pitfalls
Users are not stupid & you have too much security ⁉️
NIST has released a handout titled “Users are not stupid: 6 cybersecurity pitfalls overturned”. You can download it here.
This two-page PDF is succinctly wise.
It covers these pitfalls:
Assuming users are stupid 🤪
Not tailoring cybersecurity communication 🪡
Unintentionally creating insider threats due to poor usability
Having too much security ‼️
Using punitive measures or negative messaging to get users to comply
Not considering user feedback or user-centric measures of effectiveness
It also suggests 10 ways to overcome these pitfalls.
Take Action:
It appears deceptively simple at first, but ponder each point a little longer and you realise it's deeper than you initially thought. Debate each pitfall with your cybersecurity teams, identify places where you might be succumbing to them, and correct as many as you can.
This simple paper, if followed properly, has the potential to drastically reduce your people risks!
Can machine learning detect zero days? Can it be hacked?
The answer to both is yes. ML is a double-edged sword.
How do you hack an ML-based system? You train it.
Most new technology is dual use.
When you read this article by Bruce Schneier, it begins to dawn on you that ML will be hacked not by sexy new mathematical attacks but by simple ones, like humans incorrectly labelling training data.
What if an attacker could social-engineer a group of ML trainers into mislabelling training data? Could they bend the ML system to serve the attacker's interests?
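To see how little sophistication that takes, here is a minimal, hypothetical sketch of label-flipping poisoning. Everything in it is an assumption for illustration: the data is synthetic (scikit-learn's make_classification), the detector is a plain LogisticRegression, and flip_fraction is a made-up number, not a real attack recipe.

```python
# Hypothetical sketch of label-flipping data poisoning. The dataset is
# synthetic and flip_fraction is an assumption chosen for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "benign (0) vs. malicious (1)" samples.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def malicious_recall(train_labels):
    """Train a simple detector and report how many malicious samples it catches."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))

print("recall with clean labels:   ", malicious_recall(y_train))

# The attacker persuades labellers to mark a slice of malicious training
# samples as benign, so similar samples slip past the trained model.
flip_fraction = 0.5
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flip = rng.choice(malicious_idx, size=int(flip_fraction * len(malicious_idx)),
                  replace=False)
poisoned[flip] = 0

print("recall with poisoned labels:", malicious_recall(poisoned))
```

Nothing here is mathematically clever. The "attack" is just wrong labels, which is exactly Schneier's point.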
You could hack ChatGPT this way, mind you.
A recently published survey paper shows that ML algorithms can be used to detect zero-day attacks. If someone were to build such a system, could hackers train it not to detect zero-days?
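To make that question concrete, here is a second hypothetical sketch, again built entirely on assumptions: an IsolationForest anomaly detector stands in for the zero-day detectors the survey describes, and the "traffic" is just synthetic Gaussian features. The attacker seeds the detector's "normal" training pool with samples resembling the eventual attack, stretching its notion of normal until the real attack blends in.

```python
# Hypothetical sketch: an anomaly detector as a stand-in for a "zero-day"
# detection system. All feature values are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))  # baseline traffic
attack = rng.normal(loc=4.0, scale=0.5, size=(200, 8))   # novel attack pattern

def detection_rate(training_pool):
    """Fit the detector on the pool and report how many attack samples it flags."""
    detector = IsolationForest(contamination=0.01, random_state=0).fit(training_pool)
    return (detector.predict(attack) == -1).mean()  # -1 means "anomaly"

print("clean training pool:   ", detection_rate(normal))

# Poisoning: the attacker quietly slips attack-like samples into the
# "normal" training data before the detector is (re)trained.
poison = rng.normal(loc=4.0, scale=0.5, size=(800, 8))

print("poisoned training pool:", detection_rate(np.vstack([normal, poison])))
```

The design choice worth noticing: any detector that keeps learning from data an attacker can influence inherits that data as an attack surface.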
Take Action:
If you are using ML in any of your tools, run a threat-modelling exercise to see how adversarial ML tactics could be used against you, and incorporate attack vectors like data poisoning and evasion into your threat model.