Addressing the Cybersecurity Skills Gap | Addressing AI security risks
CyberInsights #92 - World Economic Forum's three areas to address the skill gap | We cannot stop talking of AI risks...
The hunt for the elusive cybersecurity talents
The problem is so large that the World Economic Forum is writing about it.
It’s not clickbait. The World Economic Forum recognises the need for cybersecurity talent. It has released an article on three areas to consider for bridging the skills gap.
The WEF wants you to address the misperception that cybersecurity is a purely technical field. The cybersecurity industry should do more to communicate the different career paths in cybersecurity and highlight the ones that do not require deep IT security or technical skills.
Widening the talent pool — it sounds like a platitude at first, but when you delve into it, you see that the WEF wants clear definitions of roles, job qualities and skills. NIST SP 800-181, which introduces the NICE framework, is a good starting point for this. The WEF also wants more diversity: more women in the cyber workforce! Initiatives like She Leads Tech should extend into cyber too.
Retaining cyber talent — addressing the burnout concern. The cyber workforce is burning out. Take care of your people and your people will take care of you. Read this post about security analyst burnout.
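To make the role-clarity point concrete, here is a minimal sketch of what "clear definitions of roles and skills" can look like in practice, in the spirit of the NICE framework from NIST SP 800-181. The field names and sample entries are my own illustrative assumptions, not the official NICE schema; the point is that a structured catalogue lets you surface career paths that do not demand deep technical skills.

```python
# Illustrative role catalogue in the spirit of NICE (NIST SP 800-181).
# Fields and sample entries are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class WorkRole:
    title: str
    category: str            # e.g. "Protect and Defend"
    technical: bool          # does the role need deep IT security skills?
    skills: list = field(default_factory=list)

catalogue = [
    WorkRole("Cyber Defense Analyst", "Protect and Defend", True,
             ["network monitoring", "intrusion detection"]),
    WorkRole("Cyber Policy and Strategy Planner", "Oversee and Govern", False,
             ["policy writing", "stakeholder communication"]),
]

# Widening the pool: surface roles that do not require deep technical skills.
non_technical = [r.title for r in catalogue if not r.technical]
print(non_technical)  # ['Cyber Policy and Strategy Planner']
```

Even a small catalogue like this makes job descriptions easier to write and easier to match against candidates from non-traditional backgrounds.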
Take Action:
Cyber talent management is a key responsibility of the cybersecurity executive. Talk to your team. Build a proper roadmap. Clarify and simplify roles and responsibilities. Don’t try to hire a superman.
This is for all cybersecurity leadership — get your hands dirty in HR if you want to hire, nurture and retain cyber talent.
More about AI Security Risks…
AI risks are getting all the talk-time, yet we still need to talk more about them.
No one knows what happens at the ‘singularity’ — the centre of a black hole. Likewise, no one knows the security risks when AI is let loose.
It’s not like I have not spoken about AI risks before. I wrote about the NIST AI Risk Management Framework (RMF) just a few months back.
A recently released paper, Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications, talks about four areas for addressing AI risks:
Extending traditional cybersecurity to cover AI vulnerabilities. The NIST AI RMF is a good way to do it.
Improving information sharing and organisational mindsets
Clarifying the legal status of AI vulnerabilities
Supporting effective research to improve AI security
It’s a complex problem that needs some level of brainstorming. For example: how do you patch an AI vulnerability?
Take Action:
Same ol’, same ol’ — read the NIST AI RMF. Incorporate AI risk assessment into your traditional risk assessment framework.