Manage your AI risks | Defining Cybercrime
CyberInsights #88 - The NIST AI Risk Management Framework | The world's cybercrime definition problem
NIST’s AI Risk Management Framework (Jan 2023)
After ChatGPT, a risk management framework for AI is essential. There is scope for improvement though.
If you haven’t used ChatGPT yet, you are living under a rock.
Last week, I conducted a session on the use of ChatGPT in Governance, Risk and Compliance. To make my session well-rounded, I wanted to speak on the risks and mitigations of using ChatGPT as well. That is when I came across this standard by NIST - the NIST Artificial Intelligence Risk Management Framework.
This framework considers some very interesting risk vectors. Consider this screenshot:
It considers the risks that AI can cause to people, organisations and ecosystems. NIST defines a trustworthy AI system as one with the following properties:
valid and reliable
safe
secure and resilient
accountable and transparent
explainable and interpretable
privacy-enhanced, and
fair with harmful bias managed
It’s a great start!
Take Action:
If your organisation is creating an AI system, use the 72 subcategories in the NIST AI RMF as a guide to understand the risks of your AI system.
If you are a consumer of AI systems, build a subset of the framework that applies to your organisation and use it to evaluate any AI-based system before adoption.
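One lightweight way to operationalise that advice is to treat the seven trustworthiness characteristics as a checklist and record, per AI system, which ones your review has addressed. The sketch below is a minimal, hypothetical illustration of that idea, not part of the NIST framework itself; the function name and structure are my own assumptions.

```python
# The seven trustworthiness characteristics from the NIST AI RMF.
TRUSTWORTHINESS_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair with harmful bias managed",
]

def evaluate_ai_system(assessments: dict) -> list:
    """Return the characteristics a system has not yet addressed.

    `assessments` maps a characteristic to True if your review found it
    adequately addressed; anything missing or False counts as a gap.
    """
    return [c for c in TRUSTWORTHINESS_CHARACTERISTICS
            if not assessments.get(c, False)]

# Example: a hypothetical vendor chatbot reviewed against the checklist.
review = {
    "valid and reliable": True,
    "safe": True,
    "secure and resilient": False,
    "privacy-enhanced": False,
}
print(evaluate_ai_system(review))  # the remaining gaps to close before adoption
```

In practice you would replace each boolean with evidence from the relevant RMF subcategories, but even this crude pass/fail view makes gaps visible before you sign off on a system.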
What is cybercrime, really?
If you think you know the answer, you are in for a surprise.
Wired calls it the ‘real cybercrime problem’.
Meanwhile, governments say that “Cyber divisions are worth more than aircraft carriers or nuclear weapons”.
The United Kingdom published a document, “Responsible Cyber Power in Practice”. If you are interested, read the linked PDF.
On one hand, we speak of cyber weapons being used by state actors and of self-regulation of cyber weapon usage, while on the other hand, our laws defining cybercrime are still archaic.
Cybercrime can range from child pornography to “sharing of material online that’s ‘motivated by political, ideological, social, racial, ethnic, or religious hatred’”.
In this piece, Wired highlights the problem we face.
Andrew Crocker, a senior staff attorney at the EFF, proposes a simple solution:
“If ‘cybercrime’ is going to mean anything, it has to be specifically limited to crimes done to computer systems and networks using computer systems and networks,” Crocker says. “In other words, it has to be the kind of crime that could not exist if this technology did not exist. ‘Cybercrime’ can't just be any bad thing done using a computer.”
Take Action:
This is one of those news pieces whose impact goes beyond any single individual or organisation. Read the articles linked here. Talk to your colleagues, talk to your government and lawmakers — let’s all agree on what ‘cybercrime’ should actually mean, across the globe.