So you think you have cyber insurance? | AI Index Report
CyberInsights #136 - Cyber risks are inadequately covered by policies | The seventh edition of the Stanford AI Index report is out
Study shows an average gap of US$ 27.3 million in uncovered losses per incident
It’s a small sample size, but the output is believable.
Cyber insurance is comparatively new. The coverage offered by various insurers is not yet standardised.
Buyers of cyber insurance do not know what they are buying, nor which coverages they need.
Combine the two and you get the ‘cyber insurance gap’.
This paper [LINK] by Cye Security [LINK] validates the above theory.
The report is just about 10 pages. The sample size is 101 incidents. It’s an interesting read.
Take Action:
For the insurance buyer, there are two tasks:
Review your top risks - which risks are you most worried about materialising despite your mitigations? Make a list of the risks that would cause the biggest financial loss if they materialised. You might have to do some form of Cyber Risk Quantification for this.
Review your insurance policy - are these risks covered? Talk to your broker and insurer and check whether all of your top risks are covered. Identify gaps and find out how to get them added to the policy.
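The two review steps above can be sketched as a very simple quantification exercise - ranking risks by annualised loss expectancy (ALE = expected incidents per year × loss per incident) and flagging those the policy does not cover. The risk names, frequencies, and dollar figures below are illustrative assumptions, not data from the report:

```python
# Minimal cyber-risk quantification sketch (illustrative numbers only).
# ALE = annual rate of occurrence x single-loss expectancy.

risks = [
    # (risk name, expected incidents per year, loss per incident in USD)
    ("ransomware", 0.3, 20_000_000),
    ("business email compromise", 1.5, 500_000),
    ("cloud misconfiguration breach", 0.5, 4_000_000),
]

# Hypothetical set of covers named in the current policy.
policy_covered = {"ransomware", "business email compromise"}

def annualised_loss(rate, impact):
    """Expected loss per year for one risk."""
    return rate * impact

# Rank risks by expected annual loss, highest first,
# and flag any top risk missing from the policy.
ranked = sorted(risks, key=lambda r: annualised_loss(r[1], r[2]), reverse=True)
for name, rate, impact in ranked:
    status = "covered" if name in policy_covered else "GAP: not covered"
    print(f"{name}: ALE = ${annualised_loss(rate, impact):,.0f} ({status})")
```

Even a rough ranking like this gives you a concrete list to take into the conversation with your broker; more rigorous approaches (e.g. FAIR-style analysis) replace the point estimates with distributions.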
For the underwriter:
It’s quite normal to do your own assessment of the risk you are underwriting. However, if the buyer has done a thorough risk assessment (not just a compliance check), consider the top risks presented to you and understand why the organisation believes they are its biggest risks.
If you trust the buyer’s risk-assessment process, consider whether you want to add the covers they expect (depending on treaties, of course).
How is AI growth progressing in society?
From the expected “regulations in AI are increasing” to the finding that industry has contributed more AI models than academia, this report by Stanford makes for an exciting read.
Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark,
“The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
This is the seventh edition of the AI index report. I have stumbled upon it for the first time - clearly late to the AI party.
The report is an interesting read. Unlike the 10-page one on cyber insurance, this report [LINK] runs to more than 500 pages. You don’t have to read through the details if you are not interested - the first two pages give you the summary. The details are interesting, though.
The report has 9 chapters, ranging from Responsible AI to Public Opinion. Read the section that you are interested in.
Here are some of the top 10 takeaways according to the report:
AI beats humans on some tasks, but not all (we knew we were better than those pesky LLMs…)
Industry continues to dominate frontier AI tech (more models from industry than from academia. Industry is spending big bucks on AI)
The United States leads China, the EU, and the U.K. as the leading source of top AI models. (where the most research seems to be happening)
Robust and standardized evaluations for LLM responsibility are seriously lacking (nobody seems to know how to evaluate a given LLM)
Generative AI investment skyrockets (more money going into AI research)
The data is in: AI makes workers more productive and leads to higher quality work (More productive, I can agree, but I find it difficult to believe that the work is higher quality)
Scientific progress accelerates even further, thanks to AI (AI will lead to faster scientific research and breakthroughs)
The number of AI regulations in the United States sharply increases (and globally too, I’d guess)
People across the globe are more cognizant of AI’s potential impact—and more nervous (Of course I am nervous - an AI might have written this in half the time that I am taking)
Take Action:
This is more of an informational article. Read it to understand how AI is progressing in society.