A machine to make deepfakes? Tick | Lockbit? No longer!
CyberInsights #130 - When you can create a deepfake with simple prompts | The Lockbit hacking group taken down in the global 'Operation Cronos'
A new video-creation tool from OpenAI can generate videos from prompts
What could possibly go wrong?
OpenAI has released a new AI tool, Sora, which generates videos from text prompts. [LINK]. Tread carefully, Sam Altman.
The above image was created by the AI model built into Substack. Convincing, eh? But do you see the point I am trying to make? Fake images cause enough trouble without the next step: fake videos generated from prompts.
OpenAI says that it will follow the C2PA specification for the videos that Sora creates. C2PA is a standard for attaching provenance (the recorded history of a piece of media's origin, ownership, and edits) to that media. For those of you interested, here is a link to the C2PA site. [LINK]. In short, C2PA puts a little icon in the top right corner of the media, image or video. On clicking the icon, you can see the provenance of the media. If the provenance has been tampered with or the origin is unknown, the icon turns yellow.
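To make the core idea concrete, here is a toy sketch of tamper-evident provenance. This is NOT the real C2PA format (which embeds cryptographically signed binary manifests in the media file itself); the record structure and function names below are invented purely for illustration of the principle: bind a claimed history to the exact bytes of the media, so any alteration is detectable.

```python
import hashlib

# Illustrative toy only: field names and structure are invented, not C2PA.
def make_record(media_bytes: bytes, history: list[str]) -> dict:
    """Bind a claimed edit history to the media via a content hash."""
    return {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "history": history,  # e.g. ["generated by an AI model", "cropped"]
    }

def verify_record(media_bytes: bytes, record: dict) -> bool:
    """Return True only if the media still matches the recorded hash."""
    return hashlib.sha256(media_bytes).hexdigest() == record["content_hash"]

video = b"fake video bytes"
record = make_record(video, ["generated by an AI model"])
print(verify_record(video, record))         # True: provenance intact
print(verify_record(video + b"x", record))  # False: the media was altered
```

In the real standard, the manifest is also digitally signed by the tool that created or edited the media, which is what lets a viewer distinguish "unknown origin" from "verified origin", hence the yellow icon when that chain breaks.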
Halfway through the C2PA introduction video, I heard this: “Trust Decisions always remain the choice of an informed consumer.”
Gulp! Leaving trust decisions to the ‘informed consumer’ is scary.
Imagine someone watching a particularly inflammatory political video. No one would focus on the little ‘i’ icon in the top right corner. The video would be forwarded across instant messengers and watched by millions. It would not matter that the icon is yellow. Viewers would not even realise that they were making a ‘Trust Decision’.
OpenAI also says that it will continue its policy of blocking prompts that “request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.” We have seen such filters fail against creative prompt writing.
I am not confident that OpenAI is putting the same time, energy, and grey matter into the safety and security of its creations as it puts into its core product.
Take Action:
Understand the risks of Gen AI across text, images, and video. If you are in an ecosystem that uses or plans to use Gen AI tools, conduct a thorough risk assessment. Use frameworks like the NIST AI RMF. I wrote about it here:
If you are in policy making, move faster! Get the policies discussed and approved, and regulate the Gen AI industry.
If you are a cyber insurer, ask questions related to the liabilities of using Gen AI. They can be huge.
“Ransomware as a service” provider Lockbit taken down
The takedown marks a significant blow to ransomware operations globally
Ransomware payments in 2023 exceeded US$ 1.1 billion. [LINK]
Then, in true Narcos style, the UK police, along with police forces from ten other nations, infiltrated and took down the notorious cartel.
This article in Wired gives us some of the details [LINK]. I am eagerly awaiting a detailed account, then the book, and then the movie :)
To understand the scale of Lockbit's operations: it owned about 11,000 domains and was responsible for about 25% of ransomware attacks. A cool US$ 250 million business.
Take Action:
Follow the news. Wait for the decryption keys to be made public and thank the authorities! Cybersecurity professionals might get a bit of respite from Lockbit's shenanigans.
The Lockbit takedown must have set off a lot of celebrating among the teams that carried it out, and well beyond.
Yes, I hope it will form a template for dealing with ransomware groups in the future!