Using ChatGPT's powers for evil 🦹‍♂️ | 300k Texas vehicle 🚗 crash reports compromised
#190 - OpenAI lists 10 case studies of ChatGPT being put to malicious use | A compromised account exposes data in the 'Crash Records Information System'
10 case studies where ChatGPT was used for malicious purposes
The OpenAI report showcases trends in Social Engineering
A report by OpenAI highlights the different ways in which ChatGPT has been used for malicious purposes. Here are the 10 cases it showcases [a summary of the 46-page PDF]:
Fraudulently applying for jobs in the US (probably by North Korean actors)
Targeted "sneer review" generation on social media (aligned with China's geo-strategic interests)
Bulk comment generation on social media to influence politics (from the Philippines)
Generating social media posts and biographies for online personas as part of a covert influence operation (likely Chinese origin, focused on social media activity in the US and EU)
Posing as an independent German news website and generating content to influence the 2025 German elections (Russian origin)
Developing and refining Windows malware, including a Go-based multi-stage malware campaign (Russian origin)
Writing scripts for attacks and penetration (password brute-forcing, port scanning, etc.; see the sketch after this list)
Generating polarizing social media content that supports both sides of divisive topics, with the objective of increasing polarization (Chinese origin)
Generating comments in English and Spanish (translated from Persian)
Fake recruitment tasks and messages (Cambodian origin)
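To make the attack-scripting item concrete: code like this is trivial for an LLM to produce on request. Below is a minimal sketch of the kind of port-scanning script involved, in Python. This is my own illustration, not code from the OpenAI report, and should only ever be run against hosts you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Example: scan common ports on localhost only.
    print(scan_ports("127.0.0.1", range(20, 1025)))
```

The point is the low barrier: a few lines of standard-library code, generated on demand, are enough for reconnaissance at scale.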
It's evident that malicious use of LLMs clusters around social media content generation (fake news and comments) and cybercrime (social engineering and technical assistance for attacks).
Google and Anthropic have published similar reports as well.
Take Action:
Cybersecurity Professionals 🕵🏼‍♀️ - You will eventually have to perform AI system impact assessments; if you haven't started already, now is the time. These threat-actor activities showcase the kinds of misuse AI systems can be subject to. Use them as inputs to your impact assessment process (a minimal checklist sketch follows below).
Cyber Insurers 👩🏻‍💼 - When underwriting, ask AI-specific questions. AI-related incidents might lead to various liability claims.
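One lightweight way to act on this is to track the report's cases as a structured checklist in your assessment tooling. A minimal sketch in Python, where the categories and field names are my own grouping, not from the report or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class MisuseCase:
    name: str
    category: str          # e.g. "influence ops", "cybercrime", "fraud"
    applies: bool = False  # could your AI system plausibly enable this misuse?
    mitigations: list[str] = field(default_factory=list)

# Seed the assessment with cases from the OpenAI report (names paraphrased).
CASES = [
    MisuseCase("Fraudulent job applications", "fraud"),
    MisuseCase("Covert influence operations", "influence ops"),
    MisuseCase("Malware development assistance", "cybercrime"),
    MisuseCase("Attack-script generation", "cybercrime"),
    MisuseCase("Fake recruitment / task scams", "fraud"),
]

def open_items(cases: list[MisuseCase]) -> list[MisuseCase]:
    """Return cases flagged as applicable but with no mitigation recorded yet."""
    return [c for c in cases if c.applies and not c.mitigations]
```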
Vehicle crash 💥 data from the Texas Department of Transportation compromised
A treasure trove of 300k vehicle crash records with personally identifiable data, taken from a compromised account
It's a seemingly routine piece of news. Yet another data breach!
As I read the details, the possibilities dawned on me (The Register covers them here). The breach is concerning not only because scammers can misuse the stolen personal data, but also because the information enables many other harmful activities, such as:
Fake insurance claims
More plausible phishing attacks
Targeted vishing attacks
Using registered vehicle and address details to create fake personas, etc.
A data breach with not just PII, but data about vehicles, ownership, accidents, claims, etc. It's a scammer's delight.
Take Action:
Users 👤 - If you are in Texas and affected by this breach, TxDOT might reach out to you. Inform your insurers of the breach so that false claims and fraud in your name can be flagged early (a sketch for a programmatic breach check follows below).
Insurers 👩🏻‍💼 - Not just cyber insurers but motor insurers too: be aware of this incident and apply additional vigilance for potential fraud.
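For users who want to check programmatically whether their email address appears in publicly indexed breaches, here is a sketch using the Have I Been Pwned v3 API (endpoint and header names per HIBP's published docs; it requires a paid API key, and note that a breach this recent may not be indexed yet):

```python
import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"

def breaches_for(email: str, api_key: str) -> list[str]:
    """Return the names of known breaches containing `email` (empty if none)."""
    resp = requests.get(
        HIBP_URL.format(account=email),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means the account is not in any known breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]
```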