Your AI chats are Google search results now | Vibe Coding Security
#200 - Google indexes saved and shared ChatGPT chats | Can you even secure vibe coded stuff?
Your chats with your AI BFF aren’t private
But you knew that.
AI security seems to feature in my highlights often these days. Now that everyone is using some LLM chatbot or other, it’s time you read this tongue-in-cheek opinion piece by Steven J. Vaughan-Nichols.
Before you read the piece, let me prep you with some good old-fashioned bullet points:
LLM service providers can be legally required to retain your chats (OpenAI is currently under a court order to do exactly that). That covers deleted chats, API conversations, everything.
When you save a chat (chat history, that tasty little sidebar where you go to ‘continue’ where you left off) and share it with someone, the shared link can be indexed by search engines.
You’ve agreed to it in the terms and conditions.
Take Action:
Chat with your AI bot like you’re having a public conversation on X or Reddit. 📢
Can you vibe code secure apps?
You have to prompt your LLM correctly.
Vibe coding is here to stay. It’s not a passing phase. With it come the inevitable security challenges. There are other technical challenges too, like this one where a team almost DoS’d their own databases, but security remains the main concern.
This post on DarkReading lays out the vibe coding challenge but stops short of offering a solution beyond ‘put security first’.
Here are some tips for vibe coders:
A fully functioning web app from a single prompt is a pipe dream. Don’t believe it. Vibe code smaller, manageable parts. For example: “Create a drag-and-drop div on this page to enable users to upload files.” (A sketch of what that prompt might produce, with basic security checks, follows this list.)
Start with the complete design before you begin your vibe coding session. Designing as you go introduces the most security bugs.
Use AI to check for bugs at every stage.
Use AI to build and run security test cases. Ask for abuse cases, not just happy paths (second sketch below).
And most importantly, make it a point to understand what every line of code generated by AI does! The payoff is usually one specific line (last sketch below).
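To make the first tip concrete, here’s a minimal sketch of the kind of small, self-contained piece that prompt might produce. The #drop-zone element, the /api/upload endpoint, and the MAX_BYTES / ALLOWED_TYPES limits are all illustrative assumptions, and these client-side checks are convenience only: the server must re-validate everything.

```typescript
// Minimal drag-and-drop upload sketch. Names are illustrative, not a spec.
// Client-side checks improve UX only; the server must re-validate everything.
const MAX_BYTES = 5 * 1024 * 1024;                 // assumed 5 MB cap
const ALLOWED_TYPES = ["image/png", "image/jpeg"]; // allow-list, never a block-list

const dropZone = document.getElementById("drop-zone") as HTMLDivElement;

// "dragover" must be cancelled or the browser never fires "drop".
dropZone.addEventListener("dragover", (e) => e.preventDefault());

dropZone.addEventListener("drop", async (e) => {
  e.preventDefault();
  for (const file of Array.from(e.dataTransfer?.files ?? [])) {
    if (!ALLOWED_TYPES.includes(file.type)) continue; // reject unknown types
    if (file.size > MAX_BYTES) continue;              // reject oversized files
    const body = new FormData();
    body.append("file", file);
    await fetch("/api/upload", { method: "POST", body }); // hypothetical endpoint
  }
});
```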
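For the security test cases: the trick is to ask the AI for abuse cases, not just happy paths, and then run them yourself. A self-contained sketch, with a hypothetical isSafeFilename validator standing in for whatever the AI wrote:

```typescript
import assert from "node:assert/strict";

// Hypothetical validator standing in for AI-generated code under test.
function isSafeFilename(name: string): boolean {
  // One simple segment: starts with a word character, max 64 chars, no "..".
  return /^[\w][\w.-]{0,63}$/.test(name) && !name.includes("..");
}

// The abuse cases are the point; happy-path tests alone prove nothing.
assert.equal(isSafeFilename("report.pdf"), true);
assert.equal(isSafeFilename("../../etc/passwd"), false); // path traversal
assert.equal(isSafeFilename(".env"), false);             // hidden dotfile
assert.equal(isSafeFilename("a".repeat(200)), false);    // oversized name
console.log("security test cases passed");
```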
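And on understanding every line: the line worth catching is often a small one. String-built SQL queries are a classic pattern in AI output; here’s a sketch of the read-and-fix, using node-postgres as an assumed stack:

```typescript
import { Pool } from "pg"; // node-postgres, assumed here; any client with parameter binding works

const pool = new Pool(); // connects via the standard PG* environment variables

// What assistants often emit, and what a line-by-line read should catch:
//   pool.query(`SELECT * FROM users WHERE name = '${name}'`)  // injectable!
// What you want to see instead: a bound parameter, never string interpolation.
async function findUser(name: string) {
  return pool.query("SELECT * FROM users WHERE name = $1", [name]);
}
```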