I "Vibe Consulted" a Gen AI policy. Here are the details [PART 2]
#184 - Part 2 of the series on how I vibe consulted a Gen AI policy. This one is about slowly improving the prompts
The quest for GRC using Gen AI continues
Who would be a guinea pig for this policy creation? Clearly not a client.
The choice came down to either looking up data from a listed organization, creating a dummy organization, or using my own organization as a guinea pig.
I chose the laziest option - using my own organization. I work for PentaQube Cyber Technologies, and we are already using Gen AI for some of our work, so I thought it would be a perfect candidate for this experiment.
Warming Up: The crude prompt
I had no expectations when I sent the first prompt:
Create a GenAI usage policy for PentaQube Cyber Technologies (www.pentaqube.com)
The responses were, as expected, very generic and could not be used at all.
The good part of this policy was that it contained two sections that I found useful:
Authorised Use Cases
Prohibited Use Cases
These could become sections in the final policy when I create it.
It also contained horrible statements like:
Always verify the privacy policies of third-party Gen AI platforms before use
Seriously? It is statements like this that give GRC consultants a bad name!
Any information security professional worth their salt should know NOT to expect the user to do the heavy lifting. It’s consulting 101. Maybe I should prompt Claude to ‘act like a consultant’ - role-play prompting.
ChatGPT did not do any better.
ChatGPT 4.1 created statements like these:
All data input into Gen AI systems must comply with data governance standards and privacy regulations (e.g., GDPR, CCPA)
Sensitive or confidential business, employee, or client data must not be submitted to public AI services unless data-handling agreements are in place
Employees are responsible for flagging and mitigating any output that may perpetuate bias or discrimination
It’s the same problem as Claude - expecting users to do the heavy lifting.
Result?
Good Gen AI usage policy ❌
Generic Word Salad ✅
Upping the Game: Role Prompting
Prompting is an art, not a science.
So, I read up on prompting and collected some resources on how to write a good prompt. You can see what I have been reading on my AIMS resource page on GitHub, where I have started to maintain a page dedicated to prompting.
The first improvement step is ‘Role Prompting’. You essentially ask the AI to play a role. What role? Cybersecurity consultant, of course!
Here is the prompt I tried:
You are an ace cybersecurity consultant. You have been tasked with creating a Gen AI usage policy for PentaQube. Find relevant details and create a policy for Gen AI usage.
See the use of the word ace? That’s me being smart!
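As an aside: if you ever want to reproduce this outside the chat window, role prompting maps naturally onto the system message in most LLM APIs. Here is a minimal sketch using the OpenAI Python SDK; the model name and exact wording are illustrative, not a record of my setup.

```python
# Minimal sketch of role prompting via the OpenAI Python SDK.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        # The role goes into the system message...
        {"role": "system", "content": "You are an ace cybersecurity consultant."},
        # ...and the task stays in the user message.
        {
            "role": "user",
            "content": (
                "You have been tasked with creating a Gen AI usage policy for "
                "PentaQube. Find relevant details and create a policy for "
                "Gen AI usage."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```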
The output was way better than the initial prompt.
The generic sentences like “Use Gen AI ethically and responsibly” are still there, but I found one interesting piece:
Data Input:
Prohibited: Inputting Personally Identifiable Information (PII) of clients, employees, or other individuals into public Gen AI platforms is strictly prohibited.
Prohibited: Inputting sensitive business data, trade secrets, financial information, or any other confidential PentaQube information into public Gen AI platforms is strictly prohibited.
Permitted (with caution): For non-sensitive, publicly available information, Users should exercise caution and critically evaluate the output.
PentaQube will explore and potentially provide secure, internally managed Gen AI platforms for handling sensitive data in the future. Specific guidelines for these platforms will be provided separately.
This section is a vast improvement over the generic policy.
Also, the sections of the policy seem to be reasonable:
Data Input
Output Review and Verification
Content Generation
Code Generation
Training and Development
Account Security
When I posed the same prompt to Claude 3.7, it gave me a very generic policy. However, I think because I used the words ‘cybersecurity consultant’, it listed all the things I should not put into Gen AI:
Prohibited Information: The following must never be input into GenAI systems:
Confidential client information
Personally identifiable information (PII)
PentaQube's proprietary code or intellectual property
Protected health information (PHI)
Financial records or sensitive business data
Authentication credentials
Claude 3.7 also had sections on security measures and incident response.
ChatGPT 4.1 and Grok 3 beta had a section for Accuracy, Bias, and Accountability.
Overall, a vast improvement over the first prompt, but still, no one would pay top dollar for this kind of policy.
The pivot… and plan for the next round
So, clearly, I had to get the Gen AI to do more. I realized that I was missing context. The Gen AI usage policy can only be as good as the context it is given - 🧠 like a consultant asking questions or sending you a questionnaire.
For the next part, I have two strategies planned. 1️⃣ First, I will create a document about PentaQube, upload it to the LLM, and ask it to generate the policy. 2️⃣ Second, I will instruct the AI not to start creating until it has satisfied itself that it has asked all the relevant questions.
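If you want to experiment along via the API instead of the chat window, here is a rough sketch of what those two strategies could look like together. It uses the Anthropic Python SDK; the profile file name, model alias, and prompt wording are illustrative assumptions on my part, not the final setup.

```python
# Rough sketch of the two planned strategies using the Anthropic Python SDK.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set;
# the profile file, model alias, and prompt wording are all illustrative.
import anthropic

client = anthropic.Anthropic()

# Strategy 1: give the model real context about the organization.
with open("pentaqube_profile.md") as f:  # hypothetical company write-up
    company_profile = f.read()

# Strategy 2: forbid drafting until the model has asked its questions.
system_prompt = (
    "You are an ace cybersecurity consultant drafting a Gen AI usage policy. "
    "Do NOT start writing the policy until you have asked all the relevant "
    "questions about the organization and are satisfied with the answers."
)

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # illustrative model alias
    max_tokens=2048,
    system=system_prompt,
    messages=[
        {
            "role": "user",
            "content": (
                "Company profile:\n" + company_profile + "\n\n"
                "Create a Gen AI usage policy for PentaQube Cyber Technologies."
            ),
        }
    ],
)

print(response.content[0].text)
```

In practice, the second strategy turns into a multi-turn conversation: the model asks its questions, you answer, and only then does it draft the policy.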