I "Vibe Consulted" a Gen AI policy. Here are the details [PART 3]
#185 - Third and final part of the series on how I vibe consulted a Gen AI policy. This one covers researching available data and using few-shot prompting to help create a Gen AI policy.
🧠 Brainstorm Outside AI, 📄 Document using AI
Give me six hours to chop down a tree and I will spend the first four sharpening the axe. 🪓 - attributed to Abraham Lincoln
Before getting into the final part, here are links to the previous posts, in case you missed them:
Part 1:
Part 2:
Doing my homework
Having decided to go the brainy consultant 🧠 way, I had to prepare a background on the company for which I was going to build the Gen AI policy. I created a prompt (using AI, of course) to search for information about PentaQube Cyber Technologies. Here is the prompt:
The prompt is comprehensive, but in practice it proved very difficult to get information about PentaQube.
This could be because social media platforms are engaged in a battle with big AI companies to prevent them from scraping their content. Also, PentaQube is not a listed company, and there is limited information available beyond its website and social media accounts.
Gemini and ChatGPT were able to get some data, but Perplexity gave the most promising output. Perplexity has been known to scan websites even when they request not to be scanned. But that’s a debate for another time. For this experiment, ethics aside, I got the best results from Perplexity AI.
If you are interested, you can use the prompt file above with any of these bots; you should get similar results.
Providing Context
I downloaded the AI risk database from MIT and the OWASP Top 10 for LLM Applications, and uploaded them, along with the output of the company overview prompt, into the AnythingLLM workspace that I had created.
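Under the hood, uploading documents to a workspace like this typically means splitting them into overlapping chunks so that relevant passages can be retrieved and fed to the model as context. Here is a minimal sketch of that idea; the chunk size and overlap values are illustrative, not AnythingLLM's actual defaults.

```python
# Minimal sketch of the chunking a RAG-style workspace performs on
# uploaded documents. Chunk size and overlap are illustrative values.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks for later retrieval."""
    chunks = []
    start = 0
    while start < len(text):
        end = start + chunk_size
        chunks.append(text[start:end])
        if end >= len(text):
            break
        start = end - overlap  # overlap preserves context across chunk boundaries

    return chunks

# A stand-in document; a real one would be the MIT AI risk database export.
doc = "MIT AI Risk Repository entry: privacy leakage ... " * 40
chunks = chunk_text(doc)
```

The overlap is what keeps a sentence that straddles a chunk boundary retrievable from at least one chunk.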
I then created a customized system prompt to ensure that the AI acts like a GRC expert.
System Prompt: AI Cybersecurity GRC & Policy Expert
Persona: You are "PentaQube AI," a highly experienced and knowledgeable Cybersecurity Governance, Risk, and Compliance (GRC) expert. Your primary function is to assist users in understanding and navigating the complex landscape of cybersecurity GRC, with a special emphasis on developing clear, effective, and actionable cybersecurity policies.
Core Expertise:
Cybersecurity Governance:
Deep understanding of establishing and maintaining a cybersecurity governance framework.
Knowledge of roles, responsibilities, and decision-making processes within cybersecurity.
Familiarity with industry best practices for cybersecurity governance (e.g., COBIT, ISO 27001).
Risk Management:
Proficiency in cybersecurity risk assessment methodologies (e.g., NIST RMF, FAIR).
Ability to identify, analyze, evaluate, and treat cybersecurity risks.
Understanding of risk reporting and monitoring.
Compliance Management:
Extensive knowledge of major cybersecurity regulations, standards, and frameworks, including but not limited to:
GDPR (General Data Protection Regulation)
CCPA/CPRA (California Consumer Privacy Act/California Privacy Rights Act)
HIPAA (Health Insurance Portability and Accountability Act)
PCI DSS (Payment Card Industry Data Security Standard)
ISO 27001/27002
NIST Cybersecurity Framework (CSF)
SOC 2 (System and Organization Controls 2)
Ability to interpret compliance requirements and translate them into actionable controls.
Understanding of audit processes and evidence collection.
Policy Development & Management:
Primary Skill: Expertise in creating comprehensive, clear, and tailored cybersecurity policies, standards, procedures, and guidelines.
Ability to draft policies from scratch or adapt existing templates to specific organizational needs and regulatory requirements.
Understanding of the policy lifecycle (creation, review, approval, dissemination, enforcement, maintenance, and retirement).
Knowledge of best practices for policy language, structure, and accessibility.
Ability to explain the rationale behind policy statements and controls.
Key Responsibilities & Capabilities:
Provide Expert Advice: Offer guidance on GRC principles, best practices, and compliance obligations.
Policy Creation & Review:
Generate draft cybersecurity policies based on user requirements, industry, and relevant regulations/standards. Examples include:
Acceptable Use Policy
Data Security Policy
Incident Response Policy
Password Policy
Remote Access Policy
Data Backup and Recovery Policy
Change Management Policy
Vendor Risk Management Policy
Bring Your Own Device (BYOD) Policy
Information Classification Policy
Review existing policies for gaps, inconsistencies, and areas for improvement.
Suggest updates to policies based on new threats, vulnerabilities, or changes in regulations.
Framework Alignment: Help users align their cybersecurity practices and policies with relevant frameworks (NIST CSF, ISO 27001, etc.).
Risk Assessment Support: Assist in identifying potential risks and suggest policy-based controls to mitigate them.
Compliance Mapping: Help map controls to specific regulatory requirements.
Explain Complex Concepts: Break down complex GRC and cybersecurity topics into understandable terms.
Stay Current: Maintain up-to-date knowledge of emerging threats, vulnerabilities, and changes in the cybersecurity regulatory landscape.
Interaction Style:
Professional and Authoritative: Communicate with confidence and expertise.
Clear and Concise: Avoid jargon where possible; explain technical terms when necessary.
Action-Oriented: Focus on providing practical and actionable recommendations.
Thorough and Detail-Oriented: Pay close attention to the specifics of user requests and the nuances of GRC.
Collaborative: Work with the user to understand their specific context and needs.
Constraints:
You are an AI and cannot provide legal advice. Always recommend users consult with qualified legal professionals for legal matters.
You do not have access to real-time organizational data unless explicitly provided by the user. Base your recommendations on the information given.
Prioritize security and privacy in all recommendations.
Example Opening:
"Hello! I am PentaQube AI, your dedicated Cybersecurity GRC and Policy expert. How can I assist you today with your governance, risk, compliance, or policy development needs?"
Goal: To be an indispensable resource for individuals and organizations seeking to strengthen their cybersecurity posture through robust GRC practices and well-crafted policies.
I knew the configuration was working fine when I started chatting with the AI model:
The next step was to write a user prompt that gives a few examples (few shot prompting). This is what I came up with.
Given all the data that you have about PentaQube, you have to generate a Gen AI usage policy for PentaQube. Remember that PentaQube policies are always built using the SMART philosophy. Each policy statement should be specific, measurable, attainable, realistic and time-bound.
Please explain your thoughts as you go along.
Also, think along these lines for the sections in the policy:
approved list of data that Qubers can upload to Gen AI
Gen AI tools assessment and approval
Gen AI tools usage guidelines
Gen AI tools usage disclosure guidelines
etc.
For example, a policy statement would read: “The CISO is responsible for assessing all GenAI tools and their privacy policies before approving them for general use by Qubers” or “All users must ensure that no client confidential information is uploaded to any Gen AI tool, even if the tool is approved for use”.
Also, ensure that you ask questions if you do not know who should do what. Only then should you complete the policy and create it.
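The few-shot prompt above can also be assembled programmatically, which is handy if you want to swap in different example statements or sections per client. The sketch below reuses the article's own examples and section list; the helper function itself is illustrative.

```python
# Assembling a few-shot user prompt from example policy statements.
# The examples and sections come from the prompt above; few_shot_prompt
# is an illustrative helper, not part of any particular tool.

EXAMPLES = [
    "The CISO is responsible for assessing all GenAI tools and their "
    "privacy policies before approving them for general use by Qubers",
    "All users must ensure that no client confidential information is "
    "uploaded to any Gen AI tool even if it is approved for use",
]

SECTIONS = [
    "approved list of data that Qubers can upload to Gen AI",
    "Gen AI tools assessment and approval",
    "Gen AI tools usage guidelines",
    "Gen AI tools usage disclosure guidelines",
]

def few_shot_prompt(task: str, sections: list[str], examples: list[str]) -> str:
    """Combine the task, suggested sections, and example statements into one prompt."""
    lines = [task, "", "Think along these lines for the sections in the policy:"]
    lines += [f"- {s}" for s in sections]
    lines += ["", "Example policy statements:"]
    lines += [f'- "{e}"' for e in examples]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Generate a Gen AI usage policy for PentaQube using the SMART philosophy.",
    SECTIONS,
    EXAMPLES,
)
```

The examples are the “few shots”: they anchor the model's tone and level of specificity before it drafts the rest.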
The AI model came back with a set of questions, as prompted. It asked me:
What is PentaQube’s primary business function? (Technology consulting, cybersecurity services, software development, etc.)
Who are “Qubers”? (I assume these are PentaQube employees)
What types of data does PentaQube typically handle? (Client data, personal information, intellectual property, etc.)
Who should be responsible for approving Gen AI tools besides the CISO? Is there a security team, IT department, or data privacy officer?
What compliance frameworks is PentaQube subject to? (GDPR, CCPA, ISO 27001, etc.)
Does PentaQube have existing data classification schemes I should reference?
What is PentaQube’s risk tolerance regarding AI tools?
What is the review cycle for policies at PentaQube?
After I responded to the questions, the policy was ready. It was a lot more comprehensive than the previous ones. You can see the details.
The output is usable, but not up to the quality standard I hold my deliverables to.
Having concluded this little experiment, here are my thoughts, in a rather egotistic Q&A format.
Gen AI in cybersecurity consulting - Q&A
Q: Would you use this document as a deliverable on a client engagement?
A:
Short answer: No, I would not.
Long answer: The AI's output has improved steadily with better prompts. From the initial policy to this one, I can see the difference that improved context and prompting techniques make. The end result, while acceptable, is not something I would use on a client engagement.
Q: If you upgrade your prompting and context generation skills, do you foresee that AI based agents can take over the role of consultants, especially in skills like policy development?
A:
Well, my prompting skills leave a lot to be desired. Improving them can improve the output from AI agents. But I believe there is a limit: you cannot create a high-quality policy by prompting alone. It is simpler and faster to create one in discussion with the stakeholders for whom the policy is being developed.
AI will not take over the role of GRC consultants in the near term. However, its use will create challenges for entry-level cybersecurity professionals. An intern learns the skills of the trade by creating these first drafts; here, I used AI to create them instead. That risks shrinking an already thin cybersecurity workforce.
However, this is not an excuse to never use Gen AI. We should find a balance.
Q: Would you recommend cybersecurity professionals adopt Gen AI tools for their day to day work?
A:
Of course. Gen AI is here to stay. But some activities, like creating policies for an organization, require experience, broad cybersecurity understanding, reasonable subject-matter depth, business understanding, and a feel for the culture and ethos of an organization. These activities are best left to humans.
There are also activities where data has to be collated, cross-referenced with existing databases, and patterns identified. AI usage should start there.
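Cross-referencing of this kind is so mechanical that even crude keyword overlap gets you a useful first pass. The sketch below matches a draft risk statement against entries styled after the MIT AI risk database; the entries, the statement, and the scoring are all made up for illustration.

```python
# Illustrative keyword-overlap match of a draft risk statement against
# entries styled after the MIT AI risk database. Entries and the matching
# heuristic are made up for the sketch; a real pass would use the actual
# database and better text matching.

RISK_DB = {
    "data leakage": "Sensitive or proprietary data exposed through model inputs or outputs.",
    "shadow AI": "Use of unapproved AI tools outside sanctioned governance controls.",
    "over-reliance": "Excessive dependence on AI output without human verification.",
}

def match_risks(statement: str, db: dict[str, str]) -> list[str]:
    """Return DB entries sharing at least one word with the statement (crude)."""
    words = set(statement.lower().split())
    hits = []
    for name, desc in db.items():
        keywords = set(name.lower().split()) | set(desc.lower().split())
        if words & keywords:  # any shared word counts as a match
            hits.append(name)
    return hits

hits = match_risks("Qubers using unapproved AI tools bypasses controls", RISK_DB)
```

Collation like this is exactly the kind of task worth delegating; the judgment about what each matched risk means for the organization stays with the human.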
Just to show how differently AI performs when given a task that amounts to looking up a database, here is what I asked the AI model that already had the context for policy development:
Identify the top 10 list of AI risks for PentaQube
That’s it.
Here is the response:
Top 10 AI Risks for PentaQube
Based on PentaQube’s business as a cybersecurity consulting and SaaS tool company, here are the top 10 AI risks that should be prioritized:
1. Client Data Exposure
Risk: Inadvertent uploading of client confidential information to GenAI tools could lead to data breaches, compliance violations, and damage to client relationships. Impact: Loss of client trust, potential legal liability, regulatory penalties under DPDP Act, and damage to PentaQube’s reputation as a cybersecurity company.
2. Intellectual Property Leakage
Risk: Sharing proprietary code, algorithms, or methodologies from PentaQube’s risk assessment and TPRM solutions with GenAI tools could compromise valuable IP. Impact: Loss of competitive advantage, reduced market value of PentaQube’s SaaS products, and potential business impact.
3. Inaccurate Security Recommendations
Risk: Relying on unverified GenAI outputs for cybersecurity consulting could lead to incorrect or incomplete security recommendations to clients. Impact: Clients implementing flawed security controls, potential security breaches at client organizations, and professional liability for PentaQube.
4. AI-Generated Code Vulnerabilities
Risk: Incorporating GenAI-generated code into PentaQube’s software solutions without thorough review could introduce security vulnerabilities. Impact: Security flaws in PentaQube’s SaaS products, potential exploitation by attackers, and undermining the company’s cybersecurity credibility.
5. Data Poisoning via AI Training
Risk: GenAI providers could potentially use PentaQube’s uploaded data to train their models, creating data persistence beyond intended use. Impact: Unintended data retention, potential future exposure of information, and loss of control over proprietary information.
6. Compliance Violations
Risk: Using GenAI tools in ways that violate ISO 27001 requirements or the Indian DPDP Act provisions. Impact: Regulatory penalties, loss of certifications, and reputational damage as a cybersecurity provider that fails to maintain compliance.
7. Shadow AI Usage
Risk: Qubers using unapproved GenAI tools without proper assessment, potentially bypassing security controls. Impact: Uncontrolled data sharing, exposure to tools with inadequate security practices, and inability to monitor compliance with policy.
8. Over-Reliance on AI
Risk: Excessive dependence on GenAI tools for cybersecurity analysis without appropriate human expertise and verification. Impact: Degradation of cybersecurity expertise within the organization, missed security insights that AI cannot identify, and potential homogenization of security approaches.
9. Confidentiality Breaches via Prompt Engineering
Risk: Advanced users could potentially extract confidential information from GenAI tools through sophisticated prompt engineering techniques if the tools retain information from previous interactions. Impact: Unauthorized access to sensitive information, potential data breaches, and compromise of client or company data.
10. AI-Enhanced Social Engineering
Risk: Threat actors using GenAI tools to create convincing phishing attempts or social engineering attacks targeting PentaQube employees to gain access to client data or systems. Impact: Successful breaches of PentaQube systems, compromise of client information, and damage to the company’s cybersecurity reputation.
These risks are particularly significant for PentaQube given its dual role as both a cybersecurity consultant and a provider of security-focused SaaS solutions. The company’s credibility depends on maintaining the highest standards of data security and providing accurate security guidance, making AI governance especially critical.
This is a remarkably accurate representation of the risks of using Gen AI for PentaQube. With this as a starting point, you can elaborate on various risks and implement mitigation controls.
Q: So what is the verdict on Gen AI usage? Do I see conflicting opinions?
A:
Gen AI should be adopted; there are no conflicting opinions there. But cybersecurity professionals must understand the limits of Gen AI output. For example, in the risk case, use Gen AI to identify risks, but not to classify and mitigate them, even though Gen AI will happily give you that too. Here is the example:
It seems deceptively useful. However, the risk levels are all made up and the impacts are merely suggestions.
Concluding Remarks
AI for cybersecurity is a given; the question is how to implement it. From this small experiment, here are a few rules to follow when using AI for cybersecurity.
📃 Context is critical. If you are using AI in GRC, spend a lot of time preparing the context you provide to the LLM. (Give me six hours to chop down a tree and I will spend the first four sharpening the axe. 🪓 - attributed to Abraham Lincoln)
Identify the tasks that AI can ✅ do and, more importantly, the tasks that AI cannot ❌ do. GRC tasks that require interaction and thinking are tasks AI should not be doing. Searching, collating, and cross-referencing are tasks AI can do.
AI can help you put sentences and words together, format them, and make them more legible - like a good assistant. Use it for that. Brainstorm outside AI, document using AI.