IBM Cloud Global

Cloud Global

Our mission is to provide clients with an online user community of industry peers and IBM experts, to exchange tips and tricks, best practices, and product knowledge. We hope the information you find here helps you maximize the value of your IBM Cloud solutions.

 View Only

Prompt Security with IBM Cloud: understanding and mitigating GenAI risks

By Gautam Zalpuri posted 8 days ago

  

TL;DR: As GenAI becomes embedded in enterprise workflows, prompt security is emerging as a critical risk surface. This blog explores how IBM Cloud services - IAM, Secrets Manager, Key Protect, and more - help secure prompts using Zero Trust principles and the OWASP Top 10 for LLMs.



Introduction


As Generative AI becomes embedded across enterprise workflows, the security of prompts - the instructions that drive large language models (LLMs) - has become a critical, yet often overlooked, surface area. Prompts can encode sensitive business logic, internal process instructions, API keys, or even proprietary reasoning. If misused or exposed, they can lead to data leakage, model abuse, or operational risk.


IBM watsonx, IBM’s enterprise AI platform provides a secure foundation for developing and governing AI models at scale. When it comes to deploying GenAI applications - leveraging managed infrastructure and services -  IBM Cloud offers the tools needed to secure those applications with Zero Trust principles. Learn about watsonx here.


This blog explores how to secure prompts using Zero Trust principles, the OWASP Top 10 for LLMs, and IBM Cloud’s enterprise-grade security services.



  • Learn more about Zero Trust here.

  • Learn more about AI attack surface here.

  • OWASP provides a risk based security framework specifically tailored for LLMs and Gen AI systems. Learn more here.



Why Prompt Security Matters


Prompts are not just inputs - they are executable instructions that can trigger actions, invoke apis or influence decisions. Prompt level risks are no longer theoretical. As LLM-based tools become embedded in finance, HR, Customer Support, DevOps, and other regulated workflows, prompt misuse becomes a real world risk.



How IBM Cloud helps you secure your GenAI applications with Zero Trust Principles:



  1. Fine grained IAM + Context Based Restrictions (CBR) with IBM Cloud IAM: Prompts should only be constructed, executed and read by authorized identities. IBM Cloud IAM helps enforce this via least-privilege roles based on its Attribute Based Access Control (ABAC) system. CBR enforces dynamic access policies based on attributes like source IP, VPC or endpoint type, which helps to further shrink the attack surface. For example, customers may choose to configure a customer support LLM to restrict access to service endpoints in the EU via a corporate VPN.

    • This is important because it prevents over-privileged access that could be used for exfiltration or prompt injection vectors, mapping directly to LLM06: Prevent Excessive Agency, and LLM01: Restrict injection surface via access control.

    • To learn more about IBM Cloud IAM, start here.

    • To learn more about CBR, start here.



  2. Secrets Manager for prompt confidentiality: Integrating prompt management into CI/CD pipelines from the outset ensures that security is embedded early in the development process, aligning with shift-left principles. Coupled with Secrets Manager, this approach enables secure, auditable, and repeatable prompt deployments—ensuring sensitive logic is never hardcoded and remains protected throughout its lifecycle. Prompt content can be versioned and tracked, avoiding static embedding in code or infrastructure. Secrets Manager secures secrets using encryption root keys from Key Protect, and integrates natively with IAM and CBR, enabling fine-grained access control and isolation by workload, role, or environment.

    • This aligns with OWASP LLM01, LLM02: Sensitive Information Disclosure, LLM04: Data and Model Poisoning, LLM07: System Prompt Leakage (by avoiding hard coding secrets in system prompts).

    • To learn more about Secrets Manager, start here.



  3. Key Protect for data at rest encryption: encrypt model configuration files, prompt templates and logs using customer managed keys (BYOK). This helps protect data even in the event of a storage compromise.

    • This aligns with OWASP LLM04: Data and Model Poisoning, LLM02: Ensure encrypted handling of prompt metadata.

    • To learn more about Key Protect, start here.



  4. IBM Cloud Logs for full activity tracking: logging and activity tracking across all IBM Cloud services can help with auditability, traceability and compliance. To learn more about IBM Cloud Logs, start here.

  5. IBM Cloud Projects and Deployable Architectures: start secure with an IaC shift-left approach with pre-built, modular blueprints that enable secure, scalable, and efficient deployment of cloud-native solutions. To learn more about Deployable Architectures, start here Learn about Projects here



Conclusion


Generative AI introduces transformative potential - but also introduces a new class of risks. Prompts are not user inputs; they are executable instructions that can trigger actions, invoke sensitive systems, or influence decisions. A secure-by-design GenAI architecture must include:



  • identity driven access controls (IAM + CBR)

  • runtime secrets isolation (Secrets Manager)

  • persistent encryption and key control (Key Protect)

  • governance of prompt life cycle (IBM Cloud Projects, IAM, Secrets Manager, IBM Cloud Logs).


IBM Cloud offers a composable, secure foundation to build and scale GenAI applications with confidence, when configured and used in accordance with best practices and the IBM Cloud shared responsibility model, described here.


For best practices related to IBM Cloud account security, start here.



If you’re ready to test drive an llm deployment on IBM Cloud Code Engine before locking it down, start here.



Sign up for an IBM Cloud Account here.



#community-stories3

0 comments
49 views

Permalink