🎄 Join our Annual Holiday wargame and win prizes!


Prompt Injection

Prompt injection happens when untrusted input is interpreted as instruction for large language model (LLM) systems. This is misused to inject malformed instructions and bypass the AI system restrictions, policies and disclose sensitive data.

The root cause of the problem is mixture of data and system instructions.

Remediation

  • Utilise token preventative feature that comes with some API (see https://platform.openai.com/docs/api-reference/chat/create#chat/create-stop[OpenAI API stop parameter].
  • Provide a more restrictive system instructions.

NOTE: At the time of writing the remediation for prompt injection is an open problem and there is no best practice security recommendation.

Metadata

  • Severity: high
  • Slug: prompt-injection

CWEs

  • 94: Improper Control of Generation of Code ('Code Injection')
  • 1427: Improper Neutralization of Input Used for LLM Prompting

OWASP

  • A03:2021: Injection
  • LLM01:2025: Prompt Injection

Available Labs

Select a language to explore available labs for this vulnerability.

Deco line
Deco line

Play AppSec WarGames

Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.

Deco line
Deco line

Got a comment?

Join our secure coding and AppSec community. A discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec code review, and more.

Read more