🎄 Join our Annual Holiday wargame and win prizes!


Command and Scripting Interpreter LLM

Adversaries craft malicious inputs (prompts) that are interpreted and executed by the Language Model, leading to unintended actions. The LLM itself acts as the interpreter of the injected commands.

Metadata

  • Severity: low
  • Slug: command-and-scripting-interpreter-llm

MITRE

  • T1055.002: Process Injection: Portable Executable Injection
  • T1059.006: Command and Scripting Interpreter: Python

Available Labs

Select a language to explore available labs for this vulnerability.

Deco line
Deco line

Play AppSec WarGames

Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.

Deco line
Deco line

Got a comment?

Join our secure coding and AppSec community. A discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec code review, and more.

Read more