Command and Scripting Interpreter LLM
Adversaries craft malicious inputs (prompts) that are interpreted and executed by the Language Model, leading to unintended actions. The LLM itself acts as the interpreter of the injected commands.
Metadata
- Severity: low
- Slug: command-and-scripting-interpreter-llm
MITRE
- T1055.002: Process Injection: Portable Executable Injection
- T1059.006: Command and Scripting Interpreter: Python