Artificial Intelligence Labs
Explore 1 lab in Artificial Intelligence.
Line-jumping is a prompt-injection variant where an attacker injects control characters, unexpected newlines, or specially crafted tokens into an MCP input (tool manifest, example prompts, or tool output) so the LLM interprets or executes content out of its intended sequence. By forcing the model to "jump" to attacker-placed lines (hidden instructions, alternate sections, or command-like fragments), the adversary induces the model to follow malicious directives that were meant to remain inert or contextual. This breaks assumptions about linear parsing and can lead to unauthorised actions, data leakage, or command execution via tools.
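As an illustration of one mitigation, a consumer of MCP tool output could defensively strip control characters and quarantine instruction-like fragments before handing the text to the model. The sketch below is a minimal, assumption-laden example: the function name, the regex heuristics, and the control-character ranges are ours for illustration and are not part of any MCP SDK.

```python
import re

# Control characters (other than \n and \t) plus zero-width and line/paragraph
# separators that can hide text or make the model "jump" lines. (Illustrative
# ranges chosen by us, not an exhaustive or standard list.)
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f\u200b-\u200f\u2028\u2029]")

# Heuristic patterns for injected instruction fragments in tool output.
# These are example heuristics only; real filters need broader coverage.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous|the above)|system prompt|you are now|"
    r"<\|.*?\|>|\[INST\]|###\s*instruction)",
    re.IGNORECASE,
)

def sanitize_tool_output(text: str) -> tuple[str, list[str]]:
    """Strip control characters, then split the output into lines that look
    safe and lines that look like injected directives, so the caller can
    quarantine the latter instead of feeding them to the LLM."""
    cleaned = CONTROL_CHARS.sub("", text)
    lines = cleaned.splitlines()
    flagged = [line for line in lines if SUSPICIOUS.search(line)]
    safe = [line for line in lines if not SUSPICIOUS.search(line)]
    return "\n".join(safe), flagged
```

For example, a tool result containing a null byte followed by "Ignore previous instructions..." would come back with the directive line quarantined in the flagged list rather than in the text shown to the model. Pattern filters like this are best-effort; they reduce, but do not eliminate, the attack surface.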