Artificial Intelligence Labs
Explore 1 lab in Artificial Intelligence.
Improper Output Handling occurs when outputs generated by Large Language Models (LLMs) are not validated, sanitised, or managed before being passed to downstream components or systems. This vulnerability arises because LLM-generated content is shaped by input prompts, effectively giving users indirect control over downstream functionality. Mitigation therefore focuses on treating LLM output as untrusted and verifying its safety before further processing. Exploitation can lead to security issues such as cross-site scripting (XSS) or cross-site request forgery (CSRF) in web browsers, and server-side request forgery (SSRF), privilege escalation, or remote code execution on backend systems.
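As a minimal Python sketch of the web-facing case: if LLM output is embedded into an HTML response without escaping, a prompt-injected model can emit active markup. The function name and the simulated output below are illustrative, not part of any specific lab.

```python
import html


def render_llm_output(raw_output: str) -> str:
    """Treat LLM output as untrusted input: escape HTML metacharacters
    before embedding the text in a web page, so injected markup is
    rendered as inert text rather than executed (mitigating XSS)."""
    return html.escape(raw_output)


# Simulated malicious model output, e.g. produced via prompt injection.
malicious = '<script>alert("xss")</script>'
safe = render_llm_output(malicious)
# The angle brackets and quotes are now HTML entities, so the payload
# can no longer execute in a browser.
```

Escaping at render time is only one layer; the same principle (validate or encode per downstream context) applies when LLM output is passed to shells, SQL, or internal HTTP clients.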
Select a language to explore available labs for this vulnerability.
Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.
Join our secure coding and AppSec community: a discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec, code review, and more.