Artificial Intelligence Labs
Explore 1 lab in Artificial Intelligence.
LLM plugins are extensions that operate automatically during user interactions and are driven entirely by the model, without application-level control over their execution. To address context-size limitations, plugins often accept free-text inputs from the model without validation or type checking. This creates an opportunity for attackers to craft malicious requests, potentially leading to severe consequences such as remote code execution, data exfiltration, or privilege escalation.
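The validation gap described above can be sketched in a few lines. The plugin name, input format, and allow-list below are hypothetical, used only to illustrate the contrast: instead of passing the model's free text straight into a shell command or query, the handler type-checks and allow-lists every field before acting.

```python
import re

# Hypothetical plugin handler. The model supplies free-text input such as
# "lookup:42". An insecure plugin would use that text directly; this sketch
# validates structure, type-checks the id, and allow-lists the action first.

ALLOWED_ACTIONS = {"lookup", "summarize"}

def parse_plugin_input(raw: str) -> dict:
    """Parse 'action:record_id' from model-supplied text, rejecting
    anything that is not an allow-listed action with a numeric id."""
    match = re.fullmatch(r"(\w+):(\d{1,10})", raw.strip())
    if not match:
        raise ValueError("malformed plugin input")
    action, record_id = match.group(1), int(match.group(2))
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} not permitted")
    return {"action": action, "record_id": record_id}

# Well-formed input is parsed into typed fields.
print(parse_plugin_input("lookup:42"))

# Injection-style input fails the structural check and is rejected.
try:
    parse_plugin_input("lookup:42; rm -rf /")
except ValueError as exc:
    print("rejected:", exc)
```

Because `re.fullmatch` requires the entire string to match, trailing shell metacharacters or extra arguments never reach the code that executes the action, which closes off the injection path the paragraph describes.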