Artificial Intelligence Labs
Explore 5 labs in Artificial Intelligence.
Prompt injection happens when untrusted input is interpreted as instruction for large language model (LLM) systems. This is misused to inject malformed instructions and bypass the AI system restrictions, policies and disclose sensitive data.
The root cause of the problem is mixture of data and system instructions.
NOTE: At the time of writing the remediation for prompt injection is an open problem and there is no best practice security recommendation.
Select a language to explore available labs for this vulnerability.
Try adjusting your language filter.
Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.
Join our secure coding and AppSec community. A discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec code review, and more.
Read more