Artificial Intelligence Labs
Explore 3 labs in Artificial Intelligence.
Insecure supply chain vulnerabilities occur when a machine learning (ML) model, particularly a large language model (LLM), is sourced from an untrusted or compromised origin. An adversary can exploit this weakness by embedding malicious behaviour or backdoors into the model. These tainted models may produce manipulated outputs, leak sensitive input data, or behave unpredictably when triggered by specific inputs.
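One common defence against a compromised model source is to pin and verify a cryptographic digest of the model artifact before loading it. Below is a minimal sketch, assuming a locally downloaded weights file and a digest pinned from a trusted release channel; the path and helper names are illustrative, not part of any specific framework.

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Stream the file in chunks so large model weights never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path: str, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact's digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model integrity check failed for {path}: "
            f"expected {expected_sha256}, got {actual}"
        )


# Illustrative usage: pin the digest alongside your dependency manifest,
# and call verify_model("weights.bin", PINNED_DIGEST) before deserialising.
```

A digest check does not detect a backdoor planted before the digest was published, but it does stop silent tampering between the publisher and your deployment.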
Select a language to explore available labs for this vulnerability.
Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.
Join our secure coding and AppSec community: a discussion board for sharing and discussing all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec, code review, and more.