Insecure Supply Chain
Insecure supply chain vulnerabilities occur when a machine learning (ML) model, particularly a large language model (LLM), is sourced from an untrusted or compromised origin. An adversary can exploit this weakness by embedding malicious behaviour or backdoors into the model. These tainted models may produce manipulated outputs, leak sensitive input data, or behave unpredictably when triggered by specific inputs.
Remediation
- Use secure deserialisation libraries when loading untrusted models, preferring formats that cannot execute arbitrary code on load.
- Use cryptographic methods like checksums, signatures, or hash verification to ensure the model has not been tampered with.
- Only acquire models from reputable and verified sources, such as official repositories or known organisations with strong security practices.
- Create a standardised procedure for reviewing and approving models before deploying them in production environments.
- Use containerisation or sandboxes to isolate the LLM's execution environment, preventing unauthorised access to sensitive data or system components.
Metadata
- Severity: critical
- Slug: insecure-supply-chain
CWEs
- 1357: Reliance on Insufficiently Trustworthy Component
OWASP
- LLM03:2025: Supply Chain
- A9:2017: Using Components with Known Vulnerabilities