

LLM Information Disclosure

Sensitive information poses risks to both the LLM itself and its application context. This includes personally identifiable information (PII), financial data, health records, confidential business information, security credentials, and legal documents. Proprietary models also carry their own sensitive assets, such as unique training methodologies and source code, particularly in closed or foundation models, making them targets for theft or misuse.

Remediation

  • Limit the amount of sensitive information used during model training and inference to the bare minimum necessary for functionality.
  • Mask or redact sensitive details before using them in prompts or as part of model inputs.
  • Apply differential privacy to anonymise training data and prevent leakage of sensitive information through model outputs.
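As a minimal sketch of the masking step above, the example below redacts common PII patterns from user input before it is interpolated into a prompt. The patterns and placeholder labels are illustrative assumptions; a production system would use a vetted PII-detection library rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real deployments should rely on a
# dedicated PII-detection service or library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    is included in a model prompt or stored for training."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

user_input = "Contact jane.doe@example.com, SSN 123-45-6789."
prompt = f"Summarise this support ticket: {redact(user_input)}"
```

Redacting at the input boundary keeps raw PII out of prompts, logs, and any data later reused for fine-tuning, which complements the training-time controls listed above.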

Metadata

  • Severity: high
  • Slug: llm-information-disclosure

CWEs

  • 359: Exposure of Private Personal Information to an Unauthorized Actor

OWASP

  • LLM02:2025: Sensitive Information Disclosure

Available Labs

Open Artificial Intelligence labs in SecDim Play for this vulnerability.


Play AppSec WarGames

Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.


Got a comment?

Join our secure coding and AppSec community: a discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec, code review, and more.
