🎄 Join our Annual Holiday wargame and win prizes!


Model Context Protocol Challenges

28/08/2025

Introducing AI MCP Challenges:

Model Context Protocols (MCP) are fast becoming the backbone of how large language models interact with existing codebases. They enable LLMs to fetch data, integrate with tools, and carry out tasks beyond simple text completion. But as with any new protocol, they also expand the attack surface.

From an application security perspective, MCP introduces new trust boundaries:

  • The client trusts the server to provide accurate, safe context.

  • The server trusts the client to make requests in expected ways.

  • Both trust the protocol layer not to leak, corrupt, or allow malicious injection of data.

That’s a lot of assumptions. And where assumptions pile up, vulnerabilities emerge.

These boundaries can be exploited if not secured. This is where our new AI MCP Challenges come in. By practicing hands-on attacks and defenses, you’ll learn what “secure-by-design” really means for AI protocols.

We’ve built a series of AI MCP Challenges that let you think like an attacker, then flip roles and act like a defender. Each scenario is modeled after a potential real-world class of protocol vulnerability:

  • Model Poison – Can you spot and stop malicious data injection before it poisons the model?

  • Tool Shadowing – What happens when a malicious tool masquerades as a trusted one?

  • Line Jumping – Boundaries are meant to be broken. Can you enforce them?

  • Tool Collision – When multiple tools overlap or conflict, who wins?

  • Rug Pull – The client trusts the server, what if the server is compromised?

Each challenge isn’t just about breaking things — it’s about understanding where the cracks are and patching them.

For a limited time, these MCP challenges are free for everyone to try via our Weekly Incident Game.

Whether you’re a developer curious about secure AI integrations, a security engineer poking at LLMs, or just a geek who loves protocols and exploits, you’ll find something fun (and maybe a bit dangerous) here.

:backhand_index_pointing_right: Go break things. See how secure (or insecure) AI protocols and their tool calls really are.

Deco line
Deco line

Play AppSec WarGames

Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.

Deco line
Deco line

Got a comment?

Join our secure coding and AppSec community. A discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec code review, and more.

Read more