

Why We’ve Introduced an AI-Powered Secure Code Learning Mentor

24/03/2025

It’s no secret: large language models (LLMs) are transforming how developers write code, ship features, and even fix vulnerabilities. But are LLMs actually good at securing code? As the Italian saying goes: “non è tutto rose e fiori” — it’s not all roses and flowers.

For straightforward bugs — like simple input validation issues or common coding blunders — a well-crafted prompt can often yield a patch.

But here’s the catch: while LLM-generated fixes may work for low-hanging fruit, they can also create a false sense of security when applied to real-world applications involving complex vulnerabilities, multiple files, or a microservices architecture.
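To make the "false sense of security" concrete, here is a small, hypothetical sketch (the function names and queries are illustrative, not taken from SecDim). The first function shows the kind of superficial "fix" an LLM might suggest for SQL injection: stripping quotes defeats the classic quote-based payload, but the query is still built by string interpolation, so a numeric-context payload sails straight through. The second shows the structural fix.

```python
import sqlite3

def get_user_llm_fix(conn, user_id):
    # A plausible LLM-suggested patch: remove quote characters.
    # This blocks payloads like "' OR '1'='1" but does not stop
    # injection in a numeric context such as "1 OR 1=1".
    cleaned = str(user_id).replace("'", "")
    query = f"SELECT name FROM users WHERE id = {cleaned}"  # still string-built!
    return conn.execute(query).fetchall()

def get_user_safe(conn, user_id):
    # The structural fix: a parameterised query keeps user data out
    # of the SQL grammar entirely, closing the vulnerability class.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchall()
```

A payload like `"1 OR 1=1"` leaks every row through the "fixed" version, while the parameterised version treats it as inert data — exactly the kind of subtle oversight a developer needs the skills to spot.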

This leads to a critical question: how can a developer verify what the LLM suggests if they don’t yet have the skills to spot and fix security issues themselves?
There are three ways to approach this:

  1. Train developers in secure coding.
  2. Train developers to recognise LLM mistakes.
  3. Do both.

At SecDim, we chose the third path — and built an LLM-powered chatbot deeply integrated into our platform. It’s the best way to help developers level up in both secure coding and prompt engineering.

Introducing Dr. SecDim

Dr. SecDim is an AI mentor purpose-built to support developers on their journey to mastering secure coding. Rather than spoon-feeding answers, it encourages creativity and analytical thinking — offering guidance that adapts to each developer’s unique way of building software.

One of its key strengths is helping developers recognise the limitations and mistakes of LLMs. Let’s say a developer already knows about a specific vulnerability. They can use Dr. SecDim to test AI-generated solutions, understand where things might go wrong, and spot subtle errors or oversights. Even when the developer isn’t deeply familiar with a particular vulnerability, this kind of hands-on exploration helps build intuition — and sharpens their ability to vet LLM output critically.

With Dr. SecDim, we’re giving developers back the flexibility (analytical thinking) and creativity (lateral thinking) needed to fix vulnerable code the right way — without falling into the trap of formulaic, one-size-fits-all solutions. Secure coding isn’t about following exact recipes. Telling developers to “just change this line,” “use that function,” or “load this library” might sound efficient — but it’s often ineffective and feels unnatural.
(We dive deeper into this in our post, Why Secure Code Training Sucks.)

Understanding the Context and Objectives

Dr. SecDim is deeply integrated with the SecDim Cloud Development Environment (CDE). It understands the structure and intent behind each secure coding challenge and lab. This context allows it to deliver guidance and explanations that are not just technically correct, but aligned with the learning objectives of each exercise.

A savvy engineer usually isn’t asking, “Which line do I change to fix this SQL injection?” — they’re asking, “How can I apply SecDim’s defensive design patterns to improve the architecture and avoid this class of issue altogether?”

Dr. SecDim is built to support that level of thinking. It helps users move beyond patching bugs and into designing more resilient systems — the kind of mindset shift that actually sticks.
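As a rough illustration of that mindset shift (this is a generic sketch, not SecDim’s actual pattern library), one way to eliminate a vulnerability class rather than patch a single bug is to confine all database access behind a small gateway whose public API cannot express string-built SQL — every call site is parameterised by construction.

```python
import sqlite3

class UserRepository:
    """Hypothetical data-access gateway: statement text is always a
    fixed constant, and user data travels only through bound
    parameters, so no caller can reintroduce SQL injection."""

    def __init__(self, conn):
        self._conn = conn

    def add(self, user_id, name):
        self._conn.execute(
            "INSERT INTO users (id, name) VALUES (?, ?)", (user_id, name)
        )

    def find_by_name(self, name):
        cur = self._conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (name,)
        )
        return cur.fetchall()
```

With this shape, an injection payload passed to `find_by_name` is just an inert string that matches no user — the design, not a line-level patch, provides the guarantee.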

Real-Time Code Feedback

As developers work through challenges, Dr. SecDim analyses their code changes in real time — reviewing each attempt and explaining the root causes behind incorrect fixes. This kind of dynamic guidance mirrors how developers actually work: try, fail, learn, pivot.

Security Fixes That Are Memorable

Dr. SecDim doesn’t just flag issues — it offers practical, actionable fixes. By breaking down the why behind each recommendation, it helps users understand how specific changes improve security. The goal isn’t just to get the code to “pass” — it’s to build lasting, transferable knowledge about writing secure software.

An Interactive Learning Experience

Dr. SecDim’s conversational style makes it feel like you’ve got a mentor on standby — ready to jump in when you’re stuck, or just curious about best practices. Whether you’re debugging a tricky issue or exploring secure design patterns, it adapts to your pace and learning style, offering just enough guidance to keep you moving without killing the learning momentum.

Better Prompt Engineering for Secure Coding

Using Dr. SecDim doesn’t just improve secure coding skills — it also helps users become better at working with LLMs. Over time, developers get better at spotting AI mistakes, understanding limitations, and crafting stronger prompts to get more accurate, useful results. A well-structured prompt can lead to clear guidance, solid explanations, and useful code examples.

But even a poorly worded prompt can turn into a learning opportunity — taking users on a deeper exploration and guiding them (with a few bumps) toward the right solution.

In both cases, the user wins: either through efficiency, or through experience.

Conclusion

This is a powerful reminder that as AI tools evolve, the role of developers will increasingly shift from writing every line of code to focusing on oversight, rigorous reviews, comprehensive testing, and crafting precise prompts to guide AI outputs.
It’s no longer enough to just know how to code securely. Developers also need to get good at spotting LLM mistakes and understanding where AI might fall short.
Dr. SecDim is built to help with both.

You can learn more about Dr. SecDim on our Support page, or try it out in one of our Play challenges.


Play AppSec WarGames

Want to skill-up in secure coding and AppSec? Try SecDim Wargames to learn how to find, hack and fix security vulnerabilities inspired by real-world incidents.


Got a comment?

Join our secure coding and AppSec community: a discussion board to share and discuss all aspects of secure programming, AppSec, DevSecOps, fuzzing, cloudsec, AIsec, code review, and more.
