Maturity & pilotage

How to manage your teams' DevSecOps maturity

Move from declarative maturity to measured maturity. Methodology, KPIs and board-ready reporting.

  • 2.3× faster progression when you measure maturity
  • +37% typical gap between declared and measured scores
  • <15% of SAMM programs get past level 2

Why DevSecOps maturity became a board-level KPI

In 2026, no executive committee funds "security" without a metric. NIS2, DORA, cyber insurance, M&A due diligence: all now demand an objective measure of maturity.

A CISO walking into a board meeting with a slide saying "we have a SAST and a DAST" has lost the conversation before it started. The one showing a quantified maturity score, a benchmark by BU and a 12-month trajectory has already won it.

The challenge is no longer convincing that security is needed. It's proving that what you do produces a measurable result.

SAMM, DSOMM, BSIMM: which one to choose?

Three frameworks serve as references for measuring DevSecOps maturity. They are not interchangeable: each answers a different need.

OWASP SAMM

Software Assurance Maturity Model. A generalist framework covering 5 functions (Governance, Design, Implementation, Verification, Operations) and 15 security practices. Recommended for a high-level program view.

DSOMM

DevSecOps Maturity Model (also OWASP). More operational and more CI/CD pipeline-oriented than SAMM. Ideal for a tech-heavy mid-market org that wants to measure maturity at the delivery pipeline level.

BSIMM

Building Security In Maturity Model. A descriptive framework based on observation of hundreds of real programs. Useful for benchmarking your org against its sector, less so for day-to-day pilotage.

Our recommendation

For most French mid-market and enterprise organizations, start with DSOMM (operational, measurable) and complement it with SAMM for board reporting. BSIMM is useful for external benchmarking phases.

The limits of declarative questionnaires

The legacy SAMM measurement method relies on annual questionnaires. One person per team answers 30–50 questions about their practices. The score is computed from those answers.

This method has three structural problems.

  1. Self-assessment bias: nobody ticks "level 1" for their own team. The average is always pulled upward.
  2. Cold data: an annual questionnaire does not reflect the real state of the pipeline at audit time. Nothing is tracked between two questionnaires.
  3. Not actionable: a SAMM score in a PDF doesn't tell you which vulnerability to fix tomorrow morning. It doesn't plug into any tool.

"We ran a SAMM questionnaire every year. The score kept climbing. So did incidents. I stopped the questionnaire."

CISO, industrial mid-market (anonymized)

Measured vs declared maturity

The revolution of recent years is the ability to compute maturity from real data rather than declarations. Concretely, instead of asking "do you run SAST?", we directly observe in CI/CD pipelines whether a SAST runs, how often, on how many repos, and with what triage rate.

What changes

  • The score becomes continuous rather than annual.
  • It reflects reality, not perception.
  • It's actionable: you can click on a gap to see exactly which pipeline fails the practice.
  • It's comparable across teams because the same method is applied.
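The "observe rather than ask" principle can be sketched in a few lines. This is a minimal illustration, not Cyber Coach's actual implementation: the tool markers and config format are assumptions, and a real collector would query CI/CD APIs rather than grep config text.

```python
# Minimal sketch: infer "SAST actually runs" from CI config content
# instead of a questionnaire answer. Tool names are illustrative.
SAST_MARKERS = ("semgrep", "codeql", "sonar", "bandit")

def repo_runs_sast(ci_config_text: str) -> bool:
    """True if any known SAST tool appears in the repo's CI config."""
    text = ci_config_text.lower()
    return any(marker in text for marker in SAST_MARKERS)

def sast_coverage(configs: dict[str, str]) -> float:
    """Share of repos whose CI config invokes a SAST tool (0.0 to 1.0)."""
    if not configs:
        return 0.0
    covered = sum(repo_runs_sast(cfg) for cfg in configs.values())
    return covered / len(configs)
```

With two repos where only one pipeline calls a scanner, `sast_coverage` returns 0.5: a measured signal a declarative questionnaire would likely report as "yes, we run SAST".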
The surprising gap

In organizations moving from a declarative SAMM to a measured SAMM, the declared score overstates the measured one by 37% on average. In other words: you're probably less mature than you think.
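The +37% figure is a simple relative gap between the two scores. A worked example with made-up numbers:

```python
def maturity_gap(declared: float, measured: float) -> float:
    """Relative overstatement of the declared score vs the measured one, in %."""
    return (declared - measured) / measured * 100

# A team declaring SAMM 2.6 while measuring 1.9 overstates by roughly 37%.
```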

The 5 dimensions to measure

A useful maturity score is computed across 5 dimensions, matching the 5 SAMM functions. Each must be instrumented with objective signals from existing tools.

  1. Governance — existence and enforcement of a security policy, team training, roles and responsibilities.
  2. Design — threat modeling, security architecture reviews, security requirements in user stories.
  3. Implementation — use of SAST/SCA tools, secrets management, dependency security, build security.
  4. Verification — dynamic testing, pentests, bug bounty, security code reviews.
  5. Operations — production vulnerability management, monitoring, incident response, patch management.
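Instrumenting these dimensions starts with a mapping from each one to the objective signals that feed it. The signal names below are illustrative assumptions, not an official SAMM taxonomy:

```python
# One possible dimension-to-signal mapping; signal names are made up
# for illustration and would come from your own tool integrations.
DIMENSION_SIGNALS = {
    "Governance": ["policy_enforced", "training_completion_rate"],
    "Design": ["threat_model_exists", "security_requirements_in_stories"],
    "Implementation": ["sast_on_main", "sca_enabled", "secrets_scanning"],
    "Verification": ["dast_scheduled", "pentest_last_12m"],
    "Operations": ["patch_sla_met", "incident_runbook_tested"],
}
```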

Step-by-step scoring methodology

Here is the methodology Cyber Coach applies to compute an objective per-team maturity score.

  1. Collect signals — API connections to existing security, CI/CD and ticketing tools.
  2. Normalize — map each signal to a SAMM/DSOMM practice. Example: "SAST runs on main at every push" = positive Implementation-1 signal.
  3. Score each practice — for each practice, compute a 0–100 score based on multiple weighted signals.
  4. Aggregate by dimension — average the practices of a dimension to get the dimension score.
  5. Aggregate by team — team-level score across the 5 dimensions, projectable onto a radar.
  6. Aggregate by BU — weighted average of teams in a BU, usable in board reporting.
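Steps 3 to 6 above are plain weighted arithmetic. A minimal sketch, assuming boolean signals and headcount-based BU weighting (both are illustrative choices, not the only valid ones):

```python
def practice_score(signals: dict[str, bool], weights: dict[str, float]) -> float:
    """0-100 score for one practice from weighted boolean signals."""
    total = sum(weights.values())
    hit = sum(w for name, w in weights.items() if signals.get(name, False))
    return 100 * hit / total if total else 0.0

def dimension_score(practice_scores: list[float]) -> float:
    """Dimension score = plain average of its practice scores."""
    return sum(practice_scores) / len(practice_scores) if practice_scores else 0.0

def bu_score(team_scores: dict[str, float], team_sizes: dict[str, int]) -> float:
    """BU score = team scores weighted by headcount (one possible weighting)."""
    total = sum(team_sizes.values())
    return sum(team_scores[t] * n for t, n in team_sizes.items()) / total
```

For example, a practice with two signals weighted 2.0 and 1.0 where only the first fires scores about 67/100, and two teams of 10 and 30 people scoring 80 and 40 give their BU a headcount-weighted 50.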

Benchmarking your BUs against each other

Internal benchmarking is often more powerful than sector benchmarking. It surfaces gaps between teams within the same organization and creates a progress dynamic.

Three concrete uses of internal benchmarking:

  • Identify leading teams and make them references for others.
  • Target lagging teams and allocate coaching resources.
  • Create a monthly ranking visible to operational management — provided it is used as a progress tool, not a sanction one.
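The monthly ranking itself is trivial once per-team scores exist. A sketch (team names invented):

```python
def team_ranking(scores: dict[str, float]) -> list[tuple[str, float]]:
    """Teams sorted best-first; the head of the list becomes the internal
    reference, the tail gets coaching resources."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The hard part is not the sort but the framing: publish it as a progress tool, with trajectories alongside absolute ranks.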

Get your maturity radar in under 15 minutes

Cyber Coach's Free plan computes your maturity score across the 5 dimensions, per team and per BU. No credit card.

Board reporting: 3 charts that speak to the board

Boards don't read 40-page PDFs. For a DevSecOps report to pass executive review, you need three charts — and not one more.

  1. The 5-dimension radar — visualizes the current maturity of the organization, projected on the 5 SAMM functions. Readable in 10 seconds.
  2. The 12-month trajectory — evolution curve of the global score. Shows direction of travel. More important than absolute level.
  3. The top 3 critical gaps — a short, actionable list of the 3 priority fixes with associated business impact.
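The third chart is a selection problem: take the lowest-scoring practices. A minimal sketch (practice names invented):

```python
def top_gaps(practice_scores: dict[str, float], n: int = 3) -> list[str]:
    """Lowest-scoring practices first: the shortlist for the third chart.
    Business-impact annotation happens downstream, by a human."""
    ranked = sorted(practice_scores.items(), key=lambda kv: kv[1])
    return [practice for practice, _ in ranked[:n]]
```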

A good board presentation fits in 3 slides: where we are, where we're going, what we'll do next. Nothing more.

Frequently asked questions

Should we start with SAMM or DSOMM?

DSOMM if you want operational pilotage at the delivery pipeline level. SAMM if you want a broader view including governance and program management. In practice, start with DSOMM and complement with SAMM for reporting.
