
Rethinking Code Quality Analysis

By Glen Baker, building open source tooling

The Problem with Traditional Static Analysis
#

When you run a typical static analysis tool on your codebase, you get flooded with alerts. Hundreds of warnings about cyclomatic complexity, function length, and nesting depth. The tools treat all complexity equally—a simple validation function with 20 identical if statements gets the same severity rating as genuinely complex business logic with intricate state transitions.

The result? Alert fatigue. Developers learn to ignore the noise, and genuinely risky code gets lost in the shuffle.

A Tale of Two Functions
#

Try it yourself: All code examples in this post are available at github.com/iepathos/debtmap/tree/master/examples/why-debtmap-samples. Clone the repo and run debtmap analyze . to see the actual output.

Consider these two functions:

Function A: High Cyclomatic Complexity, Low Risk
#

fn validate_config(config: &Config) -> Result<()> {
    if config.output_dir.is_none() {
        return Err(anyhow!("output_dir required"));
    }
    if config.max_workers.is_none() {
        return Err(anyhow!("max_workers required"));
    }
    if config.timeout_secs.is_none() {
        return Err(anyhow!("timeout_secs required"));
    }
    // ... 15 more identical checks
    Ok(())
}

Traditional tools say: “Cyclomatic complexity: 21 - WARNING!”

Reality: An experienced developer reads this in seconds. It’s a repetitive validation pattern. Yes, 20 branches, but they’re all identical in structure. Low cognitive load, low risk.

Function B: Moderate Cyclomatic Complexity, High Risk
#

fn reconcile_state(current: State, target: State) -> Result<Actions> {
    let mut actions = vec![];

    if current.mode != target.mode {
        if current.has_active_connections() {
            if target.mode == Mode::Offline {
                actions.push(drain_connections());
                if current.has_pending_writes() {
                    actions.push(flush_writes());
                }
            }
        } else if target.allows_reconnect() {
            actions.push(establish_connections());
        }
    }

    if let Some(diff) = calculate_config_diff(&current, &target) {
        if diff.requires_restart() {
            actions.push(schedule_restart());
        }
    }

    Ok(actions)
}

Traditional tools say: “Cyclomatic complexity: 9 - acceptable”

Reality: This function has nested conditionals with complex interdependencies. What happens if mode changes but has_active_connections() is false? The cognitive load is high, the logic is intricate, and bugs could easily hide in edge cases.

Lower cyclomatic score, vastly different actual complexity.

Traditional Tools vs Debtmap: Side-by-Side Comparison
#

Let’s see how a traditional static analysis tool (Codacy with Lizard) handles these exact functions compared to debtmap. Both outputs are from running the tools on our sample code.

Codacy/Lizard Output
#

================================================
  NLOC    CCN   token  PARAM  length  location
------------------------------------------------
      63     21    421      1      63 validate_config@30-92@./src/validation.rs
      21      8    140      2      24 reconcile_state@81-104@./src/state_reconciliation.rs

!!!! Warnings (cyclomatic_complexity > 15) !!!!
================================================
      63     21    421      1      63 validate_config@30-92@./src/validation.rs

Codacy’s verdict: validate_config() has CCN=21 (⚠️ WARNING), while reconcile_state() has CCN=8 (✅ OK).

The traditional tool flags the repetitive validation function as critical and completely misses the genuinely complex state reconciliation logic.

Debtmap Output
#

#1 SCORE: 4.15 [MEDIUM]
├─ LOCATION: ./src/state_reconciliation.rs:81 reconcile_state()
├─ IMPACT: -4 complexity, -1.5 risk
├─ COMPLEXITY: cyclomatic=9 (dampened: 4, factor: 0.51),
   est_branches=9, cognitive=16, nesting=4, entropy=0.28
├─ WHY THIS MATTERS: Coordinator pattern detected with 4 actions
   and 2 state comparisons. Extracting transitions will reduce
   complexity from 9/16 to ~4/11.
├─ RECOMMENDED ACTION: Extract state reconciliation logic into
   transition functions

#2 SCORE: 3.10 [LOW]
├─ LOCATION: ./src/validation.rs:30 validate_config()
├─ IMPACT: -10 complexity, -1.0 risk
├─ COMPLEXITY: cyclomatic=21 (dampened: 10, factor: 0.50),
   est_branches=21, cognitive=6, nesting=1, entropy=0.33
├─ WHY THIS MATTERS: Repetitive validation pattern detected
   (entropy 0.33, 20 checks). Low entropy (0.33) indicates
   boilerplate, not complexity. Adjusted complexity: 21 → 13
   (reflects actual cognitive load). Refactoring improves
   maintainability and reduces error-prone boilerplate.
├─ RECOMMENDED ACTION: Replace 20 repetitive validation checks
   with declarative pattern

Debtmap’s verdict: reconcile_state() has higher priority than validate_config().

Notice what debtmap detects that Codacy completely misses:

  1. Pattern Recognition:

    • Identifies the “Coordinator pattern” in reconcile_state() with 4 actions and 2 state comparisons
    • Detects the “Repetitive validation pattern” (entropy 0.33) in validate_config()
  2. Cognitive Load Adjustment:

    • validate_config(): Dampens the complexity score by 50% (factor 0.50) and reports an adjusted complexity of 21 → 13, because it’s “boilerplate, not complexity”
    • reconcile_state(): Recognizes high cognitive load (16) despite lower cyclomatic complexity
  3. Actionable Recommendations:

    • State reconciliation: “Extract state reconciliation logic into transition functions” (specific architectural guidance)
    • Validation: “Replace 20 repetitive validation checks with declarative pattern” (suggests the right refactoring approach)
  4. Impact Quantification:

    • Shows exactly how much complexity reduction to expect: 9/16 → 4/11 for the coordinator pattern
    • Validation impact is higher (-10 complexity) but lower priority because it’s mechanical, not cognitive

This is the fundamental difference: Traditional tools measure branch count. Debtmap measures cognitive difficulty and provides architectural insight.

How Debtmap Is Different
#

Debtmap takes a fundamentally different approach by combining multiple analysis techniques:

1. Entropy-Based Complexity
#

Debtmap uses information theory to measure code complexity. Specifically, it calculates the entropy (unpredictability) of code patterns.

High entropy (0.7-1.0):

  • Diverse, unpredictable logic
  • Many different code paths doing different things
  • Actually complex—hard to reason about

Low entropy (0.0-0.3):

  • Repetitive patterns
  • Predictable structure
  • Mechanically complex but easy to understand
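The idea can be sketched as normalized Shannon entropy over each branch's "shape". This is an illustration only (debtmap's actual metric is internal, and the shape strings here are hypothetical stand-ins for something like hashed AST structure), but it shows why identical checks collapse toward 0.0 while diverse logic climbs toward 1.0:

```rust
use std::collections::HashMap;

// Illustrative sketch of entropy over branch "shapes": identical
// branches fall into one bucket, so repetitive code scores near 0.0,
// while many distinct branch structures score near 1.0.
fn pattern_entropy(branch_shapes: &[&str]) -> f64 {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for shape in branch_shapes {
        *counts.entry(shape).or_insert(0) += 1;
    }
    let n = branch_shapes.len() as f64;
    // Shannon entropy: -sum(p * log2(p)) over shape frequencies.
    let raw: f64 = counts
        .values()
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum();
    // Normalize by the maximum entropy for this many distinct shapes;
    // when every branch is identical, entropy is exactly 0.0.
    let max = (counts.len() as f64).log2();
    if max == 0.0 { 0.0 } else { raw / max }
}
```

Twenty copies of the same `is_none` check score 0.0; four completely different branches score 1.0.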

Let’s see this in action on our validation function (actual output from samples):

#2 SCORE: 3.10 [LOW]
├─ LOCATION: ./src/validation.rs:30 validate_config()
├─ IMPACT: -10 complexity, -1.0 risk
├─ COMPLEXITY: cyclomatic=21 (dampened: 10, factor: 0.50),
   est_branches=21, cognitive=6, nesting=1, entropy=0.33
├─ WHY THIS MATTERS: Repetitive validation pattern detected
   (entropy 0.33, 20 checks). Low entropy (0.33) indicates
   boilerplate, not complexity. Adjusted complexity: 21 → 13
   (reflects actual cognitive load).
├─ RECOMMENDED ACTION: Replace 20 repetitive validation checks
   with declarative pattern

Debtmap recognizes the repetitive pattern (entropy=0.33) and dampens the complexity by 50% (21 → 10), understanding that this is “boilerplate, not complexity.” Despite the high branch count, it gets a LOW priority score of 3.10.
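One way the recommended “declarative pattern” refactor could look (a hedged sketch: the `Config` fields are illustrative, and plain `String` errors stand in for the `anyhow::Result` used in the original function):

```rust
// Sketch of the declarative refactor: each required field becomes one
// row in a table, and a single loop replaces the 20 structurally
// identical if-blocks.
struct Config {
    output_dir: Option<String>,
    max_workers: Option<usize>,
    timeout_secs: Option<u64>,
}

fn validate_config(config: &Config) -> Result<(), String> {
    let required = [
        ("output_dir", config.output_dir.is_some()),
        ("max_workers", config.max_workers.is_some()),
        ("timeout_secs", config.timeout_secs.is_some()),
        // ... remaining fields follow the same row shape
    ];
    for (name, present) in required {
        if !present {
            return Err(format!("{name} required"));
        }
    }
    Ok(())
}
```

Cyclomatic complexity drops to 2 regardless of how many fields are validated, and adding a field is a one-line change.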

Compare with the state reconciliation function (actual output from samples):

#1 SCORE: 4.15 [MEDIUM]
├─ LOCATION: ./src/state_reconciliation.rs:81 reconcile_state()
├─ IMPACT: -4 complexity, -1.5 risk
├─ COMPLEXITY: cyclomatic=9 (dampened: 4, factor: 0.51),
   est_branches=9, cognitive=16, nesting=4, entropy=0.28
├─ WHY THIS MATTERS: Coordinator pattern detected with 4 actions
   and 2 state comparisons. Extracting transitions will reduce
   complexity from 9/16 to ~4/11.
├─ RECOMMENDED ACTION: Extract state reconciliation logic into
   transition functions

Debtmap identifies this as higher priority than the validation function, even though it has much lower cyclomatic complexity (9 vs 21). Debtmap detects:

  1. Coordinator pattern - 4 actions with 2 state comparisons (architectural smell)
  2. High cognitive complexity (16) - nested conditionals are genuinely hard to reason about
  3. Deep nesting (4 levels) - increases mental load
  4. Greater risk impact (-1.5) - complex state transitions are error-prone

The validation function has higher mechanical complexity, but the state reconciliation has higher cognitive complexity.
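The recommended extraction might look like the following sketch. The types and helpers are illustrative stand-ins for the original `State`/`Mode`/`Action` (not debtmap's actual suggested code); the point is that each transition becomes flat, single-purpose, and independently testable:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Mode { Online, Offline }

#[derive(Debug, PartialEq)]
enum Action { DrainConnections, FlushWrites, EstablishConnections, ScheduleRestart }

struct State {
    mode: Mode,
    active_connections: bool,
    pending_writes: bool,
    allows_reconnect: bool,
    requires_restart: bool,
}

// One transition function per concern; nesting drops from 4 levels
// to guard clauses with early returns.
fn mode_transition(current: &State, target: &State) -> Vec<Action> {
    if current.mode == target.mode {
        return vec![];
    }
    if current.active_connections {
        if target.mode != Mode::Offline {
            return vec![];
        }
        let mut actions = vec![Action::DrainConnections];
        if current.pending_writes {
            actions.push(Action::FlushWrites);
        }
        return actions;
    }
    if target.allows_reconnect {
        return vec![Action::EstablishConnections];
    }
    vec![]
}

// Stand-in for calculate_config_diff(..).requires_restart().
fn config_transition(_current: &State, target: &State) -> Vec<Action> {
    if target.requires_restart {
        vec![Action::ScheduleRestart]
    } else {
        vec![]
    }
}

// The coordinator shrinks to a flat sequence of transition calls.
fn reconcile_state(current: &State, target: &State) -> Vec<Action> {
    let mut actions = mode_transition(current, target);
    actions.extend(config_transition(current, target));
    actions
}
```

Each transition can now be unit-tested in isolation, which is exactly what makes the edge cases (mode changed, no active connections) visible.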

2. Coverage-Risk Correlation
#

Debtmap is unique in natively combining complexity analysis with test coverage to compute risk scores.

Why this matters:

  • Complex code with good tests = managed risk (lower priority)
  • Simple code without tests = low risk
  • Complex code without tests = CRITICAL GAP

Example output:

#5 SCORE: 18.4 [CRITICAL]
├─ LOCATION: src/analyzer.rs:805
├─ FUNCTION: parse_error_flow()
├─ COMPLEXITY: cyclomatic=21 (adj:12), cognitive=63, nesting=5
├─ TEST COVERAGE: 0%
├─ IMPACT: -10 complexity reduction, -6.4 risk reduction
└─ WHY THIS MATTERS: High complexity 21/63 makes function
   hard to test and maintain. Error handling without tests
   creates critical risk.

Debtmap prioritizes untested complex code over tested complex code, because untested complexity is what creates risk.
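The interaction can be sketched with a toy formula. This is not debtmap's actual scoring model, just an illustration of the principle: complexity alone is a maintenance cost, but uncovered complexity is what multiplies risk.

```rust
// Illustrative risk model: well-tested code keeps a small baseline
// cost, while untested code scales the full complexity into risk.
fn risk_score(cyclomatic: f64, cognitive: f64, coverage: f64) -> f64 {
    let complexity = (cyclomatic + cognitive) / 2.0;
    let untested = 1.0 - coverage.clamp(0.0, 1.0);
    complexity * (0.25 + 0.75 * untested)
}
```

Under this toy model, a complex function with 0% coverage scores four times higher than the same function fully covered, which is the ordering debtmap's prioritization reflects.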

3. Actionable Recommendations with Impact Quantification
#

Most tools tell you what is wrong. Debtmap tells you what to do and what impact it will have.

Traditional tools typically report:

Function 'process_request' has complexity 15 (threshold: 10)
Severity: Major

Debtmap instead provides:

#1 SCORE: 181 [CRITICAL]
└─ type_tracker/mod.rs (3096 lines, 106 functions)
└─ WHY: This struct violates single responsibility principle
   with 42 methods and 12 fields across 8 distinct responsibilities.
└─ ACTION: Split by data flow:
   1) Input/parsing functions
   2) Core logic/transformation
   3) Output/formatting

   RECOMMENDED SPLITS (2 modules):
   - mod_parsing.rs (6 methods, ~120 lines)
   - mod_utilities.rs (22 methods, ~440 lines)

   IMPLEMENTATION ORDER:
   [1] Start with lowest coupling modules
   [2] Move 10-20 methods at a time, test after each move
   [3] Keep original file as facade during migration

└─ IMPACT: Reduce complexity by 80%, improve testability,
   enable parallel development

That single report includes:

  • Specific location (file:line, function name)
  • Root cause analysis (8 responsibilities, god object)
  • Concrete refactoring plan (split into 2 modules)
  • Implementation strategy (order of operations)
  • Quantified impact (80% complexity reduction)
  • Expected benefits (testability, parallel development)

4. Context-Aware Analysis
#

Debtmap understands that code doesn’t exist in isolation:

Call Graph Analysis:

execute_workflow()
├─ Called by: 15 functions
├─ Calls: 8 functions
├─ Risk multiplier: 2.3x (widely used)
└─ Priority: CRITICAL (changes affect many callers)

Functions called from many places get higher priority—breaking them affects more of the codebase.
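A multiplier like this is plausibly logarithmic in caller count, since each additional caller adds less marginal risk than the last. The sketch below is hypothetical (debtmap's real formula is internal), but a curve of this shape lands near the 2.3x shown above for 15 callers:

```rust
// Hypothetical caller-count multiplier: 1 caller => 1.0x baseline,
// growing logarithmically as more of the codebase depends on the
// function.
fn risk_multiplier(caller_count: usize) -> f64 {
    1.0 + (caller_count.max(1) as f64).ln() / 2.0
}
```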

Pattern Detection:

#7 SCORE: 9.12 [CRITICAL]
└─ PathResolver::resolve_qualified_path()
└─ WHY: Function calls 15 methods on external objects,
   only 2 on self
└─ PATTERN: Feature Envy - consider moving to the objects
   being manipulated

Debtmap recognizes architectural anti-patterns like God Objects and Feature Envy, providing architectural insights beyond function-level analysis.

Example: What Debtmap Reports
#

Here’s what debtmap shows on a 120K LOC Rust codebase:

TOP 10 RECOMMENDATIONS

#1 SCORE: 181 [CRITICAL]
└─ type_tracker/mod.rs (3096 lines, 106 functions)
└─ WHY: God object with 42 methods across 8 responsibilities
└─ ACTION: Split into 2 focused modules (detailed plan)
└─ IMPACT: Reduce complexity by 80%

#2 SCORE: 108 [CRITICAL]
└─ detector.rs (2882 lines, 113 functions)
└─ ACTION: Split by analysis phase (detailed plan)

... (8 more prioritized items)

TOTAL DEBT SCORE: 1887
DEBT DENSITY: 15.7 per 1K LOC
OVERALL COVERAGE: 81.33%

Developer experience: “Okay, I’ll tackle #1 this sprint. Clear plan, measurable impact.”

The Debtmap Philosophy
#

Debtmap is designed to answer one question: “What technical debt should I tackle first?”

Unlike tools that flood you with hundreds of alerts, debtmap focuses on:

  • Quality over quantity - 10-20 high-impact items instead of 200+ noise
  • Risk-based prioritization - Combining complexity with test coverage
  • Actionable guidance - Specific refactoring plans, not just warnings
  • Measurable impact - Quantified risk reduction for each recommendation

Recommended Workflow
#

Debtmap is designed to complement your existing development workflow:

# 1. Local development loop (before commit)
cargo fmt                    # Format code
cargo clippy                 # Check idioms
cargo test                   # Run tests
debtmap analyze .            # Check for new debt

# 2. CI/CD pipeline (PR validation)
cargo test --all-features    # Full test suite
debtmap validate .           # Enforce debt thresholds

# 3. Weekly planning (prioritize work)
cargo tarpaulin --out lcov   # Generate coverage
debtmap analyze . --lcov target/coverage/lcov.info --top 20
# Review top 20 debt items, plan sprint work

# 4. Monthly review (track trends)
debtmap compare baseline.json current.json

Performance
#

Written in Rust with parallel processing:

  • 1-2 seconds for small projects (<10K LOC)
  • 20-30 seconds for large codebases (100K+ LOC)
  • 10-100x faster than Java/Python-based competitors

Try It Yourself
#

# Install debtmap
cargo install debtmap

# Analyze your project
debtmap analyze .

# With test coverage
cargo tarpaulin --out lcov
debtmap analyze . --lcov target/coverage/lcov.info

Resources
#


Debtmap is open source and free. No enterprise licenses, no usage limits, no lock-in. Just better code quality analysis for everyone.
