When “Fair” Isn’t Necessary: Why RevDog Moved Away from Sector-Relative Scoring

RevDog tested sector-relative vs. absolute scoring in its V2→V3 transition. With a rank correlation above 0.95 across 111 stocks, the results showed that normalization mattered far less than regime-aware factor gating, leading to a simpler, more robust V3 design.

One of the more subtle design decisions in quantitative stock analysis is how signals are normalized.

It rarely shows up in performance charts, yet it shapes rankings, factor behavior, and how models react under stress. As RevDog evolved from V2 to V3, one such decision became unavoidable:
Should stocks be scored relative to their sector peers, or against absolute ranges?

This post explains why we moved away from sector-relative normalization in V3—and why, after testing, we chose not to bring it back.


The Two Approaches We Tested

V2: Sector-Relative Scoring

In V2, most factors were normalized within sector peer groups:

How good is this company compared to others in the same sector?

Technically, this meant:

  • Z-scores based on sector medians and dispersion
  • A 20% ROE in Banks ≠ a 20% ROE in Chemicals
  • Sector context was always embedded in the score

This approach has an intuitive appeal. It feels “fair.”
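
For the mechanically minded, here is a minimal Python sketch of sector-relative scoring, assuming a pandas DataFrame with hypothetical sector and roe columns (the median/MAD choice and the column names are illustrative, not V2's exact implementation):

    import pandas as pd

    def sector_relative_score(df: pd.DataFrame, metric: str) -> pd.Series:
        """Score each stock against its sector peers (V2-style, illustrative)."""
        grouped = df.groupby("sector")[metric]
        median = grouped.transform("median")
        # Median absolute deviation as a robust stand-in for sector dispersion.
        dispersion = grouped.transform(lambda s: (s - s.median()).abs().median())
        return (df[metric] - median) / dispersion.replace(0, float("nan"))

    # A 20% ROE scores differently in Banks vs. Chemicals, because each
    # sector supplies its own median and dispersion.
    df = pd.DataFrame({
        "ticker": ["A", "B", "C", "D"],
        "sector": ["Banks", "Banks", "Chemicals", "Chemicals"],
        "roe":    [0.20,    0.12,    0.20,        0.10],
    })
    df["roe_score_v2"] = sector_relative_score(df, "roe")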


V3: Absolute Scoring

In V3, we simplified normalization using fixed absolute ranges:

Is this metric objectively strong, regardless of sector?

For example:

  • ROE mapped to a fixed 0–30% range
  • ROCE mapped to a fixed 0–40% range
  • A 20% ROE means the same thing everywhere

This removes peer-dependence and aligns more naturally with regime-aware logic.
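
A comparable Python sketch of the absolute approach, using the fixed ranges listed above (the 0–1 output scale and the function name are assumptions for illustration):

    def absolute_score(value: float, low: float, high: float) -> float:
        """Map a metric onto a fixed absolute range, clipped to [0, 1] (V3-style, illustrative)."""
        if high <= low:
            raise ValueError("high must exceed low")
        return min(max((value - low) / (high - low), 0.0), 1.0)

    # Fixed ranges, independent of sector: ROE over 0-30%, ROCE over 0-40%.
    roe_score  = absolute_score(0.20, 0.00, 0.30)   # 0.67, whether in Banks or Chemicals
    roce_score = absolute_score(0.20, 0.00, 0.40)   # 0.50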


What We Were Worried About

Dropping sector-relative scoring raises a legitimate concern:

Are we unfairly penalizing sectors that structurally earn lower returns on capital?

For instance:

  • Consumer Durables and Chemicals often run lower ROE
  • FMCG structurally earns very high ROE
  • Absolute scoring could “over-reward” FMCG and “under-reward” others

Rather than debate this philosophically, we tested it.


What the Data Actually Showed

We compared V2 (sector-relative) and V3 (absolute) rankings across:

  • 111 stocks
  • 18 sectors
  • Current production universe

The headline result was striking:

Rank correlation: 0.953

In plain terms:

  • The two systems produced nearly identical orderings
  • The system’s worldview did not change

Average rank movement was ~7–8 positions—noticeable, but not disruptive.
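
The comparison itself is straightforward to reproduce. Here is a Python sketch of the kind of check involved, assuming Spearman rank correlation and hypothetical score_v2 / score_v3 columns:

    import pandas as pd

    def compare_rankings(df: pd.DataFrame) -> tuple[float, float]:
        """Return (rank correlation, mean absolute rank movement) for two score columns."""
        rank_v2 = df["score_v2"].rank(ascending=False)
        rank_v3 = df["score_v3"].rank(ascending=False)
        corr = df["score_v2"].corr(df["score_v3"], method="spearman")
        avg_move = (rank_v2 - rank_v3).abs().mean()
        return corr, avg_move

    # Run over the 111-stock production universe, this kind of check produced
    # the ~0.95 correlation and ~7-8 position average movement reported above.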


Where Differences Did Appear (and Why)

The differences were systematic, not random.

FMCG stocks ranked higher under absolute scoring

Examples: DOMS, Jyothy Labs, Emami

Why?

  • FMCG median ROE is ~25%
  • Absolute ranges reward this directly
  • Sector-relative scoring compresses them back to “average”

Consumer Durables & Chemicals ranked lower

Examples: Crompton, Campus, Bayer Crop

Why?

  • Sector medians are closer to ~14–17% ROE
  • Sector-relative scoring boosts them
  • Absolute scoring treats “good for the sector” as merely “okay”

This wasn’t noise. It was a design choice made visible.
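
A quick worked illustration of the compression effect, using the ~25% FMCG median ROE quoted above and an assumed sector dispersion of 5 percentage points (the dispersion figure is purely illustrative):

    # A 25% ROE in FMCG, where the sector median itself is ~25%:
    roe = 0.25
    sector_relative = (roe - 0.25) / 0.05       # z ~ 0: merely "average" versus peers
    absolute = min(max(roe / 0.30, 0.0), 1.0)   # ~0.83 of the fixed 0-30% range: objectively strong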


Why We Still Chose Absolute Scoring

The key realization was this:

Normalization matters less than regime logic.

In V3, the primary decision is not how a factor is scaled—it’s whether the factor is trusted at all.

  • In Risk-Off regimes, ROE and ROCE are gated entirely
  • In Risk-On regimes, absolute strength matters more than peer fairness
  • Regime-aware factor selection dominates normalization effects

Once factor gating is in place, sector-relative normalization adds complexity without proportional benefit.
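
Here is a minimal Python sketch of what that gating looks like ahead of any normalization (the regime labels, factor names, and weights are illustrative, not RevDog's production values):

    # Which factors are allowed to matter in each regime (illustrative weights).
    FACTOR_GATES = {
        "risk_off": {"roe": 0.0, "roce": 0.0, "debt_to_equity": 1.0, "cash_flow_stability": 1.0},
        "risk_on":  {"roe": 1.0, "roce": 1.0, "debt_to_equity": 0.5, "cash_flow_stability": 0.5},
    }

    def gated_score(factor_scores: dict[str, float], regime: str) -> float:
        """Combine already-normalized factor scores, keeping only what the regime trusts."""
        gates = FACTOR_GATES[regime]
        weighted = sum(gates[f] * s for f, s in factor_scores.items() if f in gates)
        total = sum(gates[f] for f in factor_scores if f in gates)
        return weighted / total if total else 0.0

    # In Risk-Off, ROE/ROCE carry zero weight, so how they were normalized is moot.
    score = gated_score({"roe": 0.83, "roce": 0.50, "debt_to_equity": 0.70,
                         "cash_flow_stability": 0.90}, regime="risk_off")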


The Institutional Lens

From a professional PM’s perspective, the question is simple:

Does added complexity materially improve decisions?

In this case:

  • Ranking stability remained extremely high
  • Differences were explainable and stable
  • Regime logic drove outcomes far more than normalization choice

So we chose the simpler, more robust model.


The Design Principle We Locked In

RevDog now follows a clear hierarchy:

  1. Regime determines which signals are allowed to matter
  2. Normalization only fine-tunes signals that already passed the regime filter
  3. Complexity must earn its keep

Sector-relative scoring did not meet that bar at this stage.
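
Expressed as code, the hierarchy is simply an ordering of steps. A Python sketch reusing the illustrative gates and ranges from above (not RevDog's actual pipeline):

    def score_stock(metrics: dict[str, float], regime: str) -> float:
        """Regime gate first, normalization second (illustrative only)."""
        # 1. Regime determines which signals are allowed to matter.
        allowed = {"risk_off": ["debt_to_equity"], "risk_on": ["roe", "roce"]}[regime]
        # 2. Normalization only fine-tunes the factors that survived the gate
        #    (debt_to_equity uses an inverted range so lower leverage scores higher).
        ranges = {"roe": (0.0, 0.30), "roce": (0.0, 0.40), "debt_to_equity": (2.0, 0.0)}
        scores = []
        for factor in allowed:
            low, high = ranges[factor]
            scores.append(min(max((metrics[factor] - low) / (high - low), 0.0), 1.0))
        return sum(scores) / len(scores)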


Could This Change in the Future?

Possibly—but only under one condition:

If sector-relative normalization demonstrably improves regime-conditioned outcomes, not just fairness optics.

Until then, absolute scoring remains the cleaner, more disciplined choice.


Closing Thought

Quant systems don’t fail because they lack sophistication.
They fail because they carry unnecessary sophistication.

In moving from V2 to V3, we didn’t remove nuance—we removed fragility.
And in a regime-aware framework, robustness matters more than elegance.


Appendix: V2 vs V3 Normalization — Correlation & Impact Summary

This appendix documents the empirical comparison between sector-relative normalization (V2) and absolute range normalization (V3), and why the latter was retained.
