We Extended Our Walk-Forward Window from 6 to 15 Months. Here's What We Found.
We extended our walk-forward factor validation from 6 to 15 months on NSE SmallCap 250. Signal got stronger — IC improved, stability increased, and two new persistent factors emerged that a shorter window couldn't detect.
Today, we demoed our SmallCap Catalyst model to a seasoned institutional investor — someone who's spent over 15 years across public markets, venture capital, and private equity, including stints at some of Asia's most rigorous capital allocators.
He knows how to evaluate investment processes. We walked him through our regime detection, factor scoring, and the gradual exit logic that moves positions from full weight to exit over multiple quarters.
He liked the regime stability. He liked the governance overrides.
Then he asked a question that stopped us: "Why is your forward test window only 6 months when your exit logic takes 2-3 quarters to play out?"
He wasn't questioning the model. He was questioning whether we'd tested it on the same time horizon we were asking investors to commit to.
An allocator's question, not a quant's question.
And he was right — we hadn't tested it.
The Problem We Didn't Know We Had
Our SmallCap Catalyst model scores every stock in the NIFTY SmallCap 250 on fundamentals — ROE, ROCE, revenue growth, promoter quality, dividend yield, valuations.
Factor weights change with the market regime.
Positions don't exit in a single step — they decay through FULL → REDUCED → MINIMAL → WATCH → EXIT, requiring two consecutive periods of deterioration before any downgrade.
That decay logic is deliberate.
It prevents whipsaws, reduces turnover, and avoids tax-inefficient knee-jerk exits.
In backtesting, it cut turnover by 47% versus binary exits with identical returns.
But it also creates path dependency.
A position entering the decay path at the start of a quarter might take 2-3 quarters to fully exit.
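The decay path above behaves like a small state machine. Here is a hypothetical sketch of that logic — the state names come from this post, but the field names, update cadence, and deterioration signal are our illustrative assumptions, not the production code:

```python
from dataclasses import dataclass

# Decay states in order; a position can only move one step at a time.
STATES = ["FULL", "REDUCED", "MINIMAL", "WATCH", "EXIT"]

@dataclass
class Position:
    state: str = "FULL"
    weak_streak: int = 0  # consecutive review periods of score deterioration

    def update(self, score_deteriorated: bool) -> str:
        """Downgrade one step only after two consecutive weak periods."""
        if score_deteriorated:
            self.weak_streak += 1
            if self.weak_streak >= 2 and self.state != "EXIT":
                self.state = STATES[STATES.index(self.state) + 1]
                self.weak_streak = 0  # streak resets after a downgrade
        else:
            self.weak_streak = 0  # any recovery resets the streak
        return self.state
```

Under this sketch, a straight-line decay from FULL to EXIT needs four downgrades at two weak periods each — at least eight review periods, which with monthly reviews (an assumption) lands in the 2-3 quarter range described above.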
Our walk-forward validation — 15 months of training, 6 months of testing, rolling every 3 months — was testing whether our factors predicted the next 6 months of returns.
It was never testing whether they predicted returns through a full portfolio cycle.
We were validating entries.
We'd never validated the complete hold.
What We Did
We ran both configurations on identical data, with the same bias protections:
- 15+6 (our existing setup): 15-month train, 6-month test, 3-month roll. 11 windows.
- 15+15 (the extended test): 15-month train, 15-month test, 6-month roll. 4 windows.
We widened the roll to 6 months for the 15+15 configuration so the test windows wouldn't overlap too heavily.
Both configurations use a 45-day disclosure lag — quarterly results are only used after they would have been publicly available.
No lookahead bias.
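As a sanity check on the window counts above, here is a minimal sketch of how the two roll schedules produce 11 and 4 windows. It works in month offsets rather than calendar dates; the ~51-month data span, the function names, and the disclosure-lag helper are illustrative assumptions consistent with the reported counts:

```python
from datetime import date, timedelta

def walk_forward_windows(total_months, train, test, roll):
    """Return (train_start, train_end, test_start, test_end) month offsets."""
    windows, start = [], 0
    while start + train + test <= total_months:
        windows.append((start, start + train, start + train, start + train + test))
        start += roll
    return windows

# Same assumed ~51-month history for both configurations.
cfg_15_6 = walk_forward_windows(total_months=51, train=15, test=6, roll=3)    # 11 windows
cfg_15_15 = walk_forward_windows(total_months=51, train=15, test=15, roll=6)  # 4 windows

def usable_from(result_date, lag_days=45):
    """A quarterly result only enters the model after the disclosure lag."""
    return result_date + timedelta(days=lag_days)
```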
The core metric is Information Coefficient (IC): the Spearman rank correlation between factor scores at the start of the test window and 21-day forward returns.
We track mean IC, standard deviation, Information Ratio (mean IC / std IC), hit rate, and persistence (mean IC > 0.03 with hit rate > 60%).
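The metrics above can be sketched in a few lines of numpy. This is an illustration, not our validation code: the Spearman computation below ignores ties for simplicity, the persistence thresholds come from this post, and treating hit rate as the share of positive-IC windows is our assumption:

```python
import numpy as np

def spearman_ic(scores, fwd_returns):
    """Spearman rank correlation between start-of-window factor scores
    and 21-day forward returns (tie handling omitted for brevity)."""
    rank_s = np.argsort(np.argsort(scores))
    rank_r = np.argsort(np.argsort(fwd_returns))
    return np.corrcoef(rank_s, rank_r)[0, 1]

def summarize(ics):
    """Aggregate per-window ICs into the headline metrics."""
    ics = np.asarray(ics)
    mean_ic, std_ic = ics.mean(), ics.std(ddof=1)
    hit_rate = (ics > 0).mean()  # share of windows with positive IC
    return {
        "mean_ic": mean_ic,
        "std_ic": std_ic,
        "ir": mean_ic / std_ic,  # Information Ratio
        "hit_rate": hit_rate,
        "persistent": mean_ic > 0.03 and hit_rate > 0.60,
    }
```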
We expected the signal to weaken.
Over 15 months, a lot can change — earnings revisions, sector rotations, macro shifts.
If anything, we thought we'd need to shorten the rebalancing cycle.
We were wrong.
The Signal Gets Stronger
| Metric | 15+6 (11 windows) | 15+15 (4 windows) |
|---|---|---|
| Overall Mean IC | +0.0061 | +0.0082 |
| IC Standard Deviation | 0.0233 | 0.0148 |
| Persistent Factors | 3 | 5 |
The overall IC improved from +0.006 to +0.008.
More importantly, the IC standard deviation dropped 36% — from 0.023 to 0.015.
The signal isn't just maintained over 15 months.
It's more stable.
We went in expecting to defend the 6-month window.
We came out with evidence that the 15-month horizon is actually the better test.
Where the Real Surprise Was
The factor-level results explain why.
| Factor | 15+6 IC | 15+15 IC | 15+15 IR | What happened |
|---|---|---|---|---|
| Revenue QoQ | +0.042 | +0.081 | 1.69 | Strongest factor at 15 months |
| Promoter Holding | +0.002 | +0.058 | 1.21 | From noise to persistent signal |
| ROE | +0.046 | +0.057 | 0.69 | Stronger at 15 months |
| ROCE | +0.033 | +0.043 | 0.88 | Stronger at 15 months |
| Dividend Yield | +0.033 | +0.039 | 0.61 | Consistent across both |
The three factors we already knew — ROE, ROCE, Dividend Yield — held up and got stronger.
That was reassuring but not surprising.
Quality compounds.
High-ROE small-caps continue to outperform as earnings accumulate and the market gradually re-rates.
The surprise was what emerged at 15 months that was invisible at 6.
Revenue QoQ became our strongest factor — IC of +0.081 with an Information Ratio of 1.69.
At 6 months, it was decent (+0.042) but had an inconsistent hit rate of 55%.
At 15 months, hit rate jumped to 75%.
Quarterly revenue momentum in small-caps takes time to express.
The market underreacts to sequential revenue improvement.
Two quarters of acceleration is a trend; five quarters is a re-rating.
The 6-month window only saw the beginning.
The 15-month window captured the payoff.
Promoter Holding went from zero to signal.
At 6 months, IC was +0.002 — literal noise.
At 15 months, it's +0.058 with an IR of 1.21.
This was the finding we didn't expect.
The absolute level of promoter ownership doesn't predict next quarter's return.
It predicts which companies survive and compound over a year.
Promoters with high conviction hold through volatility.
That conviction isn't priced in over 6 months, but it is over 15.
This is particularly significant for Indian small-caps where promoter behavior is the single most important governance signal.
We already penalize promoter exits through our governance overrides.
Now we have evidence that promoter conviction — the absolute holding level — is independently predictive at longer horizons.
What Doesn't Work at Either Horizon
Not everything survived. A few factors are worth flagging:
- D/E Ratio (-0.051 at 15+6, -0.045 at 15+15): Leverage doesn't predict returns in this universe. This contradicts textbook factor investing but makes sense — many SmallCap 250 companies are growth-stage businesses where debt funds expansion.
- FII Holding (-0.022 at 15+6, -0.066 at 15+15): Foreign institutional ownership is negatively predictive and worsens at longer horizons. High FII stocks are consensus — already at fair value.
- DII Holding (+0.001 at 15+6, -0.058 at 15+15): Same story. Institutional crowding is a risk factor, not an alpha source, in small-caps.
The institutional flow changes (FII Change at +0.056, DII Change at +0.045 over 15 months) show promise but miss the persistence threshold, with hit rates of only 50%.
Flow momentum might have signal, but it's not consistent enough for us to include.
Window-Level View
The four 15+15 windows span genuinely different markets:
| Window | Test Period | Mean IC | Context |
|---|---|---|---|
| 0 | Oct 2022 – Jan 2024 | -0.004 | Post-correction recovery |
| 1 | Apr 2023 – Jul 2024 | -0.007 | Broad rally, then rotation |
| 2 | Oct 2023 – Jan 2025 | +0.030 | Small-cap outperformance, then correction |
| 3 | Apr 2024 – Jul 2025 | +0.014 | Volatile — elections, global uncertainty |
The earlier windows (0, 1) show flat-to-slightly-negative IC.
The later windows (2, 3) are positive.
This is expected — our fundamental data coverage was thinner in 2022 (fewer quarters of history per stock).
As the dataset deepened, the signal emerged.
The strategy explicitly requires 8 quarters of financial history.
When that requirement is fully met, the factors predict.
What We Changed
Three practical changes to our model based on these findings:
- We've adopted 15+15 as our primary validation benchmark. The 6-month results remain valid for screening, but portfolio-level validation now uses the 15-month forward window. This matches our actual holding period.
- Revenue QoQ and Promoter Holding get higher weight. Both were already in our scoring, but the 15-month evidence justifies increasing their influence — particularly in risk-on and neutral regimes where positions are held longest.
- We've confirmed the decay exit logic. The gradual exit system was designed on the thesis that conviction deterioration should be confirmed over multiple periods. The 15-month IC stability validates this: factors that predict at 15 months reward patience, not speed.
The Takeaway
If you run a factor model that holds positions for more than a quarter, ask yourself: does your validation window actually span a complete portfolio cycle?
If your walk-forward test is shorter than your average holding period, you're testing half a thesis.
In our case, the investor's question didn't break the model.
It made it stronger — and surfaced two persistent factors that a 6-month window structurally couldn't detect.
Match your validation horizon to your actual portfolio cycle.
Anything less is a partial test pretending to be a complete one.