Valeriy’s Substack

🚨 ‘Forecast Evaluation, Cross-Validation, and the Hidden Leakage Problem’ has been published on Medium

Valeriy Manokhin
Sep 25, 2025

When evaluating forecasting models, most practitioners use metrics like RMSE or MAE. But competitions like M4 and M5 popularized scaled error metrics such as MASE (Mean Absolute Scaled Error) and RMSSE (Root Mean Squared Scaled Error). These have a big advantage: they normalize errors by the typical variation in each series, making scores comparable across series with different scales.
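
For concreteness, here is a minimal sketch of the two metrics as they are commonly defined: the scale is the in-sample error of a (seasonal) naive forecast on the training series. The function names and the seasonal-period argument `m` are mine, not taken from the competitions' official code.

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean Absolute Scaled Error: test MAE divided by the in-sample MAE
    of the (seasonal) naive forecast with lag m on the training series."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / scale

def rmsse(y_train, y_test, y_pred, m=1):
    """Root Mean Squared Scaled Error: the same idea with squared errors,
    as used in the M5 competition."""
    scale = np.mean((y_train[m:] - y_train[:-m]) ** 2)
    return np.sqrt(np.mean((y_test - y_pred) ** 2) / scale)
```

A score of 1.0 means the model does no better than the naive forecast did in-sample; dividing by a per-series scale is what makes scores comparable across series of different magnitudes.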

So far so good. But what happens when you want to use cross-validation (CV) with these metrics?

This is where things get subtle — and where you might accidentally introduce data leakage into your evaluation.
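
The full breakdown is in the paid post. As one illustration of where such leakage commonly arises (an assumption on my part, not a summary of the post): in rolling-origin CV with MASE, if the naive-error denominator is computed once over the entire series, observations from the validation windows influence the scale used to score every fold. The sketch below avoids this by recomputing the scale from the data before each forecast origin; `naive_forecast`, `rolling_origin_mase`, and the fold setup are illustrative names only.

```python
import numpy as np

def naive_forecast(y_train, h):
    # Illustrative baseline: repeat the last observed value h steps ahead.
    return np.repeat(y_train[-1], h)

def rolling_origin_mase(y, fit_predict, h=12, n_folds=5, m=1):
    """Rolling-origin CV with MASE, recomputing the scale per fold.

    The naive-error scale uses only observations before the forecast
    origin, so values from the validation window never enter the
    denominator of that fold's score.
    """
    n, scores = len(y), []
    for k in range(n_folds, 0, -1):
        origin = n - k * h
        y_train, y_test = y[:origin], y[origin:origin + h]
        y_pred = fit_predict(y_train, h)
        # A leaky variant would compute this scale from the full series y.
        scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
        scores.append(np.mean(np.abs(y_test - y_pred)) / scale)
    return float(np.mean(scores))

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # toy random-walk series
print(rolling_origin_mase(y, naive_forecast, h=10, n_folds=5))
```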
