🚨 Forecast Evaluation, Cross-Validation, and the Hidden Leakage Problem
When evaluating forecasting models, most practitioners reach for metrics like RMSE or MAE. But competitions like M4 and M5 popularized scaled error metrics such as MASE (Mean Absolute Scaled Error) and RMSSE (Root Mean Squared Scaled Error). These have a big advantage: they scale each series' forecast error by the in-sample error of a naive benchmark forecast, making scores comparable across series with very different scales.
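For concreteness, here is a minimal NumPy sketch of how the two metrics are typically computed (the function names and the non-seasonal default `m=1` are my assumptions, not the official M4/M5 reference implementations):

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the
    in-sample MAE of the (seasonal) naive forecast y_t = y_{t-m}."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / scale

def rmsse(y_train, y_test, y_pred, m=1):
    """Root Mean Squared Scaled Error: the squared-error analogue
    used in the M5 competition."""
    scale = np.mean((y_train[m:] - y_train[:-m]) ** 2)
    return np.sqrt(np.mean((y_test - y_pred) ** 2) / scale)
```

Note that the scaling denominator is estimated on the training (in-sample) data. That detail is exactly what becomes delicate once cross-validation enters the picture.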
So far so good. But what happens when you want to use cross-validation (CV) with these metrics?
This is where things get subtle — and where you might accidentally introduce data leakage into your evaluation.
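Since the details sit behind the paywall, here is only a hypothetical sketch of one plausible form of the leak, assuming it comes from where the scaling denominator is computed: inside a rolling-origin CV loop, scaling by the naive error of the full series (instead of only the data up to each fold's cutoff) lets the validation points influence their own score. The helper names and the CV setup below are illustrative, not from the post:

```python
import numpy as np

def naive_scale(y, m=1):
    # In-sample MAE of the seasonal naive forecast y_t = y_{t-m}
    return np.mean(np.abs(y[m:] - y[:-m]))

def cv_mase(y, horizon, n_folds, forecast_fn, leaky=False):
    """Rolling-origin CV scored with MASE. With leaky=True the scale
    is computed on the FULL series, so each fold's denominator has
    already seen the validation window -- the hidden leakage."""
    scores = []
    for fold in range(n_folds):
        cutoff = len(y) - (n_folds - fold) * horizon
        y_train, y_val = y[:cutoff], y[cutoff:cutoff + horizon]
        y_pred = forecast_fn(y_train, horizon)
        scale = naive_scale(y) if leaky else naive_scale(y_train)
        scores.append(np.mean(np.abs(y_val - y_pred)) / scale)
    return float(np.mean(scores))
```

A quick way to see the effect: plug in a naive baseline such as `lambda y_tr, h: np.repeat(y_tr[-1], h)` as `forecast_fn` and compare `leaky=True` against `leaky=False`; on trending or regime-shifting series the two scores can diverge noticeably.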