6 Common Pitfalls for Forecast Evaluation


A topic I covered last year in some talks and papers is the “6 common pitfalls for forecast evaluation”. I discuss the most typical mistakes that people new to forecasting make. This is relevant, for example, for Data Scientists who have specialised training in ML and Stats, but not in forecasting. It is a more lightweight take on the same topic covered in our considerably more detailed and formal full paper here.

I have a pdf with the slides and a recording (which regrettably does not show the slides) here, and I have also just uploaded the paper I wrote on the topic for the Foresight journal to my web page here. The Foresight paper is an easy read for practitioners.

The common pitfalls are the following (a small code sketch illustrating pitfalls 2–4 follows the list):

  1. Datasets too small / irrelevant
  2. Data leakage
  3. Not using adequate benchmarks
  4. Wrongly used or ad-hoc evaluation measures
  5. Reliance on forecast plots for evaluation
  6. Assumption that a forecast needs to be a realistic scenario
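To make pitfalls 2–4 a bit more concrete, here is a minimal sketch in Python. This is my own illustration for this post, not code from the talks or papers: it uses a temporal train/test split (shuffling a time series before splitting would be a form of data leakage), a seasonal-naive benchmark (a simple baseline that any proposed model should beat), and MASE as one example of a well-defined, scale-free evaluation measure.

```python
import numpy as np

def seasonal_naive(train, horizon, season_length):
    """Forecast by repeating the last observed season of the training data."""
    last_season = train[-season_length:]
    reps = int(np.ceil(horizon / season_length))
    return np.tile(last_season, reps)[:horizon]

def mase(actual, forecast, train, season_length):
    """Mean Absolute Scaled Error: the MAE of the forecast, scaled by the
    in-sample MAE of the seasonal-naive method on the training data."""
    mae = np.mean(np.abs(actual - forecast))
    scale = np.mean(np.abs(train[season_length:] - train[:-season_length]))
    return mae / scale

# Synthetic monthly series (illustrative data, not from the paper).
rng = np.random.default_rng(0)
y = 100 + 10 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 2, 120)

# Temporal split: hold out the last year, preserving the time order.
train, test = y[:-12], y[-12:]

benchmark = seasonal_naive(train, horizon=12, season_length=12)
print("Seasonal-naive MASE:", mase(test, benchmark, train, season_length=12))
```

A MASE below 1 here would mean a candidate method beats the seasonal-naive benchmark out of sample; reporting only an ad-hoc measure on a shuffled split, with no naive baseline, can make almost any model look good.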
