Unpublished Paper
Case studies in evaluating time series prediction models using the relative mean absolute error
  • Nicholas G Reich, University of Massachusetts - Amherst
  • Justin Lessler, Johns Hopkins University
  • Krzysztof Sakrejda, University of Massachusetts - Amherst
  • Stephen A Lauer, University of Massachusetts - Amherst
  • Sopon Iamsirithaworn
  • Derek A T Cummings, Johns Hopkins University
Statistical prediction models inform decision-making processes in many real-world settings. Before predictions are used in practice, candidate models must be rigorously tested and validated to ensure they achieve sufficient accuracy. In this paper, we present a framework for evaluating time series predictions that emphasizes computational simplicity and intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naive reference models, providing useful and standardized performance benchmarks. Additionally, in applications with multiple time series, the framework facilitates comparisons of one or more models' predictive performance across different sets of data. We illustrate the use of this metric with two case studies: (1) comparing predictions of the Dow Jones Industrial Average and NASDAQ stock indices, and (2) comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. These examples demonstrate the utility and interpretability of the relative mean absolute error metric in practice, and underscore the practical advantages of using relative performance metrics when evaluating predictions.
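The metric described above can be sketched in a few lines: the relative mean absolute error is the MAE of a candidate model's predictions divided by the MAE of a naive reference model's predictions on the same observations, so values below 1 favor the candidate. The sketch below uses illustrative numbers, not data from the paper, and adopts a last-observation-carried-forward forecast as the naive reference (one common choice; the paper's framework allows other reference models):

```python
def mae(predictions, observations):
    """Mean absolute error between paired predictions and observations."""
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(observations)

def relative_mae(model_predictions, naive_predictions, observations):
    """Relative MAE: candidate-model MAE divided by naive-reference MAE.

    Values < 1 mean the candidate outperforms the naive reference;
    values > 1 mean it underperforms.
    """
    return mae(model_predictions, observations) / mae(naive_predictions, observations)

# Illustrative time series (hypothetical values).
series = [10.0, 12.0, 11.0, 13.0, 14.0]

# One-step-ahead targets and a naive "last observation carried forward" forecast.
observations = series[1:]        # [12.0, 11.0, 13.0, 14.0]
naive_preds = series[:-1]        # [10.0, 12.0, 11.0, 13.0]

# Hypothetical candidate-model forecasts for the same four time points.
model_preds = [11.5, 11.5, 12.5, 13.5]

rel_mae = relative_mae(model_preds, naive_preds, observations)
print(rel_mae)  # 0.333..., i.e. the candidate's MAE is one third of the naive MAE
```

Here the naive MAE is 1.5 and the candidate MAE is 0.5, giving a relative MAE of about 0.33; the ratio is unitless, which is what makes comparisons across different time series straightforward.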
Citation Information
Nicholas G Reich, Justin Lessler, Krzysztof Sakrejda, Stephen A Lauer, et al. "Case studies in evaluating time series prediction models using the relative mean absolute error" (2015)