Proper Scoring Rules for Multivariate Probabilistic Forecasts
based on Aggregation and Transformation
Romain Pic1, Clément Dombry1, Philippe Naveau2, and Maxime Taillardat3
1Université de Franche Comté, CNRS, LmB (UMR 6623), F-25000 Besançon, France
2Laboratoire des Sciences du Climat et de l’Environnement, UMR 8212, CEA-CNRS-UVSQ, EstimR, IPSL & U
Paris-Saclay, Gif-sur-Yvette, France
3CNRM, Université de Toulouse, Météo-France, CNRS, Toulouse, France
July 2, 2024
Abstract
Proper scoring rules are an essential tool to assess the predictive performance of probabilistic forecasts.
However, propriety alone does not ensure an informative characterization of predictive performance and
it is recommended to compare forecasts using multiple scoring rules. With that in mind, interpretable
scoring rules providing complementary information are necessary. We formalize a framework based on
aggregation and transformation to build interpretable multivariate proper scoring rules. Aggregation-and-transformation-based scoring rules are able to target specific features of probabilistic forecasts, which improves the characterization of the predictive performance. This framework is illustrated through
examples taken from the literature and studied using numerical experiments showcasing its benefits. In
particular, it is shown that it can help bridge the gap between proper scoring rules and spatial verification
tools.
1 Introduction
Probabilistic forecasting makes it possible to issue forecasts that carry information about the prediction uncertainty. It
has become an essential tool in numerous applied fields such as weather and climate prediction (Vannitsem
et al., 2021; Palmer, 2012), earthquake forecasting (Jordan et al., 2011; Schorlemmer et al., 2018), electricity
price forecasting (Nowotarski and Weron, 2018) or renewable energies (Pinson, 2013; Gneiting et al., 2023)
among others. Moreover, it is slowly reaching fields further from "usual" forecasting, such as epidemiological prediction (Bosse et al., 2023) or breast cancer recurrence prediction (Al Masry et al., 2023). In weather
forecasting, probabilistic forecasts often take the form of ensemble forecasts in which the dispersion among
members captures forecast uncertainty.
The development of probabilistic forecasts has induced the need for appropriate verification methods.
Forecast verification fulfills two main purposes: quantifying how good a forecast is given the available observations, and allowing one to rank different forecasts according to their predictive performance. Scoring rules
provide a single value to compare forecasts with observations. Propriety is a property of scoring rules that
encourages forecasters to follow their true beliefs and that prevents hedging. Proper scoring rules make it possible to
assess calibration and sharpness simultaneously (Winkler, 1977; Winkler et al., 1996). Calibration is the
statistical compatibility between forecasts and observations. Sharpness refers to the concentration of the forecast distribution and is a property of the forecast alone. Propriety is a necessary property of good scoring rules, but it does not guarantee that a scoring rule
provides an informative characterization of predictive performance. In univariate and multivariate settings,
numerous studies have shown that no single scoring rule captures every aspect of predictive performance, and thus, different scoring rules should be used
to get a better understanding of the predictive performance of forecasts (see, e.g., Scheuerer and Hamill
2015; Taillardat 2021; Bjerregård et al. 2021). With that in mind, Scheuerer and Hamill (2015) "strongly
recommend that several different scores be always considered before drawing conclusions." This amplifies
the need for numerous complementary proper scoring rules that are well-understood to facilitate forecast
verification. In that direction, Dorninger et al. (2018) state that "gaining an in-depth understanding of
forecast performance depends on grasping the full meaning of the verification results." Interpretability of
proper scoring rules can arise from being induced by a consistent scoring function for a functional (e.g.,
the squared error is induced by a scoring function consistent for the mean; Gneiting 2011), knowing what
aspects of the forecast the scoring rule discriminates (e.g., the Dawid-Sebastiani score only discriminates
forecasts through their mean and variance; Dawid and Sebastiani 1999) or knowing the limitations of a certain proper scoring rule (e.g., the variogram score is incapable of discriminating two forecasts that only differ
by a constant bias; Scheuerer and Hamill 2015). In this context, interpretable proper scoring rules become
verification methods of choice as the ranking of forecasts they produce can be more informative than the
ranking of a more complex but less interpretable scoring rule. Section 2 provides an in-depth explanation of
this in the case of univariate scoring rules. It is worth noting that interpretability of a scoring rule can also
arise from its decomposition into meaningful terms (see, e.g., Bröcker 2009). This type of interpretability
can be used complementarily to the framework proposed in this article.
Scheuerer and Hamill (2015) proposed the variogram score to target the verification of the dependence
structure. The variogram score of order $p$ ($p > 0$) is defined as
$$\mathrm{VS}_p(F, y) = \sum_{i,j=1}^{d} w_{ij} \left( \mathbb{E}_F\big[ |X_i - X_j|^p \big] - |y_i - y_j|^p \right)^2,$$
where $X_i$ is the $i$-th component of the random vector $X \in \mathbb{R}^d$ following $F$, the $w_{ij}$ are nonnegative weights, and $y \in \mathbb{R}^d$ is an observation. The construction of the variogram score relies on two main principles. First, the variogram score is a weighted sum of scoring rules acting on the distribution of $X_{i,j} = (X_i, X_j)$ and on paired components $y_{i,j}$ of the observations. This aggregation principle allows the combination of proper scoring rules and summarizes them into a proper scoring rule acting on the whole distribution $F$ and observations $y$. Second, the scoring rules composing the weighted sum can be seen as a standard proper scoring rule applied to transformations of both forecasts and observations. Denoting by $\gamma_{i,j} : x \mapsto |x_i - x_j|^p$ the transformation related to the variogram of order $p$, the variogram score can be rewritten as
$$\mathrm{VS}_p(F, y) = \sum_{i,j=1}^{d} w_{ij}\, \mathrm{SE}\big( \gamma_{i,j}(F), \gamma_{i,j}(y) \big),$$
where $\mathrm{SE}(F, y) = (\mathbb{E}_F[X] - y)^2$ is the univariate squared error and $\gamma_{i,j}(F)$ is the distribution of $\gamma_{i,j}(X)$ for $X$ following $F$. This second principle is the transformation principle: it allows the construction of transformation-based proper scoring rules that benefit both from the interpretability arising from the transformation (here, the variogram transformation $\gamma_{i,j}$) and from the simplicity and interpretability of the proper scoring rule they rely on (here, the squared error).
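To make the equivalence of the two forms above concrete, here is a minimal sketch in Python/NumPy (not from the paper): the expectation $\mathbb{E}_F$ is approximated by an ensemble mean over $m$ members, and the uniform weights $w_{ij} = 1$, the function names, and the toy Gaussian ensemble are illustrative assumptions.

```python
# Minimal sketch of the variogram score, assuming an ensemble approximation of F
# (E_F replaced by the ensemble mean) and, purely for illustration, uniform
# weights w_ij = 1. Function names and toy data are ours, not from the paper.
import numpy as np

def variogram_score_direct(ens: np.ndarray, y: np.ndarray, p: float = 0.5) -> float:
    """Direct form: sum_{i,j} w_ij * (E_F[|X_i - X_j|^p] - |y_i - y_j|^p)^2."""
    # Pairwise p-th power differences, averaged over the m ensemble members.
    ens_vario = np.mean(np.abs(ens[:, :, None] - ens[:, None, :]) ** p, axis=0)
    obs_vario = np.abs(y[:, None] - y[None, :]) ** p
    return float(np.sum((ens_vario - obs_vario) ** 2))  # w_ij = 1 for all (i, j)

def squared_error(samples: np.ndarray, obs: float) -> float:
    """Univariate squared error SE(F, y) = (E_F[X] - y)^2, with E_F as a sample mean."""
    return (np.mean(samples) - obs) ** 2

def variogram_score_transform(ens: np.ndarray, y: np.ndarray, p: float = 0.5) -> float:
    """Transformation form: SE applied to gamma_{i,j}(x) = |x_i - x_j|^p."""
    d = y.shape[0]
    total = 0.0
    for i in range(d):
        for j in range(d):
            gamma_ens = np.abs(ens[:, i] - ens[:, j]) ** p  # samples from gamma_{i,j}(F)
            gamma_obs = abs(y[i] - y[j]) ** p               # gamma_{i,j}(y)
            total += squared_error(gamma_ens, gamma_obs)
    return total

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 4))  # m = 50 ensemble members, dimension d = 4
y = rng.normal(size=4)          # one multivariate observation
assert np.isclose(variogram_score_direct(ens, y), variogram_score_transform(ens, y))
```

The assertion holds by construction: the transformation view is a rewriting of the original definition, so the interpretability of the transformation $\gamma_{i,j}$ carries over to the aggregated score.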
We review univariate and multivariate proper scoring rules through the lens of interpretability, mentioning their known benefits and limitations. We formalize these two principles of aggregation and
transformation to construct interpretable proper scoring rules for multivariate forecasts. To illustrate the use
of these principles, we provide examples of transformation-and-aggregation-based scoring rules from both the
literature on probabilistic forecast verification and quantities of interest. We conduct a simulation study to
empirically demonstrate how transformation-and-aggregation-based scoring rules can be used. Additionally,
we show how the aggregation and transformation principles can help bridge the gap between the proper
scoring rules framework and the spatial verification tools (Gilleland et al., 2009; Dorninger et al., 2018).
The remainder of this article is organized as follows. Section 2 gives a general review of verification
methods for univariate and multivariate forecasts. Section 3 introduces the framework of proper scoring
rules based on transformation and aggregation for multivariate forecasts. Section 4 provides examples of
transformation-and-aggregation-based scoring rules, including examples from the literature. Then, Section 5
showcases through different simulation setups how the framework proposed in this article can help build
interpretable proper scoring rules. Finally, Section 6 provides a summary as well as a discussion on the
verification of multivariate forecasts. Throughout the article, we focus on spatial forecasts for simplicity.
However, the points made remain valid for any multivariate forecasts, including temporal or spatio-temporal forecasts.
2 Overview of verification tools for univariate and multivariate forecasts