Please note that this session was withdrawn and is no longer available in the respective programme. This withdrawal might have been the result of a merge with another session.

ST4.3
The definition of Skill Scores and other Verification Metrics to assess the potential value of existing research models for Space Weather user-oriented services
Convener: Ishii Mamoru | Co-conveners: Jens Berdermann, Margit Haberreiter, David R. Jackson, Juha-Pekka Luntama

Near-real-time verification metrics are highly important for monitoring the performance of operational space weather services (whether based on numerical models, data analysis tools or other approaches). As the use of such services has grown, so has activity in developing appropriate verification metrics, often adopting techniques already used in Numerical Weather Prediction. These metrics are primarily used for monitoring, but their use to help decide whether a new model (or an upgrade to an existing model or analysis tool) should be adopted for operational use has to date been quite limited. In particular, skill scores can be especially effective when the existing operational system is used as a benchmark. In this session we invite contributions that discuss the latest results in this field, and we particularly welcome results of recent intercomparison studies carried out on space weather forecasts made by different models and techniques.
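
For illustration only (not part of the original session description): a minimal sketch, in Python, of the kind of benchmark-relative evaluation mentioned above, using a standard mean-squared-error skill score of a candidate research model against the existing operational forecast. The function names and the verification data below are hypothetical and chosen purely to show the calculation.

import numpy as np

def mse(forecast, observed):
    # Mean squared error between a forecast series and observations
    return float(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))

def skill_score(candidate, benchmark, observed):
    # MSE-based skill score of a candidate model relative to a benchmark:
    # 1.0 = perfect candidate, 0.0 = no improvement over the benchmark,
    # negative = worse than the benchmark.
    return 1.0 - mse(candidate, observed) / mse(benchmark, observed)

# Hypothetical verification sample (e.g. a geomagnetic index over six intervals)
obs         = [3.0, 4.0, 5.0, 6.0, 4.0, 3.0]
research    = [3.2, 4.1, 4.7, 5.8, 4.3, 3.1]   # candidate research model
operational = [2.5, 4.5, 4.0, 5.0, 4.8, 3.5]   # existing operational benchmark

print(f"Skill score vs. operational benchmark: {skill_score(research, operational, obs):.2f}")

A positive score in this sketch would indicate that the candidate model outperforms the operational benchmark on the verification sample, which is the kind of evidence the session seeks for deciding whether a research model merits operational adoption.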