Dealing with Uncertainties
Weather forecasts have matured substantially in providing reliable probabilistic predictions, with a useful quantification of forecast uncertainties. Including this information in the communication of forecasts and warnings, and integrating it into downstream models and decision-making processes has become increasingly common practice.
Making use of uncertainties implies not only the interpretation of ‘raw’ uncertainty information in ensemble forecasts, their post-processing and visualization, but also the integration of a wide range of non-meteorological aspects, such as vulnerability and exposure data to estimate risk, and the social, psychological and economic factors which affect human decision-making.
In this session, we aim to support a holistic perspective on issues that arise when making use of uncertainty information of weather forecasts in decision processes and applications.
We encourage contributions that investigate the application and interpretation of uncertainty information along any of, but not limited to, the following questions:
- How does the quality of the final decision depend on forecast uncertainty and uncertainty from non-meteorological parts of the decision process?
- Where, along the chain from raw forecast uncertainty to the final decision, do the largest uncertainties arise?
- How is the uncertainty information (e.g., from ensemble prediction systems, multi-models etc.) propagated through the production chain up to the final decision?
- How can we tailor information about forecast uncertainty, and its representation, to a given user group, decision process or application?
- How is uncertainty represented best in a given case (e.g., as ensemble members, PDFs, or worst/best case) to reduce complexity and computational or cognitive cost?
- How can we identify the most suitable representation for different user-groups and decision processes?
- How can we incorporate vulnerability and exposure data in a risk-based decision framework?
- How can we evaluate and quantify the value of uncertainty information for decision making in different contexts?
- What strategies help the end-user to interpret the uncertainty in forecasts when making informed decisions?
- What are the benefits of impact-based or risk-based forecasts and warnings in decision-making (including for disaster risk reduction)?
- How can the interaction between scientists and end-users help to overcome reservations about uncertainty forecasts?
- How can evidence on the sociological and psychological factors that affect the interpretation of forecast uncertainties be applied in weather communication?
- How do we convince weather service providers to include uncertainty information when faced with their concerns that people will not understand it or that it undermines confidence in their services?
Impact-based forecasts (IBF) combine information about the likelihood of severe weather events with information about potential impact severity to provide warnings based on local risk rather than the exceedance of meteorological thresholds. Having adopted IBF in 2011, the UK Met Office’s Severe Weather Warning Service uses a risk matrix to determine the warning level (i.e. yellow, amber or red). Within this system a red warning always denotes a high-impact event with a high probability of occurring. However, amber and yellow warnings can denote either high-probability lower-impact events or low-probability high-impact events. While the full risk matrix is provided to emergency responders, public weather warnings are issued using the warning colour only. In this project we investigated whether providing a public audience with additional information about probability and potential impact severity in different formats affects perception of weather risk, trust in the forecast and willingness to act.
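The risk-matrix logic described above can be sketched as a simple lookup. Note that the likelihood/impact categories and the colour assignments below are illustrative assumptions, not the Met Office's actual matrix; only the constraint that red requires both high impact and high likelihood is taken from the text.

```python
# Hypothetical risk-matrix lookup in the style described above.
# Categories and colour placements are illustrative, not official.
LIKELIHOOD = ["very low", "low", "medium", "high"]
IMPACT = ["very low", "low", "medium", "high"]

# MATRIX[impact_index][likelihood_index] -> warning colour
MATRIX = [
    ["green",  "green",  "green",  "green"],   # very low impact
    ["green",  "yellow", "yellow", "yellow"],  # low impact
    ["yellow", "yellow", "amber",  "amber"],   # medium impact
    ["yellow", "amber",  "amber",  "red"],     # high impact
]

def warning_colour(likelihood: str, impact: str) -> str:
    """Map a (likelihood, impact) pair to a warning colour."""
    return MATRIX[IMPACT.index(impact)][LIKELIHOOD.index(likelihood)]
```

In this sketch, amber is deliberately ambiguous: it arises both for high-likelihood medium-impact events and for low-likelihood high-impact events, which is exactly the ambiguity the study investigates.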
An online experiment was conducted with a sample of 550 UK residents through the Qualtrics market research panel. Participants were randomly assigned to one of three conditions: 1) warning colour only (i.e. reflecting current public weather warning communications); 2) warning colour with an accompanying statement about probability and impact; or 3) full risk matrix. Participants were presented with wind warnings for events classified as high-probability high-impact (HPHI Red), high-probability moderate-impact (HPMI Amber) and low-probability high-impact (LPHI Amber). Presentation order was randomised. For each warning, participants rated perceived risk (anticipated likelihood, anticipated severity, and concern), trust in the forecast, and likelihood of undertaking a protective response.
Repeated-measures Analysis of Variance tests found that anticipated likelihood, severity, concern, trust and protective intention were all significantly higher for HPHI Red warnings than HPMI Amber warnings, and all significantly higher for the HPMI Amber warnings than the LPHI Amber warnings. Multivariate Analysis of Variance tests found that, compared to those in the colour-only condition, those shown the risk matrix reported greater anticipated likelihood, severity and concern when shown the HPMI Amber warning, and greater perceived severity when shown the LPHI Amber warning. There was no effect of warning format on response to HPHI Red warnings.
Our findings demonstrate that red warnings evoke high perceived risk, trust and protective intention irrespective of whether any information about probability or impact severity is present. For amber warnings however, we find that low-probability high-impact events evoke lower perceived risk, trust and protective intention, than warnings for high-probability moderate-impact events. This may indicate a tendency for events explicitly classified as being ‘low likelihood’ to be disregarded. While a tendency to be less concerned about low-probability high-impact events may not be objectively ‘wrong’, it does highlight the need for further work to explore how warnings for low-probability high-impact should be communicated when precautionary action is desirable.
How to cite: Taylor, A. and Summers, B.: A Small Chance of a Something Terrible? Does providing additional information about likelihood and impact severity affect public responses to impact-based warnings?, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-173, https://doi.org/10.5194/ems2021-173, 2021.
Communicating the uncertainty associated with forecasts is crucial to effective crisis response management, but it is particularly challenging when time frames are too short to articulate the complexities of the information. However, not communicating uncertainties is problematic. For technical experts, interdependencies amongst event characteristics over time create evolving uncertainties that may eclipse those associated with modelled outcomes. For the public and emergency decision-makers, a lack of uncertainty awareness may result in future alternative courses of action not being identified and assessed, reducing the efficacy of decisions and action plans. Furthermore, revealing uncertainty can either increase or decrease the credibility and trustworthiness of the communicator. Some individuals will devalue a message when uncertainty is communicated, while others may devalue the message when they expect uncertainty and it has not been communicated. If we are to develop effective ways to communicate uncertainty in a crisis, research needs to understand the reasons for these differences.
Key influences include how perceptions of science, its uncertainty, and the scientific process act as a lens through which scientific information is interpreted. This lens can warp communicated information, particularly when uncertainty is high: during a crisis, people may not take appropriate safety actions based upon scientific advice if the message contradicts, or fails to accommodate, their existing perceptions of the science. Forecasts, warnings, and other communication products must address these existing perceptions if they are to be effective. These perceptions are represented in people’s mental models of how they think the world works, including their model of scientific processes, motivations, beliefs, and values, which vary across disciplines and organizations due to epistemic differences. We will report on the initial findings from a study that a) identifies the appropriate methodology to elicit mental models of science in public and professional populations, and b) uses this to explore the mental models of scientific uncertainty held by the public, emergency managers, scientists, engineers, and key decision-makers involved in hazard response. Our aim is to identify the shared concepts underlying these mental models, so forecast messaging can be effectively crafted to include uncertainty in a way that aligns with individuals’ mental models. Through this we offer strategies to enhance individual decision-making under uncertainty in ways that develop the trust that the public and decision-makers have in forecasts.
How to cite: Hudson-Doyle, E., Harrison, S., Hill, S., Williams, M., Paton, D., Bostrom, A., and Becker, J.: Eliciting mental models to understand how different individuals affected by disaster risk understand science, and scientific uncertainty, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-126, https://doi.org/10.5194/ems2021-126, 2021.
In the context of probabilistic forecasting, forecast users are known to often underestimate low probabilities. This underestimation may have severe consequences if people fail to take adequate precautions against high-impact events like storms.
One solution is to communicate higher probabilities by lowering the forecasts’ spatial resolution: the lower the resolution, the higher the probability that the event will occur within the area. At the same time, a lower resolution entails more uncertainty about where exactly the event could occur. Thus, whereas a lower forecast resolution may heighten forecast users’ risk perception through larger probabilities, an increase in spatial uncertainty could reverse this effect.
In an online experiment, we investigate the effects of forecast resolution and spatial uncertainty on risk perception and precautionary decisions (N = 149). For 12 probabilistic thunderstorm forecasts, participants (i) entered how likely they believed their location would be hit by a thunderstorm, and (ii) decided whether to host an outdoor event at that location at a risk of a high loss or cancel in advance at a smaller cost (blocks randomized).
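The decision task described above is an instance of the classic cost-loss problem: a risk-neutral participant should cancel whenever the expected loss from hosting exceeds the cancellation cost. A minimal sketch of this logic (the payoff values in the usage note are illustrative, not those used in the study):

```python
def should_protect(p_event: float, cost: float, loss: float) -> bool:
    """Cancel the event in advance (pay `cost`) if the expected loss from
    hosting, p_event * loss, exceeds the cancellation cost -- i.e. cancel
    whenever p_event > cost / loss."""
    return p_event * loss > cost
```

With an illustrative cancellation cost of 20 and a potential loss of 100, the rational threshold is p = 0.2: any believed probability above that favours cancelling, which is why underestimating low probabilities can flip the decision.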
We find that a lower forecast resolution significantly reduced how likely participants believed they were to be hit and how often they chose to protect against thunderstorms. At the same time, higher forecast probabilities increased participants’ risk perception and their likelihood of taking precautionary action. Furthermore, the interpretations of the forecast’s spatial reference, assessed in multiple-choice formats, were only a rough proxy for experimentally observed perceptions of risk.
The results constitute a starting point for investigating the trade-off between forecast probability and spatial uncertainty. They also reiterate that the spatial resolution forecasters choose for their products directly influences forecast users’ risk perception and behavior.
How to cite: Gubernath, J. and Fleischhut, N.: Spatial resolution of probabilistic forecasts and its impact on risk perception and precautionary action, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-373, https://doi.org/10.5194/ems2021-373, 2021.
One key strategy to fight climate change worldwide is to invest in renewable energy sources (RES) and increase their integration into the power system. In recent years, we observed how extreme weather conditions, together with growing penetration levels of RES, are increasingly affecting the power system operation and planning, as well as electricity markets. The inherent uncertainty of such events and the associated uncertainty in the power generation from RES can no longer be ignored by the energy industry. In other words, current deterministic methods have reached their limit due to the inherent inability to model and convey forecast uncertainties.
Probabilistic information and forecasts have been shown to improve decision-making in many weather-related processes. By dealing with uncertainties, the end-user takes on responsibility, but also gains the ability to know, and to calculate, what is at stake. Last but not least, knowing the uncertainty of an event in advance opens the possibility of acting upon that uncertainty rather than on the event itself, thereby mitigating costly side effects or helping to secure safety.
In 2020, the IEA Wind Task 36 “Wind Energy Forecasting” therefore started the initiative “Probabilistic Forecasting Games and Experiments” in collaboration with the Max Planck Institute for Human Development. The main goal of this initiative is to empirically investigate the psychological barriers to the adoption of probabilistic forecasts and to enable stakeholders to understand and explore their benefit and use. With the initiative, the IEA Wind Task 36 wants to establish interdisciplinary teams that promote testing and playing with forecast games and experiments, giving end-users a “feel” for where the hidden possibilities to improve decisions lie, and giving developers a platform to engage the energy and meteorology community in the development, deployment and communication of uncertainties of weather and energy forecasts to end-users for better decision making.
The task leaders have started to set up a platform with a list of forecasting games and experiments developed by the task, in cooperation with or by cooperating institutions, researchers or companies, and to invite others outside the task’s community to share links or data for games and experiments.
The initiative will be presented and the first experience with the task’s own games and experiments briefly discussed. The many open questions and considerations in working towards the establishment of training and educational tools for probabilistic forecasts will be formulated and posed to the meteorological and psychological/behavioural research communities, to enhance collaboration and establish a stronger link for this interdisciplinary work.
How to cite: Möhrlen, C., Bessa, R., and Giebel, G.: IEA Wind Task 36 “Probabilistic Forecasting Games and Experiments” Initiative, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-500, https://doi.org/10.5194/ems2021-500, 2021.
Effective communication of potential weather hazards and their uncertainty to the general public is key to preventing and mitigating negative outcomes from weather hazards. The general public needs effective tools at hand that allow them to make the best decision possible during a severe weather event. Currently, there are many approaches to weather forecast visualization, such as contour and thematic maps. However, guidelines and best practices in visualization can help to improve these designs and make them more effective [1, 2].
In this work, we present several interactive visual designs for mobile visualization of severe weather events for the communication of weather hazards, their risks, uncertainty, and recommended actions. Our approach is based on previous work on uncertainty visualization, cognitive science, and decision sciences for risk management [3, 4]. We propose six configurations that vary the ratio of text to graphics used in the visual display, and the interaction workflow needed for a non-expert user to make an informed decision and take effective action. Our goal is to test how efficient these configurations are and to what degree they are suitable for communicating weather hazards, associated uncertainty, risk, and recommended actions to non-experts. Future steps include two cycles of evaluation: a first pilot to rapidly test the prototype with a small number of participants, collect actionable insights, and incorporate potential improvements; and a second, crowd-sourced user study to extensively evaluate the visualization prototypes.
[1] A. Diehl, A. Abdul-Rahman, M. El-Assady, B. Bach, D. A. Keim, and M. Chen. VisGuides: A forum for discussing visualization guidelines. In Proceedings of the EuroVis Short Papers, pages 61–65, 2018.
[2] A. Diehl, E. E. Firat, T. Torsney-Weir, A. Abdul-Rahman, B. Bach, R. S. Laramee, R. Pajarola, and M. Chen. VisGuided: A community-driven approach for education in visualization. In Proceedings Eurographics Education Papers, to appear, 2021.
[3] N. Fleischhut and S. M. Herzog. Wie lässt sich die Unsicherheit von Vorhersagen sinnvoll kommunizieren? In Wetterwarnungen: Von der Extremereignisinformation zu Kommunikation und Handlung. Beiträge aus dem Forschungsprojekt WEXICOM, pages 63–81, 2019.
[4] G. Gigerenzer, R. Hertwig, E. van den Broek, B. Fasolo, and K. V. Katsikopoulos. “A 30% chance of rain tomorrow”: How does the public understand probabilistic weather forecasts? Risk Analysis: An International Journal, 25(3):623–629, 2005.
[5] I. Kübler, K.-F. Richter, and S. I. Fabrikant. Against all odds: multicriteria decision making with hazard prediction maps depicting uncertainty. Annals of the American Association of Geographers, 110(3):661–683, 2020.
[6] L. M. Padilla, I. T. Ruginski, and S. H. Creem-Regehr. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data. Cognitive Research: Principles and Implications, 2(1):1–16, 2017.
How to cite: Pandya, A., Popovic, N., Diehl, A., Ruginski, I., Fabrikant, S., and Pajarola, R.: Leveraging Different Visual Designs for Communication of Severe Weather Events and their Uncertainty, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-266, https://doi.org/10.5194/ems2021-266, 2021.
Forecasting the potential for flood-producing precipitation and any subsequent flooding is a challenging task; the process is highly non-linear and inherently uncertain. Acknowledging and accounting for the uncertainty in precipitation and flood forecasts has become increasingly important with the move to risk-based warning and guidance services which combine the likelihood of flooding with the potential impact on society and the environment.
A standard approach to accounting for uncertainty is to generate ensemble forecasts. Here the national Grid-to-Grid (G2G) model is coupled to a Best Medium Range (BMR) ensemble which consists of three models spanning different time horizons: an ensemble nowcast for the first 6h, which is blended with the short-range 2.2 km Met Office Global Regional Ensemble Prediction System (MOGREPS-UK) ensemble up to 36h and the ~20 km global MOGREPS-G up to day 6. The G2G model is driven by 15-minute accumulations on a 1 km grid.
Sixteen months of precipitation and river flow ensemble forecasts have been processed to develop and assess a joint verification framework that can facilitate the evaluation of the end-to-end forecasting chain. The analysis reached the following conclusions:
(1) Daily precipitation accumulations provide the best guidance in terms of rain volume for hydrological impacts, possibly because they remove the impact of timing errors at the sub-daily scale. However, sub-daily precipitation can be more closely related to river flow on an ensemble member-by-member basis.
(2) Observation uncertainty is important. The same forecasts verified against three different observed precipitation sources (raingauge, radar or merged) can yield markedly different results and interpretations. G2G river flow performance can also be affected when driven by these datasets rather than forecasts.
(3) The change in precipitation intensity with model is evident and has an impact on downstream modelling and verification.
(4) The period used for ensemble verification should be at least two years. The 16-month test period was sufficient for generating enough precipitation threshold-exceedances for the 95th percentile, but insufficient for higher thresholds and for river flow thresholds above half the median annual maximum flood at sub-regional scales.
(5) A new method of presenting Time-Window Probabilities (TWPs) has been developed for precipitation thresholds that are hydrologically relevant. Verification shows that these probabilities are larger and more reliable, so that users can have greater confidence in them.
(6) Overall precipitation forecast skill was far more uniform than for river flow, primarily because the atmosphere is a continuum whilst catchments are finite and subject to external, non-atmospheric factors including antecedent moisture.
(7) Though G2G can be sensitive to precipitation outliers, the precipitation ensemble is generally under-spread, and this spread does not appear to amplify or propagate to enhance the river flow ensemble spread; spread is reduced rather than increased in the downstream application.
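To illustrate the time-window idea, a TWP can be computed as the fraction of ensemble members that exceed a precipitation threshold at least once anywhere within a sliding window. This is a minimal sketch of the concept, not the operational method used with G2G:

```python
def time_window_probs(members, threshold, window):
    """Time-window probabilities: for each window start position, the
    fraction of ensemble members whose precipitation exceeds `threshold`
    at least once within the window. `members` is a list of per-timestep
    accumulation series, one series per ensemble member."""
    n_steps = len(members[0])
    probs = []
    for start in range(n_steps - window + 1):
        hits = sum(
            any(x > threshold for x in m[start:start + window])
            for m in members
        )
        probs.append(hits / len(members))
    return probs
```

Because a window needs only one exceedance anywhere inside it, TWPs are systematically larger than probabilities at a fixed time, consistent with finding (5) that the resulting probabilities are larger and more usable.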
How to cite: Mittermaier, M., Anderson, S., Crocker, R., Cole, S., Moore, R., and Cole, S.: End-to-end verification of the ensemble precipitation-to-river-flow forecasting chain: how to maximise skill for the user and does the uncertainty propagate?, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-490, https://doi.org/10.5194/ems2021-490, 2021.
At MET Norway, a small team - the Sandbox - spends part of its working hours trying to improve the communication between meteorologists and the audience. At the moment the Sandbox consists of seven forecasters, researchers, and communication advisors with backgrounds in both the natural and social sciences. Over the past five years, this has proven to be an effective format for addressing interdisciplinary and cross-departmental challenges. For example, the team has worked on ideas related to establishing an editorial team and a podcast. Suggestions from the Sandbox are delivered directly to the leadership, who then decide what to follow up on.
This year, the team is working with two well-known and related challenges to the meteorological community: i) how can we help the meteorologists in the transition from mainly using deterministic model data, to mainly using probabilistic data from ensemble prediction systems?, and ii) how can we aid and develop more effective communication with respect to forecast uncertainty?
The work is set up as a one-year project, with a mix of two-day meetings working with a specific topic, and monthly two-hour follow-up meetings in between. A thematic plan related to uncertainties was agreed upon at the start of 2021 and covers a varied but coherent set of challenges, and includes a set of concrete aims and deliverables, and how to actually work together (with respect to methods as well as the Covid-19 restrictions). For the two-day meetings, the selected topics were 1. Insight into the existing mind-sets and tools/data the forecasters have, 2. Communication and language, 3. Technical insight in strengths and weaknesses of ensemble models and products, and 4. Communication and visualization. The main deliverables of the project are to provide a toolbox and a repository of knowledge regarding the operational use of ensemble data and communication of uncertainty. The toolbox and repository will be available for forecaster training and operations, as well as form a basis to develop future research and outreach activities.
At the EMS Annual Meeting in September 2021 we will reflect on the working methods that the Sandbox uses, and present first outcomes of the project, e.g. more concrete information about what the toolbox and repository of knowledge will look like.
How to cite: Sivle, A. D., Jeuring, J., Svehagen, M.-L. F., Ovhed, M., Hagen, B., Gislefoss, K., and Borander, P.: The Sandbox: How can we help forecasters to deal with uncertainties?, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-81, https://doi.org/10.5194/ems2021-81, 2021.
Within the forecasting community, it is a common approach to make use of meteorological “conceptual models” to synthesize the key processes of a weather situation. These models come with general physical rules that can help to understand the options, particularly in a high-impact situation with low predictability. At the outset, this helps to build an appropriate scenario in the decision process.
Forecast uncertainty is most often considered through a “multi-model” approach, taking into account synoptic- and convective-scale model outputs to highlight the main patterns of the uncertainty sources. This subset of model solutions is often called a “poor man’s ensemble”. Ensembles provide plenty of relevant information: not only classical probabilities, quantiles and meteograms, but also more subjective visualisations such as postage-stamp or spaghetti plots, which are easy for forecasters to interpret. In the end, all of this information is combined to produce, or mentally design, the most likely scenario.
Forecasters’ experience includes a kind of subjective pseudo-climatology in which each forecaster can find references: making use of analogues to understand the meteorological context and the behaviour of the numerical systems in terms of systematic errors, but also to address the consequences linked with vulnerability, and to learn from the reactions of different end-users facing uncertainty. Especially with civil protection, forecasters are trained to develop practical strategies in contexts of strong uncertainty.
Within an operational forecasting team, real-time decisions can be affected by high stress levels, as well as collective or individual cognitive biases in uncertainty interpretation. We give here some illustrations from past high impact forecasting situations.
How to cite: Guillemot, F.: How is uncertainty represented best to reduce complexity or cognitive cost in high-impact events., EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-350, https://doi.org/10.5194/ems2021-350, 2021.
In its guidelines on ensemble prediction, the World Meteorological Organization (WMO) warns against ignoring uncertainty in forecasts, even if an end-user receives a deterministic forecast. The WMO argues that “...if a forecaster issues a deterministic forecast the underlying uncertainty is still there, and the forecaster has to make a best guess at the likely outcome. Unless the forecaster fully understands the decision that the user is going to make based on the forecast, and the impact of different outcomes, the forecaster's best guess may not be well tuned to the real needs of the user”.
It is this gap in the basic understanding of the uncertainty inherent in forecasts that leads to wrong assumptions among end-users with little or no experience in basic meteorology or atmospheric science. Mistrust in forecasts and forecasting methods, including uncertainty methodologies, often stems from a wrong expectation of the quality of forecasts for a specific problem. If uncertainty forecasts are to find their way into the power industry's weather-related decision making, a deeper understanding is required of weather uncertainty, of the way weather services produce uncertainty in weather forecasts, and of how such forecasts are to be translated into end-user applications.
In order to shed some light on the gaps and pitfalls of uncertainty forecasts in the energy sector, we present a use case and highlight some of the many advantages of applying uncertainty forecasts in power system applications.
The use case is a probabilistic warning system for a system operator in an area with high penetration of wind power, where regularly occurring high wind speeds (>20 m/s) lead to the shutdown of wind turbines over large areas of the transmission grid, causing large costs due to reserve requirements if such events are not predicted a number of days in advance.
The high-speed shutdown warning system will be explained and practical experience from the evaluation of high-speed shutdown events will be discussed. The alert system provided to a transmission system operator showed that it is absolutely crucial for end-users to understand the alerts and to receive the information in different forms (text and graphics), in order to relate the warnings to the impact they may have on grid planning. The impact of a false alarm needs to be carefully evaluated, decided upon and documented in the design phase, so that end-users have a clear reference system against which to relate an alert. Technically, the frequency of alert generation is easy to adjust, but difficult to design efficiently from a human perspective. It is crucial not to overload end-users with warnings.
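The core triggering logic of such a system can be sketched as follows. The 20 m/s cut-out speed comes from the use case above; the 40% alert probability threshold is an illustrative assumption, since the abstract stresses that this threshold must be tuned to the end-user's false-alarm tolerance:

```python
# Minimal sketch of a probabilistic high-wind-shutdown alert: flag a lead
# time when the fraction of ensemble members at or above the turbine
# cut-out speed passes an alert threshold.
CUT_OUT = 20.0   # m/s, wind speed at which turbines shut down (from text)
ALERT_P = 0.4    # illustrative probability threshold for issuing an alert

def shutdown_alerts(ensemble):
    """`ensemble` maps lead time (h) -> list of member wind speeds (m/s).
    Returns the lead times for which an alert should be issued."""
    alerts = []
    for lead, speeds in sorted(ensemble.items()):
        p = sum(s >= CUT_OUT for s in speeds) / len(speeds)
        if p >= ALERT_P:
            alerts.append(lead)
    return alerts
```

Raising `ALERT_P` reduces the alert frequency (fewer false alarms, more misses), which is exactly the human-factors trade-off the abstract says must be decided and documented in the design phase.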
The many technical and behavioural communication aspects that need consideration in such a design and our experience with a real application will be described and discussed.
How to cite: Möhrlen, C.: Communicating warnings for extreme events: use-case of a high-speed shutdown warning system for a system operator with high wind power penetration, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-495, https://doi.org/10.5194/ems2021-495, 2021.
Ensemble Prediction Systems (EPSs) are now run routinely by many global weather centres but, despite the enormous potential these forecasts offer, their perceived complexity has long presented a barrier to effective adoption by many users; limiting the opportunity for early decision-making by industry. To facilitate the interpretation of a set of (potentially seemingly contradictory) forecasts, a sensible approach is to turn the prediction into a binary (yes/no) forecast by applying a user-relevant operational weather limit – with the decision to proceed with or postpone an operation based on whether a certain proportion of the members predict un-/favourable conditions. However, the question then remains as to how the appropriate probability threshold to achieve an optimum decision can be objectively defined. Here, we present two approaches for simplifying the interpretation of ensemble (probabilistic) ocean wave forecasts out to 15 days ahead, as pioneered – in operation – in Summer 2020 to support the recent weather sensitive installation of the first phase of a 36 km subsea pipeline in the North Sea. Categorical verification information was constructed from 1460 archive wave forecasts, issued for the two-year period 2017 to 2018, and used to characterise the past performance of the European Centre for Medium-Range Weather Forecasts (ECMWF) EPS in the form of Receiver Operating Characteristic and Relative Economic Value analysis. These data were then combined with a bespoke parameterization of the impact of adverse weather on the planned operation, allowing the relevant go/no-go ensemble probability threshold for the interpretation of future forecasts to be determined. Trials on an unseen nine-month period of data from the same site (Spring to Autumn 2019) confirm the approaches facilitate a simple technique for processing/interpreting the ensemble forecast, able to be readily tailored to the particular decision being made. 
The use of these methods achieves a considerably greater value (benefit) than equivalent deterministic (single) forecasts or traditional climate-based options at all lead times up to 15 days ahead, promising a more robust basis for effective planning than typically considered by the offshore industry. This is particularly important for tasks requiring early identification of long weather windows (e.g. offshore pipeline installation), but similarly relevant for maximising the exploitation of any ensemble forecast – by any sector – providing a practical approach for how such data are handled and used to promote safe, efficient and successful operations.
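For a user whose only quantities are the cost of postponing and the loss incurred by proceeding into adverse conditions, the go/no-go rule described above reduces to comparing the ensemble exceedance probability against the cost/loss ratio. This is a deliberately simplified sketch of that decision rule; the operational threshold in the study was instead derived from ROC and Relative Economic Value analysis of archived forecasts:

```python
# Simplified cost-loss version of the go/no-go decision on an ensemble
# wave forecast. All parameter values in the usage below are illustrative.
def go_decision(member_wave_heights, limit, cost, loss):
    """Proceed with the marine operation iff the ensemble probability of
    exceeding the operational wave-height limit is below cost/loss,
    i.e. expected loss from proceeding is below the cost of postponing."""
    n = len(member_wave_heights)
    p_exceed = sum(h > limit for h in member_wave_heights) / n
    return p_exceed < cost / loss
```

A cautious user (postponing is cheap relative to the loss) gets a low probability threshold and postpones often; a user with expensive standby costs gets a higher threshold, which mirrors how the go/no-go threshold in the study is tailored to the particular operation.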
How to cite: Mylne, K., Steele, E., Brown, H., Bunney, C., Gill, P., Saulter, A., and Standen, J.: Using probabilistic forecasts to effectively enhance operational marine industry decision-making, EMS Annual Meeting 2021, online, 6–10 Sep 2021, EMS2021-237, https://doi.org/10.5194/ems2021-237, 2021.