The GEAT department welcomes Dr. Harold Brooks, National Severe Storms Laboratory, as our next seminar speaker.
Title: Things we need to think about as we move to high resolution forecasts
When: Tuesday, October 10 @ 4:10 pm
Where: 2050 Agronomy Hall
Bio: Dr. Brooks is a research meteorologist and Senior Scientist in the Forecast Research and Development Division at the National Severe Storms Laboratory (NSSL) in Norman, Oklahoma. He majored in physics and math at William Jewell College, graduating in 1982, and spent a year at the University of Cambridge studying archaeology and anthropology. He earned master’s degrees in Atmospheric Sciences from Columbia University and a Ph.D. in Atmospheric Sciences from the University of Illinois at Urbana-Champaign. After graduating from Illinois, he was a National Research Council Research Associate at NSSL and joined the permanent staff there in 1992. During his career, his work has focused on why, when, and where severe thunderstorms occur and what their effects are. He has been an author on two IPCC Assessment Reports and a US Climate Change Science Program report on extreme weather. In 2012, he organized the Weather Ready Nation workshop to identify scientific priorities for severe weather forecasting. He received the United States Department of Commerce’s Silver Medal in 2002 for his work on the distribution of severe thunderstorms in the United States, the NOAA Administrator’s Award in 2007 for work on extreme weather and climate change, and the Daniel L. Albritton Award for Outstanding Science Communicator in 2012 from NOAA’s Oceanic and Atmospheric Research. He is a Fellow of the American Meteorological Society.
Abstract: The challenges to creating high-resolution forecasts over a wide range of time scales are relatively limited; current and near-future technology makes this capability a near certainty. As the National Weather Service transitions toward such high-resolution forecasts in time and space, there is significant momentum to use that technological capability to overhaul the current forecast structure, particularly for forecasts of “high-impact” weather. It is important to consider the current status of the hazardous weather forecasting system and how changes might affect users of the system. I’ll discuss a few of the important issues.
For many hazards, there are two primary stages of forecast information: the watch and the warning (in US terminology). Watches and warnings should not be thought of as arbitrary definitions imposed by the bureaucratic infrastructure. Rather, the distinction between them can be interpreted as physically based, constrained by the scale of the forecasts. For a weather event of interest, watches tend to be on time and space scales greater than the typical scales of the event, while warnings are on scales comparable to the lifecycle of the event. As an example, tornado and severe thunderstorm watches last for several hours and cover tens of thousands of square kilometers, while warnings last for tens of minutes and cover a few hundred square kilometers, on the order of the temporal and spatial extent of an individual severe thunderstorm. The implications of this spatiotemporal difference are significant. The watch scale can be thought of as the scale on which to assess threat and prepare, while the warning scale is the scale on which protective action is likely to be needed. As a result, it is also likely that forecasts should be expressed differently on the two scales: I would speculate that spatial probabilities may be appropriate on the watch and longer scales, while temporal probabilities may be more appropriate on the warning scale.
As changes are contemplated in the basic nature of forecast delivery, it’s critical to consider the difference between the quality and the value of forecasts, as discussed by Murphy in his 1993 essay on forecast goodness. Quality involves the relationship between the forecast and the observed event, while value measures the benefits or costs to forecast users making decisions on the basis of the forecast. Importantly, improving the quality of a forecast does not necessarily improve its value and may, in fact, lower it. As we develop the capability to provide more information to a variety of users, care must be taken in how that information is conveyed. There is no reason to believe, a priori, that more information is better for all users. Frequently updated forecasts, particularly those with high spatial gradients, can be expected to cause great confusion for many users. Nor should we expect users to be particularly sophisticated in their use of forecast information, given the large variety of other influences on their decision-making processes.
Fundamentally, our understanding of how forecasts are currently used is weak and often based on anecdote. Changing a system to “make it better” without even cursory knowledge of the current state of the system is suspect. In addition, many current metrics of forecast performance do not align with what is likely to matter for the value of the forecasts and may, in fact, be orthogonal to it. Finally, changing products distributed to the public without testing them prior to distribution (e.g., storm-based warnings, “impact-based” warnings) can lead to even greater problems than continuing the status quo. Radically changing products without testing opens the possibility of having to change them again, creating confusion and lowering trust.