The Quixotic Quest for the Perfect Weatherman

Don’t blame the weatherman for wrong forecasts. Blame chaos theory.
A hydro-meteorological radar of the weather forecast service Meteo-France in Vars, French Alps, on June 26, 2015. (Jean-Pierre Clatot/AFP/Getty Images)
Jonathan Zhou

Like so many things in the modern world, the origins of scientific weather forecasting lay in the Renaissance. In 1450, the German mathematician Nicholas of Cusa first wrote down a description of the hygrometer, a scale that measures the amount of moisture in the air. Thirty years later, Leonardo da Vinci built a rough prototype of the device.

Nearly 200 years later, the Italian Evangelista Torricelli invented the barometer, which measures atmospheric pressure. The 19th century saw a leap forward in weather prediction as telegraphs enabled meteorological observations to be relayed across continents in real time, and computers in the mid-20th century ushered in the large-scale number-crunching forecasting techniques that have come to dominate the profession.

The centuries-old craft is still being perfected today.

NASA’s Latest Efforts

NASA's DC-8 in the final stages of prep for its first flight in 2012. (NASA/Jeremy Harbeck)

NASA is in the middle of the OLYMPEX project near Seattle, deploying a panoply of measurement instruments to gather precise data: radars, weather balloons, and even a DC-8 flying laboratory that flies through the clouds. Other gauges are collecting data from the ground, even imaging and counting individual raindrops and snowflakes, to document as minutely as possible what different kinds of precipitation look like.

All of the data will be used to verify the rain and snowfall observations made by the Global Precipitation Measurement satellite system, and to test if the assumptions meteorologists make in interpreting those observations are actually correct.

With better calibrated data, meteorologists will be able to make better forecasts.

But don’t expect complaints about the weatherman to go away. The inherent unpredictability of Mother Nature probably won’t be conquered in the coming decades—if ever.

Numerical Weather Prediction

While NASA is working to collect more data, there is some irony here: when Numerical Weather Prediction (NWP), the system presently used for day-to-day forecasting, was first conceived in the 1920s, the problem was not too little information but too much.

NWP models weather systems by breaking an area into a grid of square cells, then deducing how each cell is influenced by the environmental variables of the adjacent cells: temperature, humidity, wind speed, and the like. Those values are run through differential equations based on the laws of physics and fluid dynamics to produce forecasts of future weather conditions.
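
The idea is easier to see in miniature. The sketch below is purely illustrative and is not real forecast code: it steps a single temperature field forward on a square grid, updating each cell from the values of its neighbors through a simple diffusion equation, with the grid size, time step, and parameters all invented for the example.

```python
# Toy illustration of the grid idea behind NWP (not real forecast code):
# a single temperature field on a square grid is stepped forward in time
# using only the values in each cell's neighbors, here via a simple
# diffusion equation solved with finite differences.
import numpy as np

def step(temp, diffusivity=0.1, dt=1.0, dx=1.0):
    """Advance the temperature grid one time step using neighboring cells."""
    # Finite-difference Laplacian: how each cell compares with its four neighbors.
    laplacian = (
        np.roll(temp, 1, axis=0) + np.roll(temp, -1, axis=0) +
        np.roll(temp, 1, axis=1) + np.roll(temp, -1, axis=1) -
        4.0 * temp
    ) / dx**2
    return temp + dt * diffusivity * laplacian

# Invented initial conditions: a warm anomaly in the middle of a 50x50 grid.
grid = np.zeros((50, 50))
grid[25, 25] = 10.0

for _ in range(100):                  # march 100 time steps into the "future"
    grid = step(grid)

print(round(float(grid[25, 25]), 3))  # the anomaly has spread to its neighbors
```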

When Lewis Fry Richardson, the inventor of NWP, first tried applying the technique, it took him six weeks to make (terribly inaccurate) forecasts six hours into the future.

For NWP to be successfully implemented, Richardson imagined thousands of technicians, filling up a whole theater, manually performing calculations in sync, the entire group coordinated by a single man “like the conductor of an orchestra.” Richardson’s symphony of “slide rules and calculating machines” mercifully never became a reality, as the task was outsourced to IBM mainframes.

Elusive Omniscience

Although computational power grew through the decades, weather forecasting was still far from converging on omniscience, and in the 1960s the meteorologist Edward Lorenz formulated a theory for why it never would.

In 1961, while running a simplified numerical weather model, Lorenz rounded one variable from .506127 to .506, producing drastic, unexpected changes in the resulting forecasts. The lesson of this episode, that seemingly insignificant factors can exert a decisive influence on the outcomes of large systems, laid the groundwork for chaos theory.

In 1972, Lorenz gave a talk titled “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” introducing “the butterfly effect” to the popular lexicon.
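
The effect is easy to reproduce with the simple convection model Lorenz later became famous for. The sketch below is illustrative only: the rounding mirrors the .506127-to-.506 episode, but the other starting values and the crude integration scheme are choices made for the example.

```python
# A sketch of the sensitivity Lorenz stumbled on: run his three-variable
# convection model twice, from starting points that differ only by the
# rounding described above, and watch the two runs drift apart.
# Illustrative only: crude Euler integration, classic Lorenz parameters.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (0.506127, 1.0, 1.0)   # "full precision" start (y and z values are invented)
b = (0.506, 1.0, 1.0)      # the rounded version

for i in range(3001):
    if i % 1000 == 0:
        # the tiny initial gap grows by orders of magnitude
        print(i, round(a[0] - b[0], 4))
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```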


“Chaos is an inherent property of the atmosphere, of nature itself, you can’t get around it. That has implications for limits of predictability,” said David Gold, a senior forecaster at Weather Decision Technologies Inc.

At present, Gold said, the best NWP forecasts are reliable only about one week ahead, after which the accuracy starts to break down as a result of errors in the initial conditions fed into the model.

“At a certain point in the future, the forecast isn’t going to be any good. It doesn’t matter what you do, nonlinearity has very serious consequences,” said Gold.

It’s possible that the NASA mission could offer marginal improvements to data collection efforts, Gold said, but don’t expect the moon. However, a “not unreasonable” hope is that in the coming years, day 8 predictions will be as reliable as day 5 predictions are today.

Combating Chaos

Aside from fine-tuning the precision of initial observations, meteorologists have sought to combat the chaotic side of weather systems with statistics.

Ensemble forecasting runs many slight variations on the initial set of conditions to create a range of results, partly compensating for the large forecast errors that can grow from a minuscule error in the initial conditions: a wide spread among the forecasts suggests that the prediction is highly uncertain, and vice versa.
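
In miniature, the approach looks something like the sketch below, which is illustrative only; the toy chaotic model, the size of the perturbations, and the number of members are all invented for the example.

```python
# A toy version of ensemble forecasting (illustrative only): run the same
# simple chaotic model from many slightly perturbed starting points and
# treat the spread of the results as a measure of forecast confidence.
import random

def toy_run(x, y, z, steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 equations and return the final x value."""
    for _ in range(steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    return x

random.seed(0)
ensemble = []
for _ in range(20):                            # 20 ensemble members
    # Perturb the starting state by a small random amount (the "observation error").
    x0 = 0.5 + random.gauss(0, 0.001)
    ensemble.append(toy_run(x0, 1.0, 1.0, steps=1500))

mean = sum(ensemble) / len(ensemble)
spread = (sum((m - mean) ** 2 for m in ensemble) / len(ensemble)) ** 0.5
print(f"ensemble mean {mean:.2f}, spread {spread:.2f}")
# A tight spread suggests a trustworthy forecast; a wide spread warns that
# this lead time has already dissolved into chaos.
```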

A big-picture perspective, in terms of both space and time, also has much to offer forecasting, Gold said. A much better reading of large-scale precipitation structures, such as rainfall patterns over the Indian Ocean, where large errors are often made, has the potential to produce “big gains” in forecasting, because errors at those scales degrade the forecast more slowly.

Finally, neural networks, the cutting-edge artificial intelligence that has recently made headlines for feats from writing your email replies to copying the painting styles of masters, could theoretically be fed historical data sets and learn how to correct for the biases in forecasting models. But their usefulness is conditional: the climate would have to remain stable, and we would need to invent a time machine.
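
In outline, such a correction might look like the sketch below, which is only a cartoon of the idea: the historical record is fabricated, and a simple least-squares fit stands in for the neural network, but the principle is the same, learning from past forecast and observation pairs how to undo a model's systematic bias.

```python
# A toy sketch of learning a bias correction from historical data.
# Everything here is invented for illustration: the "historical record" is
# synthetic, and a simple least-squares fit stands in for a neural network.
import numpy as np

rng = np.random.default_rng(0)

# Fabricated history: observed temperatures and the model's forecasts of them,
# where the model carries a warm bias that grows with temperature.
observed = rng.uniform(-10, 35, size=500)
forecast = 1.05 * observed + 2.0 + rng.normal(0, 1.5, size=500)

# Fit forecast -> observed, i.e. learn how to undo the model's systematic bias.
A = np.vstack([forecast, np.ones_like(forecast)]).T
slope, intercept = np.linalg.lstsq(A, observed, rcond=None)[0]

raw = 28.0                                     # a new raw model forecast, in C
corrected = slope * raw + intercept
print(f"raw {raw:.1f} C -> corrected {corrected:.1f} C")
# The catch: this only works if the climate that produced the historical
# record still resembles the one being forecast.
```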

“Certain models have biases depending on the weather climate regime we’re in. … If we try to use the historical data in a data mining exercise, from the 1960s and ‘70s, the climate was very different back then, so having a long data record may not help you at all,” Gold said.

“A neural network based on historical records would need hundreds if not thousands of years of [quality] data,” he added.

Unfortunately, that’s data we just don’t have. So unless NASA can figure out how to crack chaos, we’ll have to accept highly educated, carefully calculated best guesses.

Jonathan Zhou is a tech reporter who has written about drones, artificial intelligence, and space exploration.