A recent report by Wired reveals that while many weather forecasting efforts are supported by complex computer algorithms, humans still do a good deal of the legwork. We currently rely on the GOES-16 and GOES-17 satellites (the latest geostationary weather satellites), along with models from the Global Forecast System (GFS) and the European Centre for Medium-Range Weather Forecasts (ECMWF), to forecast the weather with more precision than ever before. But despite decades of work on computerized automatic prediction, the human-powered (or at least human-enhanced) forecasts aided by these technologies remain more accessible and accurate than their fully automated AI counterparts.
As in many other verticals, a fully automated meteorological future faces many obstacles. Weather forecasts produced entirely by AI would require massive amounts of computing power, such as that provided by exascale computers, a category of supercomputers capable of performing 10¹⁸ calculations per second. Three exascale computers are currently in development in the United States; the first, the Aurora supercomputer at Argonne National Laboratory, is expected to go into service this year, and meteorology is only one of the research areas that will tap Aurora's power. Accurate weather forecasting is also threatened by the inevitable full deployment of 5G, whose radio interference could degrade the ability of vital satellites to observe water vapor levels. Weather forecasting relies in part on monitoring the 23.8 GHz signals emitted by water vapor.
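To put "exascale" in perspective, here is a back-of-the-envelope sketch in Python. The 10¹⁸ operations-per-second figure comes from the definition above; the ~100 GFLOPS laptop used as a reference point is an illustrative assumption, not a measured benchmark.

```python
# Back-of-the-envelope arithmetic for what "exascale" means: 10**18
# floating-point operations per second. The ~100 GFLOPS laptop figure
# is an assumed reference point, not a measured benchmark.

EXAFLOPS = 10**18        # operations per second for an exascale machine
LAPTOP_FLOPS = 100e9     # assumed throughput of a consumer laptop

speedup = EXAFLOPS / LAPTOP_FLOPS
# One second of exascale work would take the laptop `speedup` seconds:
days = speedup / 86_400  # convert those seconds to days

print(f"Speedup over the assumed laptop: {speedup:,.0f}x")
print(f"One exascale-second of work on that laptop: about {days:.0f} days")
```

In other words, under that assumption, a single second of exascale computation represents months of consumer-hardware work, which is why global, high-resolution AI forecasting is gated on machines like Aurora.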
One solution to this problem is to deploy more 5G equipment in the lower-performing but longer-range C-band; as a 2019 PCMag story discussed, the 24 GHz signals used for mmWave 5G offer higher throughput but shorter range. 5G signals in the 3 GHz to 7 GHz range will not interfere with future weather forecasts.
In the meantime, computer-generated forecasts lack the nuance needed to effectively prepare for disasters. While algorithmic models are generally more accurate and efficient than humans at predicting mild weather, humans more consistently produce accurate predictions of severe weather (the latter arguably mattering more). An analysis comparing two decades of human forecasts against the GFS and the North American Mesoscale Forecast System (NAM) showed that humans beat these two widely used models in the severe-weather category 20 to 40 percent of the time. In other cases, humans were able to add value to automated guidance, using the models' predictions as the basis for more detailed forecasts.
None of this is to say that automated forecasts aren’t useful. Instead, today’s meteorology students learn to combat complacency by defending their forecasts with real-time and historical data. “There is an old adage that ‘all models are wrong, some are useful,’” meteorologist Shawn Milrad, an instructor at Embry-Riddle Aeronautical University, told Wired. “Even if it’s a good forecast, it’s going to be slightly wrong. That’s how you can add value to that model.”