If there was a bright spot amid Hurricane Sandy’s massive devastation, including 148 deaths, at least $68 billion in damages, and the destruction of thousands of homes, it was the accuracy of the forecasts predicting where the storm would go.
Six days before Sandy came ashore one year ago this week—while the storm was still building in the Bahamas—forecasters predicted it would make landfall somewhere between New Jersey and New York City on October 29.
They were right.
Sandy, which had by then weakened from a Category 2 hurricane to an unusually potent Category 1, came ashore just south of Atlantic City, a few miles from where forecasters said it would, on the third to last day of October.
“They were really, really excellent forecasts,” said University of Miami meteorologist Brian McNoldy. “We knew a week ahead of time that something awful was going to happen around New York and New Jersey.”
That knowledge gave emergency management officials in the Northeast plenty of time to prepare, issuing evacuation orders for hundreds of thousands of residents in New Jersey and New York.
Even those who ignored the order used the forecasts to make preparations, boarding up buildings, stocking up on food and water, and buying gasoline-powered generators.
But there’s an important qualification about the excellent forecasts that anticipated Sandy’s course: The best came from a European hurricane prediction program.
The six-day-out landfall forecast came courtesy of a computer model run by the European Centre for Medium-Range Weather Forecasts (ECMWF), which is based in England.
Most of the other models in use at the National Hurricane Center in Miami, including the U.S. Global Forecast System (GFS), didn’t start forecasting a U.S. landfall until four days before the storm came ashore. At the six-day-out mark, that model and others at the National Hurricane Center had Sandy veering away from the Atlantic Coast, staying far out at sea.
“The European model just outperformed the American model on Sandy,” says Kerry Emanuel, a meteorologist at Massachusetts Institute of Technology.
Now, the developers of the U.S. forecast models are working to close the gap between the Global Forecast System and the European model.
There’s more at stake than simple pride. “It’s to our advantage to have two excellent models instead of just one,” says McNoldy. “The more skilled models you have running, the more you know about the possibilities for a hurricane’s track.”
And, of course, the more lives you can save.
Data, Data, Data
The computer programs that meteorologists rely on to predict the courses of storms draw on lots of data.
U.S. forecasting computers and their European counterparts rely on radar that provides information on cloud formations and the rotation of a storm, on orbiting satellites that show precisely where a storm is, and on hurricane-hunter aircraft that fly into storms to collect wind speeds, barometric pressure readings, and water temperatures.
Hundreds of buoys deployed along the Atlantic and Gulf coasts, meanwhile, relay information about the heights of waves being produced by the storm.
All this data is fed into computers at the National Centers for Environmental Prediction in Camp Springs, Maryland, which use it to run the forecast models. Those computers, linked to others at the National Hurricane Center, translate the models’ output into official forecasts.
The forecasters use data from all computer models—including the ECMWF—to make their forecasts four times daily.
Forecasts produced by various models often diverge, leaving plenty of room for interpretation by human forecasters.
“Usually, it’s kind of a subjective process as far as making a human forecast out of all the different computer runs,” says McNoldy. “The art is in the interpretation of all of the computer models’ outputs.”
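To make that blending concrete (a toy sketch only, not the National Hurricane Center’s actual procedure; the model positions below are hypothetical), here is a minimal Python example that averages several models’ predicted storm positions at one lead time into a simple consensus:

# Toy multi-model consensus: average each model's predicted storm position
# at a single lead time. Hypothetical numbers; not an official NHC method.

from statistics import mean

# Hypothetical 120-hour (5-day) position forecasts: (latitude, longitude)
forecasts_120h = {
    "ECMWF": (39.4, -74.4),   # landfall near southern New Jersey
    "GFS":   (38.0, -70.5),   # farther offshore
    "CMC":   (39.0, -72.8),
}

def consensus(positions):
    """Return the simple mean latitude/longitude of several model positions."""
    lats = [lat for lat, lon in positions]
    lons = [lon for lat, lon in positions]
    return mean(lats), mean(lons)

lat, lon = consensus(forecasts_120h.values())
print(f"120-hour consensus position: {lat:.1f}N, {abs(lon):.1f}W")

# The spread between the individual model points is as informative as the mean:
# a tight cluster suggests more confidence in the track than a wide one.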
There are two big reasons why the European model is usually more accurate than U.S. models. First, the European Centre for Medium-Range Weather Forecasts model is a more sophisticated program that incorporates more data.
Second, the European computers that run the program are more powerful than their U.S. counterparts and are able to perform more calculations more quickly.
“They don’t have any top-secret things,” McNoldy said. “Because of their (computer) hardware, they can implement more sophisticated code.”
A consortium of European nations began developing the ECMWF in 1976, and the model has been fueled by a series of progressively more powerful supercomputers in England. It got a boost when the European Union was formed in 1993 and member states started contributing taxes for more improvements.
The ECMWF and the GFS are the two primary models that most forecasters look at, said Michael Laca, producer of TropMet, a website that focuses on hurricanes and other severe weather events.
Laca said that forecasts and other data from the ECMWF are provided to forecasters in the U.S. and elsewhere who pay for the information.
“The GFS, on the other hand, is freely available to everyone, and is funded—or defunded—solely through (U.S.) government appropriations,” Laca said.
And since money for U.S. research and development depends on appropriations battles in Congress, U.S. forecasters are “in a hard position to keep pace with the ECMWF from a research and hardware perspective,” Laca said.
Hurricane Sandy wasn’t the first or last hurricane for which the ECMWF was the most accurate forecast model. It has consistently outperformed the GFS and four other U.S. and Canadian forecasting models.
Greg Nordstrom, who teaches meteorology at Mississippi State University in Starkville, said the European model provided much more accurate forecasts for Hurricane Isaac in August 2012 and for Tropical Storm Karen earlier this year.
“This doesn’t mean the GFS doesn’t beat the Euro from time to time,” he says. “But, overall, the Euro is king of the global models.”
McNoldy says the European Union’s generous funding of research and development of their model has put it ahead of the American version. “Basically, it’s a matter of resources,” he says. “If we want to catch up, we will. It’s important that we have the best forecasting in the world.”
European developers who work on forecasting software have also benefited from better cooperation between government and academic researchers, says MIT’s Emanuel.
“If you talk to (the National Oceanic and Atmospheric Administration), they would deny that, but there’s no real spirit of cooperation (in the U.S.),” he says. “It’s a cultural problem that will not get fixed by throwing more money at the problem.”
Catching Up Amid Chaos
American computer models’ accuracy in forecasting hurricane tracks has improved dramatically since the 1970s. The average margin of error for a three-day forecast of a hurricane’s track has dropped from 500 miles in 1972 to 115 miles in 2012.
And NOAA is in the middle of a ten-year program intended to dramatically improve forecasts of hurricanes’ tracks and of their likelihood of intensifying, or strengthening, before landfall.
One of the project’s centerpieces is the Hurricane Weather Research and Forecasting model, or HWRF. In development since 2007, it’s similar to the ECMWF in that it will incorporate more data into its forecasting, including data from the GFS model.
Predicting the likelihood that a hurricane will intensify is difficult. For a hurricane to gain strength, it needs humid air, seawater heated to at least 80°F, and little wind shear in the surrounding atmosphere to disrupt its circulation.
In 2005, Hurricane Wilma encountered those perfect conditions and in just 30 hours strengthened from a tropical storm with peak winds of about 70 miles per hour to the most powerful Atlantic hurricane on record, with winds exceeding 175 miles per hour.
But hurricanes are as delicate as they are powerful. Seemingly small environmental changes, like passing over water that’s slightly cooler than 80°F or ingesting drier air, can rapidly weaken a storm. And the environment is constantly changing.
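As a rough illustration of those ingredients (a toy checklist only; the thresholds are illustrative and real intensification forecasting involves far more variables), the conditions described above could be written out like this:

# Toy checklist of the intensification conditions described above.
# Thresholds and variable names are illustrative, not an operational model.

def favors_intensification(sea_surface_temp_f, relative_humidity_pct, wind_shear_kt):
    """Return True if the basic ingredients for strengthening are present."""
    warm_water = sea_surface_temp_f >= 80      # warm ocean water
    moist_air = relative_humidity_pct >= 70    # humid surrounding air (illustrative threshold)
    low_shear = wind_shear_kt <= 10            # little wind shear to disrupt circulation
    return warm_water and moist_air and low_shear

# Conditions roughly like those Wilma found in 2005: very warm water, moist air, weak shear.
print(favors_intensification(sea_surface_temp_f=86, relative_humidity_pct=80, wind_shear_kt=5))   # True
# Slightly cooler water is enough to tip the checklist the other way.
print(favors_intensification(sea_surface_temp_f=78, relative_humidity_pct=80, wind_shear_kt=5))   # False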
“Over the next five years, there may be some big breakthrough to help improve intensification forecasting,” McNoldy said. “But we’re still working against the basic chaos in the atmosphere.”
He thinks it will take at least five to ten years for the U.S. to catch up with the European model.
MIT’s Emanuel says three factors will determine whether more accurate intensification forecasting is in the offing: the development of more powerful computers that can accommodate more data, a better understanding of hurricane intensity, and whether researchers reach a point at which no further improvements to intensification forecasting are possible.
Emanuel calls that point the “prediction horizon” and says it may have already been reached: “Our level of ignorance is still too high to know.”
Predictions and Responses
Assuming we’ve not yet hit that point, better predictions could dramatically improve our ability to weather hurricanes.
The more advance warning there is, the more time residents who heed evacuation orders have to get out. Earlier forecasting would also give emergency management officials more time to provide transportation for poor, elderly, and disabled people unable to flee on their own.
More accurate forecasts would also reduce evacuation expenses.
Estimates of the cost of evacuating coastal areas ahead of a hurricane vary considerably, but it has been calculated at roughly $1 million for every mile of coastline evacuated. That figure includes lost commerce, the wages and salaries forgone by those who leave, and the direct costs of evacuating, such as travel and shelter.
Better forecasts could reduce the size of evacuation areas and save money.
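As a back-of-the-envelope illustration of that arithmetic (the mileage figures below are hypothetical), shrinking the warned stretch of coastline translates directly into savings at the cited rate of $1 million per mile:

# Back-of-the-envelope evacuation savings using the article's rough figure of
# $1 million per mile of coastline evacuated. Mileage values are hypothetical.

COST_PER_MILE = 1_000_000  # dollars, from the estimate cited above

def evacuation_cost(miles_of_coastline):
    return miles_of_coastline * COST_PER_MILE

broad_zone = evacuation_cost(300)    # a wide zone ordered under an uncertain track
narrow_zone = evacuation_cost(200)   # a tighter zone under a more precise forecast

print(f"Broad evacuation:  ${broad_zone:,}")
print(f"Narrow evacuation: ${narrow_zone:,}")
print(f"Estimated savings: ${broad_zone - narrow_zone:,}")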
They would also allow officials to get a jump on hurricane response. The Federal Emergency Management Agency (FEMA) tries to stockpile relief supplies far enough away from an expected hurricane landfall to avoid damage from the storm, but near enough that the supplies can quickly be moved to affected areas afterwards.
More reliable landfall forecasts would help FEMA position recovery supplies closer to where they’ll be needed.
Whatever improvements are made, McNoldy warns that forecasting will never be foolproof. However dependable they become, he said, “models will always be imperfect.”