Tag archive: Uncertainty

The Coronavirus Is Plotting a Comeback. Here’s Our Chance to Stop It for Good. (New York Times)

nytimes.com

Apoorva Mandavilli


Lincoln Park in Chicago. Scientists are hopeful, as vaccinations continue and despite the emergence of variants, that we’re past the worst of the pandemic. Credit: Lyndon French for The New York Times
Many scientists are expecting another rise in infections. But this time the surge will be blunted by vaccines and, hopefully, widespread caution. By summer, Americans may be looking at a return to normal life.

Published Feb. 25, 2021; updated Feb. 26, 2021, 12:07 a.m. ET

Across the United States, and the world, the coronavirus seems to be loosening its stranglehold. The deadly curve of cases, hospitalizations and deaths has yo-yoed before, but never has it plunged so steeply and so fast.

Is this it, then? Is this the beginning of the end? After a year of being pummeled by grim statistics and scolded for wanting human contact, many Americans feel a long-promised deliverance is at hand.

Americans will win against the virus and regain many aspects of their pre-pandemic lives, most scientists now believe. Of the 21 interviewed for this article, all were optimistic that the worst of the pandemic is past. This summer, they said, life may begin to seem normal again.

But — of course, there’s always a but — researchers are also worried that Americans, so close to the finish line, may once again underestimate the virus.

So far, the two vaccines authorized in the United States are spectacularly effective, and after a slow start, the vaccination rollout is picking up momentum. A third vaccine is likely to be authorized shortly, adding to the nation’s supply.

But it will be many weeks before vaccinations make a dent in the pandemic. And now the virus is shape-shifting faster than expected, evolving into variants that may partly sidestep the immune system.

The latest variant was discovered in New York City only this week, and another worrisome version is spreading at a rapid pace through California. Scientists say a contagious variant first discovered in Britain will become the dominant form of the virus in the United States by the end of March.

The road back to normalcy is potholed with unknowns: how well vaccines prevent further spread of the virus; whether emerging variants remain susceptible enough to the vaccines; and how quickly the world is immunized, so as to halt further evolution of the virus.

But the greatest ambiguity is human behavior. Can Americans desperate for normalcy keep wearing masks and distancing themselves from family and friends? How much longer can communities keep businesses, offices and schools closed?

Covid-19 deaths will most likely never rise quite as precipitously as in the past, and the worst may be behind us. But if Americans let down their guard too soon — many states are already lifting restrictions — and if the variants spread in the United States as they have elsewhere, another spike in cases may well arrive in the coming weeks.

Scientists call it the fourth wave. The new variants mean “we’re essentially facing a pandemic within a pandemic,” said Adam Kucharski, an epidemiologist at the London School of Hygiene and Tropical Medicine.

A patient received comfort in the I.C.U. of Marian Regional Medical Center in Santa Maria, Calif., last month. 
Credit: Daniel Dreifuss for The New York Times

The United States has now recorded 500,000 deaths amid the pandemic, a terrible milestone. As of Wednesday morning, at least 28.3 million people have been infected.

But the rate of new infections has tumbled by 35 percent over the past two weeks, according to a database maintained by The New York Times. Hospitalizations are down 31 percent, and deaths have fallen by 16 percent.

Yet the numbers are still at the horrific highs of November, scientists noted. At least 3,210 people died of Covid-19 on Wednesday alone. And there is no guarantee that these rates will continue to decrease.

“Very, very high case numbers are not a good thing, even if the trend is downward,” said Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health in Boston. “Taking the first hint of a downward trend as a reason to reopen is how you get to even higher numbers.”

In late November, for example, Gov. Gina Raimondo of Rhode Island limited social gatherings and some commercial activities in the state. Eight days later, cases began to decline. The trend reversed eight days after the state’s pause lifted on Dec. 20.

The virus’s latest retreat in Rhode Island and most other states, experts said, results from a combination of factors: growing numbers of people with immunity to the virus, either from having been infected or from vaccination; changes in behavior in response to the surges of a few weeks ago; and a dash of seasonality — the effect of temperature and humidity on the survival of the virus.

Parts of the country that experienced huge surges in infection, like Montana and Iowa, may be closer to herd immunity than other regions. But patchwork immunity alone cannot explain the declines throughout much of the world.

The vaccines were first rolled out to residents of nursing homes and to the elderly, who are at highest risk of severe illness and death. That may explain some of the current decline in hospitalizations and deaths.

A volunteer in the Johnson & Johnson vaccine trial received a shot in the Desmond Tutu H.I.V. Foundation Youth Center in Masiphumelele, South Africa, in December.
Credit: Joao Silva/The New York Times

But young people drive the spread of the virus, and most of them have not yet been inoculated. And the bulk of the world’s vaccine supply has been bought up by wealthy nations, which have amassed one billion more doses than needed to immunize their populations.

Vaccination cannot explain why cases are dropping even in countries where not a single soul has been immunized, like Honduras, Kazakhstan or Libya. The biggest contributor to the sharp decline in infections is something more mundane, scientists say: behavioral change.

Leaders in the United States and elsewhere stepped up community restrictions after the holiday peaks. But individual choices have also been important, said Lindsay Wiley, an expert in public health law and ethics at American University in Washington.

“People voluntarily change their behavior as they see their local hospital get hit hard, as they hear about outbreaks in their area,” she said. “If that’s the reason that things are improving, then that’s something that can reverse pretty quickly, too.”

The downward curve of infections with the original coronavirus disguises an exponential rise in infections with B.1.1.7, the variant first identified in Britain, according to many researchers.

“We really are seeing two epidemic curves,” said Ashleigh Tuite, an infectious disease modeler at the University of Toronto.

The B.1.1.7 variant is thought to be more contagious and more deadly, and it is expected to become the predominant form of the virus in the United States by late March. The number of cases with the variant in the United States has risen from 76 in 12 states as of Jan. 13 to more than 1,800 in 45 states now. Actual infections may be much higher because of inadequate surveillance efforts in the United States.

Buoyed by the shrinking rates over all, however, governors are lifting restrictions across the United States and are under enormous pressure to reopen completely. Should that occur, B.1.1.7 and the other variants are likely to explode.

“Everybody is tired, and everybody wants things to open up again,” Dr. Tuite said. “Bending to political pressure right now, when things are really headed in the right direction, is going to end up costing us in the long term.”

A fourth wave doesn’t have to be inevitable, scientists say, but the new variants will pose a significant challenge to averting that wave.
Credit: Lyndon French for The New York Times

Looking ahead to late March or April, the majority of scientists interviewed by The Times predicted a fourth wave of infections. But they stressed that it is not an inevitable surge, if government officials and individuals maintain precautions for a few more weeks.

A minority of experts were more sanguine, saying they expected powerful vaccines and an expanding rollout to stop the virus. And a few took the middle road.

“We’re at that crossroads, where it could go well or it could go badly,” said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

The vaccines have proved to be more effective than anyone could have hoped, so far preventing serious illness and death in nearly all recipients. At present, about 1.4 million Americans are vaccinated each day. More than 45 million Americans have received at least one dose.

A team of researchers at Fred Hutchinson Cancer Research Center in Seattle tried to calculate the number of vaccinations required per day to avoid a fourth wave. In a model completed before the variants surfaced, the scientists estimated that vaccinating just one million Americans a day would limit the magnitude of the fourth wave.

“But the new variants completely changed that,” said Dr. Joshua T. Schiffer, an infectious disease specialist who led the study. “It’s just very challenging scientifically — the ground is shifting very, very quickly.”
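To see the shape of that calculation, consider a deliberately simple compartmental model. The sketch below is not the Fred Hutchinson model; it is a toy SIR simulation with a constant daily vaccination rate, and every number in it (population, R0, infectious period, starting immunity) is an assumed, illustrative value.

```python
# Toy SIR model with a constant daily vaccination rate.
# Illustrative sketch only -- not the Fred Hutchinson model; all parameters
# below are assumptions chosen for demonstration.

def peak_infections(vaccinations_per_day, r0=1.5, days=180,
                    population=330e6, infectious_period=7.0,
                    initial_infected=2e6, initial_immune=90e6):
    beta = r0 / infectious_period     # transmission events per infected person per day
    gamma = 1.0 / infectious_period   # recovery rate per day
    s = population - initial_infected - initial_immune   # susceptible
    i = initial_infected                                  # currently infected
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        vaccinated = min(vaccinations_per_day, s)  # assume doses reach susceptibles
        s = max(s - new_infections - vaccinated, 0.0)
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

# Compare rollout speeds, then see how a more transmissible variant (higher r0)
# shifts the same calculation.
for rate in (1e6, 2e6, 3e6):
    print(f"{rate:>9,.0f} doses/day -> peak ~{peak_infections(rate):>12,.0f} infected")
print(f"variant, 1M doses/day   -> peak ~{peak_infections(1e6, r0=2.0):>12,.0f} infected")
```

The point of the exercise matches Dr. Schiffer's: the rollout speed needed to avert a wave is not a fixed number, because it depends on how transmissible the circulating virus is.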

Natalie Dean, a biostatistician at the University of Florida, described herself as “a little more optimistic” than many other researchers. “We would be silly to undersell the vaccines,” she said, noting that they are effective against the fast-spreading B.1.1.7 variant.

But Dr. Dean worried about the forms of the virus detected in South Africa and Brazil that seem less vulnerable to the vaccines made by Pfizer and Moderna. (On Wednesday, Johnson & Johnson reported that its vaccine was relatively effective against the variant found in South Africa.)

Coronavirus test samples in a lab for genomic sequencing at Duke University in Durham, N.C., earlier this month.
Credit: Pete Kiehart for The New York Times

About 50 infections with those two variants have been identified in the United States, but that could change. Because of the variants, scientists do not know how many people who were infected and had recovered are now vulnerable to reinfection.

South Africa and Brazil have reported reinfections with the new variants among people who had recovered from infections with the original version of the virus.

“That makes it a lot harder to say, ‘If we were to get to this level of vaccinations, we’d probably be OK,’” said Sarah Cobey, an evolutionary biologist at the University of Chicago.

Yet the biggest unknown is human behavior, experts said. The sharp drop in cases now may lead to complacency about masks and distancing, and to a wholesale lifting of restrictions on indoor dining, sporting events and more. Or … not.

“The single biggest lesson I’ve learned during the pandemic is that epidemiological modeling struggles with prediction, because so much of it depends on human behavioral factors,” said Carl Bergstrom, a biologist at the University of Washington in Seattle.

Taking into account the counterbalancing rises in both vaccinations and variants, along with the high likelihood that people will stop taking precautions, a fourth wave is highly likely this spring, the majority of experts told The Times.

Kristian Andersen, a virologist at the Scripps Research Institute in San Diego, said he was confident that the number of cases will continue to decline, then plateau in about a month. After mid-March, the curve in new cases will swing upward again.

In early to mid-April, “we’re going to start seeing hospitalizations go up,” he said. “It’s just a question of how much.”

Hospitalizations and deaths will fall to levels low enough to reopen the country — though mask-wearing may remain necessary as a significant portion of people, including children, won’t be immunized.
Credit: Kendrick Brinson for The New York Times

Now the good news.

Despite the uncertainties, the experts predict that the last surge will subside in the United States sometime in the early summer. If the Biden administration can keep its promise to immunize every American adult by the end of the summer, the variants should be no match for the vaccines.

Combine vaccination with natural immunity and the human tendency to head outdoors as weather warms, and “it may not be exactly herd immunity, but maybe it’s sufficient to prevent any large outbreaks,” said Youyang Gu, an independent data scientist, who created some of the most prescient models of the pandemic.

Infections will continue to drop. More important, hospitalizations and deaths will fall to negligible levels — enough, hopefully, to reopen the country.

“Sometimes people lose vision of the fact that vaccines prevent hospitalization and death, which is really actually what most people care about,” said Stefan Baral, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health.

Even as the virus begins its swoon, people may still need to wear masks in public places and maintain social distance, because a significant share of the population — including children — will not be immunized.

“Assuming that we keep a close eye on things in the summer and don’t go crazy, I think that we could look forward to a summer that is looking more normal, but hopefully in a way that is more carefully monitored than last summer,” said Emma Hodcroft, a molecular epidemiologist at the University of Bern in Switzerland.

Imagine: Groups of vaccinated people will be able to get together for barbecues and play dates, without fear of infecting one another. Beaches, parks and playgrounds will be full of mask-free people. Indoor dining will return, along with movie theaters, bowling alleys and shopping malls — although they may still require masks.

The virus will still be circulating, but the extent will depend in part on how well vaccines prevent not just illness and death, but also transmission. The data on whether vaccines stop the spread of the disease are encouraging, but immunization is unlikely to block transmission entirely.

Self-swab testing for Covid at Duke University in February.
Credit: Pete Kiehart for The New York Times

“It’s not zero and it’s not 100 — exactly where that number is will be important,” said Shweta Bansal, an infectious disease modeler at Georgetown University. “It needs to be pretty darn high for us to be able to get away with vaccinating anything below 100 percent of the population, so that’s definitely something we’re watching.”
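The threshold Dr. Bansal is describing can be written down directly. In the simplest epidemic models, transmission dies out once the immune fraction exceeds 1 − 1/R0; if vaccination blocks only a fraction ε of transmission, the required coverage v rises accordingly. The numbers below are illustrative assumptions, not estimates from the article:

$$
v \;\ge\; \frac{1 - 1/R_0}{\varepsilon},
\qquad R_0 = 3:\;\; \varepsilon = 0.9 \Rightarrow v \approx 74\%,
\quad \varepsilon = 0.6 \Rightarrow v > 100\%.
$$

In other words, if the vaccines' effect on transmission turns out to be modest, no achievable level of coverage closes the gap on its own — which is why that number is being watched so closely.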

Over the long term — say, a year from now, when all the adults and children in the United States who want a vaccine have received them — will this virus finally be behind us?

Every expert interviewed by The Times said no. Even after the vast majority of the American population has been immunized, the virus will continue to pop up in clusters, taking advantage of pockets of vulnerability. Years from now, the coronavirus may be an annoyance, circulating at low levels, causing modest colds.

Many scientists said their greatest worry post-pandemic was that new variants may turn out to be significantly less susceptible to the vaccines. Billions of people worldwide will remain unprotected, and each infection gives the virus new opportunities to mutate.

“We won’t have useless vaccines. We might have slightly less good vaccines than we have at the moment,” said Andrew Read, an evolutionary microbiologist at Penn State University. “That’s not the end of the world, because we have really good vaccines right now.”

For now, every one of us can help by continuing to be careful for just a few more months, until the curve permanently flattens.

“Just hang in there a little bit longer,” Dr. Tuite said. “There’s a lot of optimism and hope, but I think we need to be prepared for the fact that the next several months are likely to continue to be difficult.”


Texas Power Grid Run by ERCOT Set Up the State for Disaster (New York Times)

nytimes.com

Clifford Krauss, Manny Fernandez, Ivan Penn, Rick Rojas – Feb 21, 2021


Texas has refused to join interstate electrical grids and railed against energy regulation. Now it’s having to answer to millions of residents who were left without power in last week’s snowstorm.

The cost of a free market electrical grid became painfully clear last week, as a snowstorm descended on Texas and millions of people ran out of power and water.
Credit: Nitashia Johnson for The New York Times

HOUSTON — Across the plains of West Texas, the pump jacks that resemble giant bobbing hammers define not just the landscape but the state itself: Texas has been built on the oil-and-gas business for the last 120 years, ever since the discovery of oil on Spindletop Hill near Beaumont in 1901.

Texas, the nation’s leading energy-producing state, seemed like the last place on Earth that could run out of energy.

Then last week, it did.

The crisis could be traced to that other defining Texas trait: independence, both from big government and from the rest of the country. The dominance of the energy industry and the “Republic of Texas” ethos became a devastating liability when energy stopped flowing to millions of Texans who shivered and struggled through a snowstorm that paralyzed much of the state.

Part of the responsibility for the near-collapse of the state’s electrical grid can be traced to the decision in 1999 to embark on the nation’s most extensive experiment in electrical deregulation, handing control of the state’s entire electricity delivery system to a market-based patchwork of private generators, transmission companies and energy retailers.

The energy industry wanted it. The people wanted it. Both parties supported it. “Competition in the electric industry will benefit Texans by reducing monthly rates and offering consumers more choices about the power they use,” George W. Bush, then the governor, said as he signed the top-to-bottom deregulation legislation.

Mr. Bush’s prediction of lower-cost power generally came true, and the dream of a free-market electrical grid worked reasonably well most of the time, in large part because Texas had so much cheap natural gas as well as abundant wind to power renewable energy. But the newly deregulated system came with few safeguards and even fewer enforced rules.

With so many cost-conscious utilities competing for budget-shopping consumers, there was little financial incentive to invest in weather protection and maintenance. Wind turbines are not equipped with the de-icing equipment routinely installed in the colder climes of the Dakotas, and power lines have little insulation. The possibility of more frequent cold-weather events was never built into infrastructure plans in a state where climate change remains an exotic, disputed concept.

“Deregulation was something akin to abolishing the speed limit on an interstate highway,” said Ed Hirs, an energy fellow at the University of Houston. “That opens up shortcuts that cause disasters.”

The state’s entire energy infrastructure was walloped with glacial temperatures that even under the strongest of regulations might have frozen gas wells and downed power lines.

But what went wrong was far broader: Deregulation meant that critical rules of the road for power were set not by law, but rather by a dizzying array of energy competitors.

Utility regulation is intended to compensate for the natural monopolies that occur when a single electrical provider serves an area; it keeps prices down while protecting public safety and guaranteeing fair treatment to customers. Yet many states have flirted with deregulation as a way of giving consumers more choices and encouraging new providers, especially alternative energy producers.

California, one of the early deregulators in the 1990s, scaled back its initial foray after market manipulation led to skyrocketing prices and rolling blackouts.

States like Maryland allow customers to pick from a menu of producers. In some states, competing private companies offer varied packages like discounts for cheaper power at night. But no state has gone as far as Texas, which has not only turned over the keys to the free market but has also isolated itself from the national grid, limiting the state’s ability to import power when its own generators are foundering.

Consumers themselves got a direct shock last week when customers who had chosen variable-rate electricity contracts found themselves with power bills of $5,000 or more. While they were expecting extra-low monthly rates, many may now face huge bills as a result of the upswing in wholesale electricity prices during the cold wave. Gov. Greg Abbott on Sunday said the state’s Public Utility Commission has issued a moratorium on customer disconnections for non-payment and will temporarily restrict providers from issuing invoices.
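The size of those bills follows from simple arithmetic. The figures here are illustrative assumptions (a wholesale price near ERCOT's $9,000-per-megawatt-hour cap, and heavy electric-heat usage during the freeze), not billing data reported in the article:

$$
\$9{,}000/\mathrm{MWh} = \$9/\mathrm{kWh},
\qquad
\$9/\mathrm{kWh} \times 100\ \mathrm{kWh/day} \times 5\ \mathrm{days} \approx \$4{,}500.
$$

At a more typical fixed retail rate of roughly 12 cents per kilowatt-hour, the same usage would cost about $60, which is why customers on pass-through variable-rate plans were exposed in a way that fixed-rate customers were not.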

A family in Austin, Texas, kept warm by a fire outside their apartment on Wednesday. They lost power early Monday morning.
Credit: Tamir Kalifa for The New York Times

There is regulation in the Texas system, but it is hardly robust. One nonprofit agency, the Electric Reliability Council of Texas, or ERCOT, was formed to manage the wholesale market. It is supervised by the Public Utility Commission, which also oversees the transmission companies that offer customers an exhaustive array of contract choices laced with more fine print than a credit card agreement.

But both agencies are nearly unaccountable and toothless compared to regulators in other regions, where many utilities have stronger consumer protections and submit an annual planning report to ensure adequate electricity supply. Texas energy companies are given wide latitude in their planning for catastrophic events.

One example of how Texas has gone it alone is its refusal to enforce a “reserve margin” of extra power available above expected demand, unlike all other power systems around North America. With no mandate, there is little incentive to invest in precautions for events, such as a Southern snowstorm, that are rare. Any company that took such precautions would put itself at a competitive disadvantage.
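As a rough sketch of the concept (definitions vary by grid operator, and the numbers here are assumed for illustration, not ERCOT's), a reserve margin compares the capacity a grid can call on with the peak demand it expects:

$$
\text{reserve margin} \;=\; \frac{\text{available capacity} - \text{forecast peak demand}}{\text{forecast peak demand}},
\qquad
\frac{80\ \mathrm{GW} - 70\ \mathrm{GW}}{70\ \mathrm{GW}} \approx 14\%.
$$

A mandated margin makes someone pay to keep that spare capacity available; without one, it is a cost any individual company can decline to bear.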

A surplus supply of natural gas, the dominant power fuel in Texas, near power plants might have helped avoid the cascade of failures in which power went off, forcing natural gas production and transmission offline, which in turn led to further power shortages.

In the aftermath of the dayslong outages, ERCOT has been criticized by both Democratic and Republican residents, lawmakers and business executives, a rare display of unity in a fiercely partisan and Republican-dominated state. Mr. Abbott said he supported calls for the agency’s leadership to resign and made ERCOT reform a priority for the Legislature. The reckoning has been swift — this week, lawmakers will hold hearings in Austin to investigate the agency’s handling of the storm and the rolling outages.

For ERCOT operators, the storm’s arrival was swift and fierce, but they had anticipated it and knew it would strain their system. They asked power customers across the state to conserve, warning that outages were likely.

But late on Sunday, Feb. 14, it rapidly became clear that the storm was far worse than they had expected: Sleet and snow fell, and temperatures plunged. In the council’s command center outside Austin, a room dominated by screens flashing with maps, graphics and data tracking the flow of electricity to 26 million people in Texas, workers quickly found themselves fending off a crisis. As weather worsened into Monday morning, residents cranked up their heaters and demand surged.

Power plants began falling offline in rapid succession as they were overcome by the frigid weather or ran out of fuel to burn. Within hours, 40 percent of the power supply had been lost.

The entire grid — carrying 90 percent of the electric load in Texas — was barreling toward a collapse.

Much of Austin lost power last week due to rolling blackouts.
Credit: Tamir Kalifa for The New York Times

In the electricity business, supply and demand need to be in balance. Imbalances lead to catastrophic blackouts. Recovering from a total blackout would be an agonizing and tedious process, known as a “black start,” that could take weeks, or possibly months.

And in the early-morning hours last Monday, the Texas grid was “seconds and minutes” away from such a collapse, said Bill Magness, the president and chief executive of the Electric Reliability Council.

“If we had allowed a catastrophic blackout to happen, we wouldn’t be talking today about hopefully getting most customers their power back,” Mr. Magness said. “We’d be talking about how many months it might be before you get your power back.”
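To make the "seconds and minutes" framing concrete, here is a deliberately crude sketch of the balancing problem. It is not how ERCOT's control room models the grid: the frequency relationship is a toy linearization, and every constant except the nominal 60 Hz is an assumed, illustrative value.

```python
# Toy illustration of grid balancing: when generation falls short of load,
# system frequency sags below the nominal 60 Hz. If it sags too far, generators
# trip offline to protect themselves and the failure cascades, so operators
# shed load (rolling outages) first. Constants are illustrative assumptions.

NOMINAL_HZ = 60.0
TRIP_HZ = 59.4       # assumed protection threshold, not ERCOT's actual setting
STIFFNESS = 8.0      # how strongly a relative shortfall depresses frequency (made up)

def frequency(load_gw, generation_gw):
    """Very rough, linearized frequency for a given supply shortfall."""
    shortfall = (load_gw - generation_gw) / load_gw
    return NOMINAL_HZ - STIFFNESS * shortfall

load_gw, generation_gw = 70.0, 70.0          # start balanced
for lost_gw in (0, 5, 10, 15, 20, 28):       # plants progressively tripping offline
    hz = frequency(load_gw, generation_gw - lost_gw)
    action = "OK"
    if hz < TRIP_HZ:
        # Shed just enough load to lift frequency back above the trip threshold.
        max_load = (generation_gw - lost_gw) / (1 - (NOMINAL_HZ - TRIP_HZ) / STIFFNESS)
        action = f"shed ~{load_gw - max_load:.1f} GW of load (rolling outages)"
    print(f"lost {lost_gw:>2} GW -> {hz:5.2f} Hz  {action}")
```

Losing roughly 40 percent of supply, as Texas did, would require shedding a comparable share of load almost immediately; the alternative in this toy picture is the total blackout and weeks-long "black start" Mr. Magness described.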

The outages and the cold weather touched off an avalanche of failures, but there had been warnings long before last week’s storm.

After a heavy snowstorm in February 2011 caused statewide rolling blackouts and left millions of Texans in the dark, federal authorities warned the state that its power infrastructure had inadequate “winterization” protection. But 10 years later, pipelines remained inadequately insulated and heaters that might have kept instruments from freezing were never installed.

During heat waves, when demand has soared during several recent summers, the system in Texas has also strained to keep up, raising questions about lack of reserve capacity on the unregulated grid.

And aside from the weather, there have been periodic signs that the system can run into trouble delivering sufficient energy, in some cases because of equipment failures, in others because of what critics called an attempt to drive up prices, according to Mr. Hirs of the University of Houston, as well as several energy consultants.

Another potential safeguard might have been far stronger connections to the two interstate power-sharing networks, East and West, that allow states to link their electrical grids and obtain power from thousands of miles away when needed to hold down costs and offset their own shortfalls.

But Texas, reluctant to submit to the federal regulation that is part of the regional power grids, made decisions as far back as the early 20th century to become the only state in the continental United States to operate its own grid — a plan that leaves it able to borrow only from a few close neighbors.

The border city of El Paso survived the freeze much better than Dallas or Houston because it was not part of the Texas grid but connected to the much larger grid covering many Western states.

But the problems that began with last Monday’s storm went beyond an isolated electrical grid. The entire ecosystem of how Texas generates, transmits and uses power stalled, as millions of Texans shivered in darkened, unheated homes.

A surplus supply of natural gas, the dominant power fuel in Texas, near power plants might have helped avoid the cascade of failures.
Credit: Eddie Seal/Bloomberg

Texans love to brag about natural gas, which state officials often call the cleanest-burning fossil fuel. No state produces more, and gas-fired power plants produce nearly half the state’s electricity.

“We are struggling to come to grips with the reality that gas came up short and let us down when we needed it most,” said Michael E. Webber, a professor of mechanical engineering at the University of Texas at Austin.

The cold was so severe that the enormous oil and natural gas fields of West Texas froze up, or could not get sufficient power to operate. Though a few plants had stored gas reserves, there was insufficient electricity to pump it.

The leaders of ERCOT defended the organization, its lack of mandated reserves and the state’s isolation from larger regional grids, and said the blame for the power crisis lies with the weather, not the overall deregulated system in Texas.

“The historic, just about unprecedented, storm was the heart of the problem,” Mr. Magness, the council’s chief executive, said, adding: “We’ve found that this market structure works. It demands reliability. I don’t think there’s a silver-bullet market structure that could have managed the extreme lows and generation outages that we were facing Sunday night.”

In Texas, energy regulation is as much a matter of philosophy as policy. Its independent power grid is a point of pride that has been an applause line in Texas political speeches for decades.

Deregulation is a hot topic among Texas energy experts, and there has been no shortage of predictions that the grid could fail under stress. But there has not been widespread public dissatisfaction with the system, although many are now wondering if they are being well served.

“I believe there is great value in Texas being on its own grid and I believe we can do so safely and securely and confidently going forward,” said State Representative Jeff Leach, a Republican from Plano who has called for an investigation into what went wrong. “But it’s going to take new investment and some new strategic decisions to make sure we’re protected from this ever happening again.”

Steven D. Wolens, a former Democratic lawmaker from Dallas and a principal architect of the 1999 deregulation legislation, said deregulation was meant to spur more generation, including from renewable energy sources, and to encourage the mothballing of older plants that were spewing pollution. “We were successful,” said Mr. Wolens, who left the Legislature in 2005.

But the 1999 legislation was intended as a first iteration that would evolve along with the needs of the state, he said. “They can focus on it now and they can fix it now,” he said. “The buck stops with the Texas Legislature and they are in a perfect position to determine the basis of the failure, to correct it and make sure it never happens again.”

Clifford Krauss reported from Houston, Manny Fernandez and Ivan Penn from Los Angeles, and Rick Rojas from Nashville. David Montgomery contributed reporting from Austin, Texas.

Texas Blackouts Point to Coast-to-Coast Crises Waiting to Happen (New York Times)

nytimes.com

Christopher Flavelle, Brad Plumer, Hiroko Tabuchi – Feb 20, 2021


Traffic at a standstill on Interstate 35 in Killeen, Texas, on Thursday. Credit: Joe Raedle/Getty Images
Continent-spanning storms triggered blackouts in Oklahoma and Mississippi, halted one-third of U.S. oil production and disrupted vaccinations in 20 states.

Even as Texas struggled to restore electricity and water over the past week, signs of the risks posed by increasingly extreme weather to America’s aging infrastructure were cropping up across the country.

The week’s continent-spanning winter storms triggered blackouts in Texas, Oklahoma, Mississippi and several other states. One-third of oil production in the nation was halted. Drinking-water systems in Ohio were knocked offline. Road networks nationwide were paralyzed and vaccination efforts in 20 states were disrupted.

The crisis carries a profound warning. As climate change brings more frequent and intense storms, floods, heat waves, wildfires and other extreme events, it is placing growing stress on the foundations of the country’s economy: Its network of roads and railways, drinking-water systems, power plants, electrical grids, industrial waste sites and even homes. Failures in just one sector can set off a domino effect of breakdowns in hard-to-predict ways.

Much of this infrastructure was built decades ago, under the expectation that the environment around it would remain stable, or at least fluctuate within predictable bounds. Now climate change is upending that assumption.

“We are colliding with a future of extremes,” said Alice Hill, who oversaw planning for climate risks on the National Security Council during the Obama administration. “We base all our choices about risk management on what’s occurred in the past, and that is no longer a safe guide.”

While it’s not always possible to say precisely how global warming influenced any one particular storm, scientists said, an overall rise in extreme weather creates sweeping new risks.

Sewer systems are overflowing more often as powerful rainstorms exceed their design capacity. Coastal homes and highways are collapsing as intensified runoff erodes cliffs. Coal ash, the toxic residue produced by coal-burning plants, is spilling into rivers as floods overwhelm barriers meant to hold it back. Homes once beyond the reach of wildfires are burning in blazes they were never designed to withstand.

A broken water main in McComb, Miss., on Thursday.
Credit: Matt Williamson/The Enterprise-Journal, via Associated Press

Problems like these often reflect an inclination of governments to spend as little money as possible, said Shalini Vajjhala, a former Obama administration official who now advises cities on meeting climate threats. She said it’s hard to persuade taxpayers to spend extra money to guard against disasters that seem unlikely.

But climate change flips that logic, making inaction far costlier. “The argument I would make is, we can’t afford not to, because we’re absorbing the costs” later, Ms. Vajjhala said, after disasters strike. “We’re spending poorly.”

The Biden administration has talked extensively about climate change, particularly the need to reduce greenhouse gas emissions and create jobs in renewable energy. But it has spent less time discussing how to manage the growing effects of climate change, facing criticism from experts for not appointing more people who focus on climate resilience.

“I am extremely concerned by the lack of emergency-management expertise reflected in Biden’s climate team,” said Samantha Montano, an assistant professor at the Massachusetts Maritime Academy who focuses on disaster policy. “There’s an urgency here that still is not being reflected.”

A White House spokesman, Vedant Patel, said in a statement, “Building resilient and sustainable infrastructure that can withstand extreme weather and a changing climate will play an integral role in creating millions of good paying, union jobs” while cutting greenhouse gas emissions.

And while President Biden has called for a major push to refurbish and upgrade the nation’s infrastructure, getting a closely divided Congress to spend hundreds of billions, if not trillions of dollars, will be a major challenge.

Heightening the cost to society, disruptions can disproportionately affect lower-income households and other vulnerable groups, including older people or those with limited English.

“All these issues are converging,” said Robert D. Bullard, a professor at Texas Southern University who studies wealth and racial disparities related to the environment. “And there’s simply no place in this country that’s not going to have to deal with climate change.”

Flooding around Edenville Township, Mich., last year swept away a bridge over the Tittabawassee River.
Credit: Matthew Hatcher/Getty Images

In September, when a sudden storm dumped a record of more than two inches of water on Washington in less than 75 minutes, the result wasn’t just widespread flooding, but also raw sewage rushing into hundreds of homes.

Washington, like many other cities in the Northeast and Midwest, relies on what’s called a combined sewer overflow system: If a downpour overwhelms storm drains along the street, they are built to overflow into the pipes that carry raw sewage. But if there’s too much pressure, sewage can be pushed backward, into people’s homes — where the forces can send it erupting from toilets and shower drains.

This is what happened in Washington. The city’s system was built in the late 1800s. Now, climate change is straining an already outdated design.

DC Water, the local utility, is spending billions of dollars so that the system can hold more sewage. “We’re sort of in uncharted territory,” said Vincent Morris, a utility spokesman.

The challenge of managing and taming the nation’s water supplies — whether in streets and homes, or in vast rivers and watersheds — is growing increasingly complex as storms intensify. Last May, rain-swollen flooding breached two dams in Central Michigan, forcing thousands of residents to flee their homes and threatening a chemical complex and toxic waste cleanup site. Experts warned it was unlikely to be the last such failure.

Many of the country’s 90,000 dams were built decades ago and were already in dire need of repairs. Now climate change poses an additional threat, bringing heavier downpours to parts of the country and raising the odds that some dams could be overwhelmed by more water than they were designed to handle. One recent study found that most of California’s biggest dams were at increased risk of failure as global warming advances.

In recent years, dam-safety officials have begun grappling with the dangers. Colorado, for instance, now requires dam builders to take into account the risk of increased atmospheric moisture driven by climate change as they plan for worst-case flooding scenarios.

But nationwide, there remains a backlog of thousands of older dams that still need to be rehabilitated or upgraded. The price tag could ultimately stretch to more than $70 billion.

“Whenever we study dam failures, we often find there was a lot of complacency beforehand,” said Bill McCormick, president of the Association of State Dam Safety Officials. But given that failures can have catastrophic consequences, “we really can’t afford to be complacent.”

Crews repaired switches on utility poles damaged by the storms in Texas.
Credit: Tamir Kalifa for The New York Times

If the Texas blackouts exposed one state’s poor planning, they also provide a warning for the nation: Climate change threatens virtually every aspect of electricity grids that aren’t always designed to handle increasingly severe weather. The vulnerabilities show up in power lines, natural-gas plants, nuclear reactors and myriad other systems.

Higher storm surges can knock out coastal power infrastructure. Deeper droughts can reduce water supplies for hydroelectric dams. Severe heat waves can reduce the efficiency of fossil-fuel generators, transmission lines and even solar panels at precisely the moment that demand soars because everyone cranks up their air-conditioners.

Climate hazards can also combine in new and unforeseen ways.

In California recently, Pacific Gas & Electric has had to shut off electricity to thousands of people during exceptionally dangerous fire seasons. The reason: Downed power lines can spark huge wildfires in dry vegetation. Then, during a record-hot August last year, several of the state’s natural gas plants malfunctioned in the heat, just as demand was spiking, contributing to blackouts.

“We have to get better at understanding these compound impacts,” said Michael Craig, an expert in energy systems at the University of Michigan who recently led a study looking at how rising summer temperatures in Texas could strain the grid in unexpected ways. “It’s an incredibly complex problem to plan for.”

Some utilities are taking notice. After Superstorm Sandy in 2012 knocked out power for 8.7 million customers, utilities in New York and New Jersey invested billions in flood walls, submersible equipment and other technology to reduce the risk of failures. Last month, New York’s Con Edison said it would incorporate climate projections into its planning.

As freezing temperatures struck Texas, a glitch at one of two reactors at a South Texas nuclear plant, which serves 2 million homes, triggered a shutdown. The cause: Sensing lines connected to the plant’s water pumps had frozen, said Victor Dricks, a spokesman for the federal Nuclear Regulatory Commission.

It’s also common for extreme heat to disrupt nuclear power. The issue is that the water used to cool reactors can become too warm to use, forcing shutdowns.

Flooding is another risk.

After a tsunami led to several meltdowns at Japan’s Fukushima Daiichi power plant in 2011, the U.S. Nuclear Regulatory Commission told the 60 or so working nuclear plants in the United States, many decades old, to evaluate their flood risk to account for climate change. Ninety percent showed at least one type of flood risk that exceeded what the plant was designed to handle.

The greatest risk came from heavy rain and snowfall exceeding the design parameters at 53 plants.

Scott Burnell, a Nuclear Regulatory Commission spokesman, said in a statement, “The NRC continues to conclude, based on the staff’s review of detailed analyses, that all U.S. nuclear power plants can appropriately deal with potential flooding events, including the effects of climate change, and remain safe.”

A section of Highway 1 along the California coastline collapsed in January amid heavy rains.
Credit: Josh Edelson/Agence France-Presse — Getty Images

The collapse of a portion of California’s Highway 1 into the Pacific Ocean after heavy rains last month was a reminder of the fragility of the nation’s roads.

Several climate-related risks appeared to have converged to heighten the danger. Rising seas and higher storm surges have intensified coastal erosion, while more extreme bouts of precipitation have increased the landslide risk.

Add to that the effects of devastating wildfires, which can damage the vegetation holding hillside soil in place, and “things that wouldn’t have slid without the wildfires, start sliding,” said Jennifer M. Jacobs, a professor of civil and environmental engineering at the University of New Hampshire. “I think we’re going to see more of that.”

The United States depends on highways, railroads and bridges as economic arteries for commerce, travel and simply getting to work. But many of the country’s most important links face mounting climate threats. More than 60,000 miles of roads and bridges in coastal floodplains are already vulnerable to extreme storms and hurricanes, government estimates show. And inland flooding could also threaten at least 2,500 bridges across the country by 2050, a federal climate report warned in 2018.

Sometimes even small changes can trigger catastrophic failures. Engineers modeling the collapse of bridges over Escambia Bay in Florida during Hurricane Ivan in 2004 found that the extra three inches of sea-level rise since the bridge was built in 1968 very likely contributed to the collapse, because of the added height of the storm surge and force of the waves.

“A lot of our infrastructure systems have a tipping point. And when you hit the tipping point, that’s when a failure occurs,” Dr. Jacobs said. “And the tipping point could be an inch.”

Crucial rail networks are at risk, too. In 2017, Amtrak consultants found that along parts of the Northeast corridor, which runs from Boston to Washington and carries 12 million people a year, flooding and storm surge could erode the track bed, disable the signals and eventually put the tracks underwater.

And there is no easy fix. Elevating the tracks would require also raising bridges, electrical wires and lots of other infrastructure, and moving them would mean buying new land in a densely packed part of the country. So the report recommended flood barriers, costing $24 million per mile, that must be moved into place whenever floods threaten.

A worker checked efforts to prevent coal ash from escaping into the Waccamaw River in South Carolina after Hurricane Florence in 2018.
Credit: Randall Hill/Reuters

A series of explosions at a flood-damaged chemical plant outside Houston after Hurricane Harvey in 2017 highlighted a danger lurking in a world beset by increasingly extreme weather.

The blasts at the plant came after flooding knocked out the site’s electrical supply, shutting down refrigeration systems that kept volatile chemicals stable. Almost two dozen people, many of them emergency workers, were treated for exposure to the toxic fumes, and some 200 nearby residents were evacuated from their homes.

More than 2,500 facilities that handle toxic chemicals lie in federal flood-prone areas across the country, about 1,400 of them in areas at the highest risk of flooding, a New York Times analysis showed in 2018.

Leaks from toxic cleanup sites, left behind by past industry, pose another threat.

Almost two-thirds of some 1,500 superfund cleanup sites across the country are in areas with an elevated risk of flooding, storm surge, wildfires or sea level rise, a government audit warned in 2019. Coal ash, a toxic substance produced by coal power plants that is often stored as sludge in special ponds, has been particularly exposed. After Hurricane Florence in 2018, for example, a dam breach at the site of a power plant in Wilmington, N.C., released the hazardous ash into a nearby river.

“We should be evaluating whether these facilities or sites actually have to be moved or re-secured,” said Lisa Evans, senior counsel at Earthjustice, an environmental law organization. Places that “may have been OK in 1990,” she said, “may be a disaster waiting to happen in 2021.”

East Austin, Texas, during a blackout on Wednesday.  
Credit: Bronte Wittpenn/Austin American-Statesman, via Associated Press

Texas’s Power Crisis Has Turned Into a Disaster That Parallels Hurricane Katrina (TruthOut)

truthout.org

Sharon Zhang, Feb. 18, 2021


Propane tanks are placed in a line as people wait for the power to turn on to fill their tanks in Houston, Texas, on February 17, 2021. Mark Felix for The Washington Post via Getty Images

As many in Texas wake up still without power on Thursday morning, millions are now also having to contend with water shutdowns, boil advisories, and empty grocery shelves as cities struggle with keeping infrastructure powered and supply chains are interrupted.

As of Wednesday, an estimated 7 million Texans were under a boil-water advisory. Since then, Austin has also issued a citywide boil-water notice because of a power loss at its biggest water treatment plant. Austin Water serves over a million customers, according to its website.

With hundreds of thousands of people still without power in the state, some reporting no water coming out of their faucets at all, and others facing burst pipes that have led to collapsed ceilings and other damage to their homes, the situation is dire for many Texans facing multiple problems at once.

Even as some residents get their power restored, the problems continue to pile up: the few grocery stores left open quickly sold out of food and supplies. As many without power watched their refrigerated food spoil, lines to get into stores wrapped around blocks and buildings, and store shelves sat completely empty with no indication of when new shipments would arrive. Food banks have had to cancel deliveries, and schools have halted meal distribution to students, the Texas Tribune reports.

People experiencing homelessness, including a disproportionate number of Black residents, have especially suffered in the record cold temperatures across the state. There have been some reports of people being found dead in the streets because of a lack of shelter.

“Businesses are shut down. Streets are empty, other than a few guys sliding around in 4x4s and fire trucks rushing to rescue people who turn their ovens on to keep warm and poison themselves with carbon monoxide,” wrote Austin resident Jeff Goodell in Rolling Stone. “Yesterday, the line at our neighborhood grocery store was three blocks long. People wandering around with handguns on their hip adds to a sense of lawlessness (Texas is an open-carry state).”

The Texas agricultural commissioner has said that farmers and ranchers are having to throw away millions of dollars worth of goods because of a lack of power. “We’re looking at a food supply chain problem like we’ve never seen before, even with COVID-19,” he told one local news affiliate.

An energy analyst likened the power crisis to the fallout of Hurricane Katrina as it’s becoming increasingly clear that the situation in Texas is a statewide disaster.

As natural gas output declined dramatically in the state, Paul Sankey, who leads energy analyst firm Sankey Research, said on Bloomberg, “This situation to me is very reminiscent of Hurricane Katrina…. We have never seen a loss [of energy supply] at this scale” in mid-winter. This is “the biggest outage in the history [of] U.S. oil and gas,” Sankey said.

Many others online echoed Sankey’s words as “Katrina” trended on Twitter, saying that the situation is similar to the hurricane disaster in that it has been downplayed by politicians but may be uncovered to be even more serious in the coming weeks.

Experts say that the power outages have partially been caused by the deregulation of the state’s electric grid. The government, some say, favored deregulatory actions like not requiring electrical equipment upgrades or proper weatherization, instead relying on free market mechanisms that ultimately contributed to the current disaster.

Former Gov. Rick Perry faced criticism on Wednesday when he said that Texans would rather face the current disaster than have to be regulated by the federal government. And he’s not the only Republican currently catching heat — many have begun calling for the resignation of Gov. Greg Abbott for a failure of leadership. On Wednesday, as millions suffered without power and under boil-water advisories, the governor went on Fox to attack clean energy, which experts say was not a major contributor to the current crisis, and the Green New Deal.

After declaring a state of emergency in the state over the weekend, the Joe Biden administration announced on Wednesday that it would be sending generators and other supplies to the state.

The freeze in Texas exposes America’s infrastructural failings (The Economist)

economist.com

Feb 17th 2021

You ain’t foolin’ nobody with the lights out

WHEN IT RAINS, it pours, and when it snows, the lights turn off. Or so it goes in Texas. After a winter storm pummelled the Lone Star State with record snowfall and the lowest temperatures in more than 30 years, millions were left without electricity and heat. On February 16th 4.5m Texan households were cut off from power, as providers were overloaded with demand and tried to shuffle access to electricity so the whole grid did not go down.

Whole skylines, including Dallas’s, went dark to conserve power. Some Texans braved the snowy roads to check into the few hotels with remaining rooms, only for the hotels’ power to go off as they arrived. Others donned skiwear and remained inside, hoping the lights and heat would come back on. Across the state, what were supposed to be “rolling” blackouts lasted for days. It is still too soon to quantify the devastation. More than 20 people have died in motor accidents, from fires lit for warmth and from carbon-monoxide poisoning from using cars for heat. The storm has also halted deliveries of covid-19 vaccines and may prevent around 1m vaccinations from happening this week. Several retail electricity providers are likely to go bankrupt, after being hit with surging wholesale power prices.

Other states, including Tennessee, were also covered in snow, but Texas got the lion’s share and ground to a halt. Texans are rightly furious that residents of America’s energy capital cannot count on reliable power. Everyone is asking why.

The short answer is that the Electric Reliability Council of Texas (ERCOT), which operates the grid, did not properly forecast the demand for energy as a result of the storm. Some say that this was nearly impossible to predict, but there were warnings of the severity of the coming weather in the preceding week, and ERCOT’s projections were notably short. Brownouts last summer had already demonstrated the grid’s lack of excess capacity, says George O’Leary of Tudor, Pickering, Holt & Co. (TPH), an energy investment bank.

Many Republican politicians were quick to blame renewable energy sources, such as wind power, for the blackouts, but that is not fair. Some wind turbines did indeed freeze, but natural gas, which accounts for around half of the state’s electricity generation, was the primary source of the shortfall. Plants broke down, as did the gas supply chain and pipelines. The cold also caused a reactor at one of the state’s two nuclear plants to go offline. Transmission lines may have also iced up, says Wade Schauer of Wood Mackenzie, an energy-research firm. In short, Texas experienced a perfect storm.

Some of the blame falls on the unique design of the electricity market in Texas. Of America’s 48 contiguous states, it is the only one with its own stand-alone electricity grid—the Texas Interconnection. This means that when power generators fail, the state cannot import electricity from outside its borders.

The state’s deregulated power market is also fiercely competitive. ERCOT oversees the grid, while power generators produce electricity for the wholesale market. Some 300 retail electricity providers buy that fuel and then compete for consumers. Because such cold weather is rare, energy companies do not invest in “winterising” their equipment, as this would raise their prices for consumers. Perhaps most important, the state does not have a “capacity market”, which would ensure that there was extra power available for surging demand. This acts as a sort of insurance policy so the lights will not go out, but it also means customers pay higher bills.

For years the benefits of Texas’s deregulated market structure were clear. At 8.6 cents per kilowatt hour, the state’s average retail price for electricity is around one-fifth lower than the national average and about half the cost of California’s. In 1999 the state set targets for renewables, and today it accounts for around 30% of America’s wind energy.

This disaster is prompting people to question whether Texas’s system is as resilient and well-designed as people previously believed. Greg Abbott, the governor, has called for an investigation into ERCOT. This storm “has exposed some serious weaknesses in our free-market approach in Texas”, says Luke Metzger of Environment Texas, a non-profit, who had been without power for 54 hours when The Economist went to press.

Wholly redesigning the power grid in Texas seems unlikely. After the snow melts, the state will need to tackle two more straightforward questions. The first is whether it needs to increase reserve capacity. “If we impose a capacity market here and a bunch of new cap-ex is required to winterise equipment, who bears that cost? Ultimately it’s the customer,” says Bobby Tudor, chairman of TPH. The second is how Texas can ensure the reliability of equipment in extreme weather conditions. After a polar vortex in 2014 hit the east coast, PJM, a regional transmission organisation, started making higher payments based on reliability of service, says Michael Weinstein of Credit Suisse, a bank. In Texas there is no penalty for systems going down, except for public complaints and politicians’ finger-pointing.

Texas is hardly the only state to struggle with blackouts. California, which has a more tightly regulated power market, is regularly plunged into darkness during periods of high heat, winds and wildfires. Unlike Texas, much of northern California is dependent on a single utility, PG&E. The company has been repeatedly sued for dismal, dangerous management. But, as in Texas, critics have blamed intermittent renewable power for blackouts. In truth, California’s blackouts share many of the same causes as those in Texas: extreme weather, power generators that failed unexpectedly, poor planning by state regulators and an inability (in California, temporary) to import power from elsewhere. In California’s blackouts last year, solar output naturally declined in the evening. But gas plants also went offline and weak rainfall lowered the output of hydroelectric dams.

In California, as in Texas, it would help to have additional power generation, energy storage to meet peak demand and more resilient infrastructure, such as buried power lines and more long-distance, high-voltage transmission. Weather events that once might have been dismissed as unusual are becoming more common. Without more investment in electricity grids, blackouts will be, too.

A Glimpse of America’s Future: Climate Change Means Trouble for Power Grids (New York Times)

nytimes.com

Brad Plumer, Feb. 17, 2021


Systems are designed to handle spikes in demand, but the wild and unpredictable weather linked to global warming will very likely push grids beyond their limits.
A street in Austin, Texas, without power on Monday evening.
Credit: Tamir Kalifa for The New York Times

Published Feb. 16, 2021; updated Feb. 17, 2021, 6:59 a.m. ET

Huge winter storms plunged large parts of the central and southern United States into an energy crisis this week, with frigid blasts of Arctic weather crippling electric grids and leaving millions of Americans without power amid dangerously cold temperatures.

The grid failures were most severe in Texas, where more than four million people woke up Tuesday morning to rolling blackouts. Separate regional grids in the Southwest and Midwest also faced serious strain. As of Tuesday afternoon, at least 23 people nationwide had died in the storm or its aftermath.

Analysts have begun to identify key factors behind the grid failures in Texas. Record-breaking cold weather spurred residents to crank up their electric heaters and pushed power demand beyond the worst-case scenarios that grid operators had planned for. At the same time, a large fraction of the state’s gas-fired power plants were knocked offline amid icy conditions, with some plants suffering fuel shortages as natural gas demand spiked. Many of Texas’ wind turbines also froze and stopped working.

The crisis sounded an alarm for power systems throughout the country. Electric grids can be engineered to handle a wide range of severe conditions — as long as grid operators can reliably predict the dangers ahead. But as climate change accelerates, many electric grids will face extreme weather events that go far beyond the historical conditions those systems were designed for, putting them at risk of catastrophic failure.

While scientists are still analyzing what role human-caused climate change may have played in this week’s winter storms, it is clear that global warming poses a barrage of additional threats to power systems nationwide, including fiercer heat waves and water shortages.

Measures that could help make electric grids more robust — such as fortifying power plants against extreme weather, or installing more backup power sources — could prove expensive. But as Texas shows, blackouts can be extremely costly, too. And, experts said, unless grid planners start planning for increasingly wild and unpredictable climate conditions, grid failures will happen again and again.

“It’s essentially a question of how much insurance you want to buy,” said Jesse Jenkins, an energy systems engineer at Princeton University. “What makes this problem even harder is that we’re now in a world where, especially with climate change, the past is no longer a good guide to the future. We have to get much better at preparing for the unexpected.”

Texas’ main electric grid, which largely operates independently from the rest of the country, has been built with the state’s most common weather extremes in mind: soaring summer temperatures that cause millions of Texans to turn up their air-conditioners all at once.

While freezing weather is rarer, grid operators in Texas have also long known that electricity demand can spike in the winter, particularly after damaging cold snaps in 2011 and 2018. But this week’s winter storms, which buried the state in snow and ice, and led to record-cold temperatures, surpassed all expectations — and pushed the grid to its breaking point.

Residents of East Dallas trying to warm up on Monday after their family home lost power.
Credit: Juan Figueroa/The Dallas Morning News, via Associated Press

Texas’ grid operators had anticipated that, in the worst case, the state would use 67 gigawatts of electricity during the winter peak. But by Sunday evening, power demand had surged past that level. As temperatures dropped, many homes were relying on older, inefficient electric heaters that consume more power.

The problems compounded from there, with frigid weather on Monday disabling power plants with capacity totaling more than 30 gigawatts. The vast majority of those failures occurred at thermal power plants, like natural gas generators, as plummeting temperatures paralyzed plant equipment and soaring demand for natural gas left some plants struggling to obtain sufficient fuel. A number of the state’s power plants were also offline for scheduled maintenance in preparation for the summer peak.

The state’s fleet of wind farms also lost up to 4.5 gigawatts of capacity at times, as many turbines stopped working in cold and icy conditions, though this was a smaller part of the problem.
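
Putting those figures together gives a rough sense of the gap. The sketch below (in Python) uses only the numbers quoted above; the exact peak demand reached is not stated in the article, so the value used for it here is a hypothetical placeholder.

# Rough arithmetic on the figures quoted above, all in gigawatts (GW).
# PEAK_DEMAND_GW is a hypothetical placeholder: the article says only that
# demand "surged past" the 67 GW worst-case planning scenario.
WINTER_WORST_CASE_GW = 67.0   # ERCOT's worst-case winter planning figure
THERMAL_OFFLINE_GW = 30.0     # "more than 30 gigawatts" of thermal capacity disabled
WIND_OFFLINE_GW = 4.5         # "up to 4.5 gigawatts" of wind capacity lost at times
PEAK_DEMAND_GW = 69.0         # hypothetical value somewhere above 67 GW

capacity_lost = THERMAL_OFFLINE_GW + WIND_OFFLINE_GW
share_of_plan = capacity_lost / WINTER_WORST_CASE_GW
print(f"Capacity knocked offline: ~{capacity_lost:.1f} GW "
      f"(~{share_of_plan:.0%} of the worst-case planning figure)")
print(f"Demand assumed here: ~{PEAK_DEMAND_GW:.0f} GW, above the 67 GW scenario,")
print("so shedding load (rolling blackouts) was the only way to balance the grid.")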

In essence, experts said, an electric grid optimized to deliver huge quantities of power on the hottest days of the year was caught unprepared when temperatures plummeted.

While analysts are still working to untangle all of the reasons behind Texas’ grid failures, some have also wondered whether the unique way the state manages its largely deregulated electricity system may have played a role. In the mid-1990s, for instance, Texas decided against paying energy producers to hold a fixed number of backup power plants in reserve, instead letting market forces dictate what happens on the grid.

On Tuesday, Gov. Greg Abbott called for an emergency reform of the Electric Reliability Council of Texas, the nonprofit corporation that oversees the flow of power in the state, saying its performance had been “anything but reliable” over the previous 48 hours.

In theory, experts said, there are technical solutions that can avert such problems.

Wind turbines can be equipped with heaters and other devices so that they can operate in icy conditions — as is often done in the upper Midwest, where cold weather is more common. Gas plants can be built to store oil on-site and switch over to burning the fuel if needed, as is often done in the Northeast, where natural gas shortages are common. Grid regulators can design markets that pay extra to keep a larger fleet of backup power plants in reserve in case of emergencies, as is done in the Mid-Atlantic.
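
The reserve idea behind that last option can be illustrated with a short sketch (Python; the reserve-margin percentages are hypothetical, not taken from any grid operator).

# Minimal sketch of the reserve-margin logic behind paying to keep backup
# plants available. The margin percentages are hypothetical illustrations.
def required_capacity(expected_peak_gw: float, reserve_margin: float) -> float:
    """Capacity (GW) needed to cover an expected peak plus a safety margin."""
    return expected_peak_gw * (1 + reserve_margin)

expected_peak = 67.0  # Texas' worst-case winter planning figure, in GW
for margin in (0.00, 0.10, 0.15):
    need = required_capacity(expected_peak, margin)
    print(f"reserve margin {margin:.0%}: keep ~{need:.1f} GW available")
# Paying generators to hold that headroom is what raises bills in normal
# years but cushions the system in extreme ones.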

But these solutions all cost money, and grid operators are often wary of forcing consumers to pay extra for safeguards.

“Building in resilience often comes at a cost, and there’s a risk of both underpaying but also of overpaying,” said Daniel Cohan, an associate professor of civil and environmental engineering at Rice University. “It’s a difficult balancing act.”

In the months ahead, as Texas grid operators and policymakers investigate this week’s blackouts, they will likely explore how the grid might be bolstered to handle extremely cold weather. Some possible ideas include building more connections between Texas and other states to balance electricity supplies, a move the state has long resisted; encouraging homeowners to install battery backup systems; or keeping additional power plants in reserve.

The search for answers will be complicated by climate change. Over all, the state is getting warmer as global temperatures rise, and cold-weather extremes are, on average, becoming less common over time.

But some climate scientists have also suggested that global warming could, paradoxically, bring more unusually fierce winter storms. Some research indicates that Arctic warming is weakening the jet stream, the high-level air current that circles the northern latitudes and usually holds back the frigid polar vortex. This can allow cold air to periodically escape to the South, resulting in episodes of bitter cold in places that rarely get nipped by frost.

Credit: Jacob Ford/Odessa American, via Associated Press

But this remains an active area of debate among climate scientists, with some experts less certain that polar vortex disruptions are becoming more frequent, making it even trickier for electricity planners to anticipate the dangers ahead.

All over the country, utilities and grid operators are confronting similar questions, as climate change threatens to intensify heat waves, floods, water shortages and other calamities, all of which could create novel risks for the nation’s electricity systems. Adapting to those risks could carry a hefty price tag: One recent study found that the Southeast alone may need 35 percent more electric capacity by 2050 simply to deal with the known hazards of climate change.

And the task of building resilience is becoming increasingly urgent. Many policymakers are promoting electric cars and electric heating as a way of curbing greenhouse gas emissions. But as more of the nation’s economy depends on reliable flows of electricity, the cost of blackouts will become ever more dire.

“This is going to be a significant challenge,” said Emily Grubert, an infrastructure expert at Georgia Tech. “We need to decarbonize our power systems so that climate change doesn’t keep getting worse, but we also need to adapt to changing conditions at the same time. And the latter alone is going to be very costly. We can already see that the systems we have today aren’t handling this very well.”

John Schwartz, Dave Montgomery and Ivan Penn contributed reporting.

Climate crisis: world is at its hottest for at least 12,000 years – study (The Guardian)

theguardian.com

Damian Carrington, Environment editor @dpcarrington

Wed 27 Jan 2021 16.00 GMT

The world’s continuously warming climate is revealed also in contemporary ice melt at glaciers, such as this one in the Kenai mountains, Alaska, seen in September 2019. Photograph: Joe Raedle/Getty Images

The planet is hotter now than it has been for at least 12,000 years, a period spanning the entire development of human civilisation, according to research.

Analysis of ocean surface temperatures shows human-driven climate change has put the world in “uncharted territory”, the scientists say. The planet may even be at its warmest for 125,000 years, although the data that far back is less certain.

The research, published in the journal Nature, reached these conclusions by solving a longstanding puzzle known as the “Holocene temperature conundrum”. Climate models have indicated continuous warming since the last ice age ended 12,000 years ago and the Holocene period began. But temperature estimates derived from fossil shells showed a peak of warming 6,000 years ago and then a cooling, until the industrial revolution sent carbon emissions soaring.

This conflict undermined confidence in the climate models and the shell data. But it was found that the shell data reflected only hotter summers and missed colder winters, and so was giving misleadingly high annual temperatures.

“We demonstrate that global average annual temperature has been rising over the last 12,000 years, contrary to previous results,” said Samantha Bova, at Rutgers University–New Brunswick in the US, who led the research. “This means that the modern, human-caused global warming period is accelerating a long-term increase in global temperatures, making today completely uncharted territory. It changes the baseline and emphasises just how critical it is to take our situation seriously.”

The world may be hotter now than any time since about 125,000 years ago, which was the last warm period between ice ages. However, scientists cannot be certain as there is less data relating to that time.

One study, published in 2017, suggested that global temperatures were last as high as today 115,000 years ago, but that was based on less data.

The new research examined temperature measurements derived from the chemistry of tiny shells and algal compounds found in cores of ocean sediments, and solved the conundrum by taking account of two factors.

First, the shells and organic materials had been assumed to represent the entire year but in fact were most likely to have formed during summer when the organisms bloomed. Second, there are well-known predictable natural cycles in the heating of the Earth caused by eccentricities in the orbit of the planet. Changes in these cycles can lead to summers becoming hotter and winters colder while average annual temperatures change only a little.

Combining these insights showed that the apparent cooling after the warm peak 6,000 years ago, revealed by shell data, was misleading. The shells were in fact only recording a decline in summer temperatures, but the average annual temperatures were still rising slowly, as indicated by the models.
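
A small synthetic example (Python, with invented numbers rather than the study’s data) shows how this works: an annual mean that rises slowly can coexist with a summer-only record that peaks mid-Holocene and then declines.

import numpy as np

# Synthetic illustration of the seasonal-bias effect described above.
# All numbers are invented for illustration; they are not the study's data.
kyr = np.arange(0, 13)                      # thousands of years since the Holocene began
annual_mean = 13.0 + 0.05 * kyr             # annual mean rises slowly and steadily (deg C)
seasonal_amp = 3.0 - 0.05 * (kyr - 6) ** 2  # orbital cycles: extra summer warmth peaks ~6,000 years in

summer_proxy = annual_mean + seasonal_amp   # what a summer-blooming organism would record
print("kyr  annual_mean  summer_proxy")
for t, am, sp in zip(kyr, annual_mean, summer_proxy):
    print(f"{int(t):3d}  {am:11.2f}  {sp:12.2f}")
# The summer-only record peaks mid-Holocene and then declines, even though
# the annual mean keeps rising -- the apparent cooling seen in the shell data.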

“Now they actually match incredibly well and it gives us a lot of confidence that our climate models are doing a really good job,” said Bova.

The study looked only at ocean temperature records, but Bova said: “The temperature of the sea surface has a really controlling impact on the climate of the Earth. If we know that, it is the best indicator of what global climate is doing.”

She led a research voyage off the coast of Chile in 2020 to take more ocean sediment cores and add to the available data.

Jennifer Hertzberg, of Texas A&M University in the US, said: “By solving a conundrum that has puzzled climate scientists for years, Bova and colleagues’ study is a major step forward. Understanding past climate change is crucial for putting modern global warming in context.”

Lijing Cheng, at the International Centre for Climate and Environment Sciences in Beijing, China, recently led a study that showed that in 2020 the world’s oceans reached their hottest level yet in instrumental records dating back to the 1940s. More than 90% of global heating is taken up by the seas.

Cheng said the new research was useful and intriguing. It provided a method to correct temperature data from shells and could also enable scientists to work out how much heat the ocean absorbed before the industrial revolution, a factor little understood.

The level of carbon dioxide today is at its highest for about 4m years and is rising at the fastest rate for 66m years. Further rises in temperature and sea level are inevitable until greenhouse gas emissions are cut to net zero.

Calculations show it will be impossible to control a superintelligent Artificial Intelligence (Engenharia é:)

engenhariae.com.br

Ademilson Ramos, January 23, 2021


Photo by Alex Knight on Unsplash

The idea of artificial intelligence overthrowing humanity has been discussed for many decades, and scientists have just delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost definitely not.

The problem is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as ‘cause no harm to humans’ cannot be set if we do not understand the kinds of scenarios an AI will come up with, the researchers suggest. Once a computer system is working at a level above the scope of our programmers, we can no longer set limits.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.

“This is because a superintelligence is multi-faceted and therefore potentially capable of mobilizing a diversity of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so that it halts), or simply loop forever trying to find one.

As Turing proved through some clever mathematics, while we can know this for some specific programs, it is logically impossible to find a way that allows us to know it for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could store every possible computer program in its memory at once.

Any program written to stop the AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not. It is mathematically impossible for us to be absolutely sure either way, which means it cannot be contained.
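
The halting-problem step of the argument can be sketched with the classic diagonal construction (Python here purely as illustration; the halts function is hypothetical, which is exactly the point of the proof).

# Classic sketch of why a perfect halting oracle cannot exist.
# `halts` is hypothetical: the diagonal argument shows that no real,
# always-correct implementation of it is possible.
def halts(program, argument) -> bool:
    """Pretend oracle: True if program(argument) eventually stops."""
    raise NotImplementedError("Turing (1936): no such total oracle exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "it halts"
            pass
    return "halted"   # halt if the oracle says "it loops"

# Feeding `paradox` to itself makes any answer from `halts` wrong, so a
# guaranteed-correct containment check built on such an oracle cannot exist.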

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument is that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the directions we are taking.

“A superintelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research has been published in the Journal of Artificial Intelligence Research.

Developing Algorithms That Might One Day Be Used Against You (Gizmodo)

gizmodo.com

Ryan F. Mandelbaum, Jan 24, 2021


Brian Nord is an astrophysicist and machine learning researcher. Photo: Mark Lopez/Argonne National Laboratory

Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.

Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.

This interview has been edited and condensed for clarity.

Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.

Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.

On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.
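
As a toy illustration of that last point (a synthetic-data sketch, assuming scikit-learn is installed; this is not code from the interview), a model trained mostly on one group can score noticeably worse on an under-represented group whose data looks different.

# Toy illustration of training-set bias with synthetic data.
# This is a sketch, not code from the interview; it assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two features; the true decision boundary sits at a group-specific shift."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: accuracy drops for group B.
for name, shift in [("A (well represented)", 0.0), ("B (under-represented)", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"accuracy on group {name}: {model.score(Xt, yt):.2f}")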

The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.

Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw it quite clearly that these applications that could harm me were already out there, already being used, and were already becoming embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.

Gizmodo: How did you react? What did you do?

BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, who is focused on building communities of Black researchers in the AI field, but who also has the unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.

It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.

Gizmodo: How is it going so far?

BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.

We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.

Gizmodo: Why haven’t we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try to speed up the selection of candidates for Ph.D. programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.

Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.

Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.

Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.

I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.

Gizmodo: Anything else?

BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.

This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?

Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds

Pope Francis asks for prayers for robots and AI (Tecmundo)

11/11/2020 at 18:30, 1 min read

Image: Pope Francis asks for prayers for robots and AI

Jorge Marin

Pope Francis has asked the faithful around the world to pray, during the month of November, that progress in robotics and artificial intelligence (AI) may always serve humanity.

The message is part of a series of prayer intentions that the pontiff announces annually and shares each month on YouTube to help Catholics “deepen their daily prayer” by concentrating on specific topics. In September, the pope asked for prayers for the “sharing of the planet’s resources”; in August, for the “maritime world”; and now it is the turn of robots and AI.

In his message, Pope Francis asked for special attention to be paid to AI, which, he says, is “at the centre of the historic change we are experiencing”. And it is not only a matter of the benefits that robotics can bring to the world.

Technological progress and algorithms

Francis says that technological progress is not always a sign of well-being for humanity, because if that progress contributes to increasing inequality, it cannot be considered true progress. “Future advances should be oriented towards respecting the dignity of the person,” the pope warns.

Concern that technology could deepen existing social divisions led the Vatican to sign, earlier this year, together with Microsoft and IBM, the “Rome Call for AI Ethics”, a document that sets out principles to guide the deployment of AI: transparency, inclusion, impartiality and reliability.

Even people who are not religious can recognize that, when it comes to deploying algorithms, the pope’s concern makes perfect sense.

How will AI shape our lives post-Covid? (BBC)


BBC, 09 Nov 2020

Audrey Azoulay: Director-General, Unesco

Covid-19 is a test like no other. Never before have the lives of so many people around the world been affected at this scale or speed.

Over the past six months, thousands of AI innovations have sprung up in response to the challenges of life under lockdown. Governments are mobilising machine-learning in many ways, from contact-tracing apps to telemedicine and remote learning.

However, as the digital transformation accelerates exponentially, it is highlighting the challenges of AI. Ethical dilemmas are already a reality – including privacy risks and discriminatory bias.

It is up to us to decide what we want AI to look like: there is a legislative vacuum that needs to be filled now. Principles such as proportionality, inclusivity, human oversight and transparency can create a framework allowing us to anticipate these issues.

This is why Unesco is working to build consensus among 193 countries to lay the ethical foundations of AI. Building on these principles, countries will be able to develop national policies that ensure AI is designed, developed and deployed in compliance with fundamental human values.

As we face new, previously unimaginable challenges – like the pandemic – we must ensure that the tools we are developing work for us, and not against us.

Solar geoengineering should not be ruled out, scientists say (TecMundo)

03/11/2020 at 19:00, 3 min read

Image: Solar geoengineering should not be ruled out, scientists say

Reinaldo Zaruvni

Once viewed with suspicion by the scientific community, methods of artificial intervention in the environment aimed at curbing the devastating effects of global warming are now being considered as resources to be used as a last resort (since initiatives to reduce gas emissions depend directly on collective action and take decades to have any kind of beneficial effect). We may not have that much time, according to some researchers in the field, who have been attracting investment and a great deal of attention.

Part of a field also referred to as solar geoengineering, most of the methods rely on the controlled release of particles into the atmosphere, which block part of the energy received by our planet and redirect it back into space, creating a kind of cooling similar to that produced by volcanic eruptions.

Even though they do nothing about pollution, for example, scientists consider that, in the face of ever more aggressive storms, fire tornadoes, floods and other natural disasters, such measures would be worthwhile while more effective solutions are being developed.

Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia Law School and editor of a book on the technology and its legal implications, summed up the situation in an interview with The New York Times: “We are facing an existential threat. So we need to look at all the options.”

“I like to compare geoengineering to chemotherapy for the planet: if everything else is failing, all that is left is to try it,” he argued.

Natural disasters caused by global warming make such interventions urgent, researchers argue. Source: Unsplash

Double standards

Among the most prominent efforts is that of a non-governmental organization called SilverLining, which has awarded US$ 3 million to several universities and other institutions to pursue answers to practical questions. One example is finding the ideal altitude for applying aerosols and working out how to inject the most appropriate amount, while checking the effects on the world’s food production chain.

Chris Sacca, co-founder of Lowercarbon Capital, an investment group that is one of SilverLining’s funders, declared in alarmed tones: “Decarbonization is necessary, but it will take 20 years or more to happen. If we do not explore climate interventions such as solar reflection now, we will be condemning countless lives, species and ecosystems to the heat.”

Another recipient of substantial sums was the National Oceanic and Atmospheric Administration, which received US$ 4 million from the US Congress precisely to develop technologies of this kind, as well as to monitor the secret use of such solutions by other countries.

Douglas MacMartin, a researcher in mechanical and aerospace engineering at Cornell University, said that “humanity’s power to cool things down is certain; what is not clear is what comes next”.

If, on the one hand, the planet can be cooled artificially, on the other, no one knows what will follow. Source: Unsplash

There is a way

To clarify the possible consequences of interventions of this magnitude, MacMartin will develop models of the specific climate effects of injecting aerosols into the atmosphere above different parts of the globe and at different altitudes. “Depending on where you put [the substance], you will have different effects on the monsoons in Asia and on Arctic sea ice,” he noted.

The National Center for Atmospheric Research in Boulder, Colorado, also funded by SilverLining, believes it has the ideal system for this, considered the most sophisticated in the world. It will run hundreds of simulations, and specialists will look for what they call the sweet spot, at which the amount of artificial cooling that can reduce extreme weather events does not cause broader changes in regional precipitation patterns or similar impacts.

“Is there a way, at least in our model world, to see whether we can achieve one without triggering the other too much?” asked Jean-François Lamarque, director of the institution’s Climate and Global Dynamics laboratory. There is no answer to that question yet, but sustainable approaches are being examined by Australian researchers, who would spray salt water to make clouds more reflective, and early tests have shown promising results.

If so, perhaps the losses of coral reefs we have been witnessing will one day have an end date. As for the rest, well, only time will tell.

Rafael Muñoz: Brazil pays a high price for the lack of an integrated disaster risk management policy (Folha de S.Paulo)

www1.folha.uol.com.br – October 20, 2020

It is urgent that these issues be integrated into broader socioeconomic development policies
Flood in Itaoca, 2014. Source: Agência Brasil

It was in January 2011 that the myth that there are no disasters in Brazil fell apart. Torrential rains in the mountainous region of Rio de Janeiro state triggered landslides and floods, leaving a trail of more than a thousand dead. The event showed the need to prioritize the disaster risk agenda, which had long been treated as secondary because of a lack of knowledge about the real impact of extreme natural events on Brazilian society and the economy.

In this context, the World Bank, in partnership with Sedec (the National Secretariat for Protection and Civil Defense) and UFSC (the Federal University of Santa Catarina), conducted a detailed analysis of past disaster events that revealed the true scale of the problem: between 1995 and 2019, Brazil lost on average about R$ 1.1 billion per month to disasters; total losses for the period are estimated at around R$ 330 billion.

Of that total, 20% are direct losses (damage), the large majority of them (59%) in the infrastructure sector, while housing accounts for 37%. Indirect losses correspond to roughly 80% of the total impact of disasters in the country, most markedly in agriculture (R$ 149.8 billion) and livestock (R$ 55.7 billion) in the private sector, and in water and transport (R$ 31.9 billion) in the public sector. The human toll is also significant: 4,065 deaths, 7.4 million people temporarily or permanently out of their homes because of damage, and more than 276 million people affected.
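
A quick arithmetic check (simple Python, no new data) shows that the monthly and total figures quoted above are consistent.

# Consistency check on the figures quoted above.
months = (2019 - 1995 + 1) * 12     # 1995-2019 inclusive = 300 months
avg_monthly_loss = 1.1e9            # about R$ 1.1 billion per month
total = months * avg_monthly_loss
print(f"{months} months x R$ 1.1 billion/month = R$ {total / 1e9:.0f} billion")  # ~R$ 330 billion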

Beyond the human and economic losses, public policies designed to promote socioeconomic progress can also have their effectiveness reduced, since disaster events demonstrably affect indicators of health, purchasing power, access to employment and income, and education, among others. Vital investments in critical infrastructure, such as transport and housing, are also massively affected by disasters.

Against this backdrop, the inevitable question arises: why does Brazil still not have an integrated disaster risk management policy and a National Protection and Civil Defense Plan? To secure the much-needed progress, the current leadership of Sedec has made it a priority to regulate Law 12,608/2012, which establishes the National Protection and Civil Defense Policy, and to formulate the National Protection and Civil Defense Plan.

Such measures can establish a legal framework and a set of guidelines that foster structural improvements in public policy. In the housing sector, for example, protocols can be defined for incorporating risk-mapping products into decisions on new investments or into disaster risk mitigation for projects already delivered. In fiscal planning, budgets more consistent with the economic impact of disasters can be set each year in order to better protect the national and subnational economy. Finally, investment in critical infrastructure (for example transport, water and sanitation, and power generation and distribution), as well as its maintenance from the standpoint of exposure and vulnerability to natural hazards, can ensure the continuity of operations and business in extreme situations, allowing essential services to keep being provided to the population and reducing the indirect impact on the economy.

Given the increasing frequency and socioeconomic impact of extreme natural events, there is consensus among specialists that rapid urbanization has created conditions more conducive to disasters, owing to inadequate land occupation in areas subject to natural hazards and without the civil works needed to manage natural processes. This process has left vulnerable communities highly exposed across the country, and as we analyze the impact of the Covid-19 pandemic on our economy and communities, we cannot ignore how disasters have long (negatively) influenced public policy in our country.

Fortunately, advances in data and evidence collection now allow disaster events and their impacts to be brought into the light of technical knowledge and placed in the hands of legislators, public administrators and decision makers, through risk maps, weather and climate forecasts, flood and landslide models, as well as discussion forums and financing projects.

In this context, there is a clear need to adapt the disaster risk management models that have proved successful globally to Brazil’s characteristics. In general, the extent of the national territory, the federal model of public administration, and a history of smaller-scale but cumulatively frequent disaster events, among other factors, imply the need to define the roles of the federal government and of state and municipal governments in this agenda.

It is therefore urgent that disaster risk management be integrated into broader socioeconomic development policies, such as housing programs, urban planning and expansion, investment in critical infrastructure, agricultural incentives and income-transfer programs, among others.

In addition, there is a real opportunity to rethink recovery processes from a Build Back Better perspective, so as to ensure that past mistakes are not repeated, creating or maintaining existing levels of disaster risk.

This column was written in collaboration with Frederico Pedroso, a disaster risk management specialist at the World Bank; Joaquin Toro, a lead disaster risk management specialist at the World Bank; and Rafael Schadeck, a civil engineer and disaster risk management consultant at the World Bank.

Science and Policy Collide During the Pandemic (The Scientist)

ABOVE: MODIFIED FROM © istock.com, VASELENA
COVID-19 has laid bare some of the pitfalls of the relationship between scientific experts and policymakers—but some researchers say there are ways to make it better.

Diana Kwon

Sep 1, 2020

Science has taken center stage during the COVID-19 pandemic. Early on, as SARS-CoV-2 started spreading around the globe, many researchers pivoted to focus on studying the virus. At the same time, some scientists and science advisors—experts responsible for providing scientific information to policymakers—gained celebrity status as they calmly and cautiously updated the public on the rapidly evolving situation and lent their expertise to help governments make critical decisions, such as those relating to lockdowns and other transmission-slowing measures.

“Academia, in the case of COVID, has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible,” says Chris Tyler, the director of research and policy in University College London’s Department of Science, Technology, Engineering and Public Policy (STEaPP). 

But the pace at which COVID-related science has been conducted and disseminated during the pandemic has also revealed the challenges associated with translating fast-accumulating evidence for an audience not well versed in the process of science. As research findings are speedily posted to preprint servers, preliminary results have made headlines in major news outlets, sometimes without the appropriate dose of scrutiny.

Some politicians, such as Brazil’s President Jair Bolsonaro, have been quick to jump on premature findings, publicly touting the benefits of treatments such as hydroxychloroquine with minimal or no supporting evidence. Others have pointed to the flip-flopping of the current state of knowledge as a sign of scientists’ untrustworthiness or incompetence—as was seen, for example, in the backlash against Anthony Fauci, one of the US government’s top science advisors. 

Some comments from world leaders have been even more concerning. “For me, the most shocking thing I saw,” Tyler says, “was Donald Trump suggesting the injection of disinfectant as a way of treating COVID—that was an eye-popping, mind-boggling moment.” 

Still, Tyler notes that there are many countries in which the relationship between the scientific community and policymakers during the course of the pandemic has been “pretty impressive.” As an example, he points to Germany, where the government has both enlisted and heeded the advice of scientists across a range of disciplines, including epidemiology, virology, economics, public health, and the humanities.

Researchers will likely be assessing the response to the pandemic for years to come. In the meantime, for scientists interested in getting involved in policymaking, there are lessons to be learned, as well some preliminary insights from the pandemic that may help to improve interactions between scientists and policymakers and thereby pave the way to better evidence-based policy. 

Cultural divisions between scientists and policymakers

Even in the absence of a public-health emergency, there are several obstacles to the smooth implementation of scientific advice into policy. One is simply that scientists and policymakers are generally beholden to different incentive systems. “Classically, a scientist wants to understand something for the sake of understanding, because they have a passion toward that topic—so discovery is driven by the value of discovery,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Whereas the policymaker has a much more utilitarian approach. . . . They have to come up with interventions that produce the best outcomes for the most people.”

Scientists and policymakers are operating on considerably different timescales, too. “Normally, research programs take months and years, whereas policy decisions take weeks and months, sometimes days,” Tyler says. “This discrepancy makes it much more difficult to get scientifically generated knowledge into the policymaking process.” Tyler adds that the two groups deal with uncertainty in very different ways: academics are comfortable with it, as measuring uncertainty is part of the scientific process, whereas policymakers tend to view it as something that can cloud what a “right” answer might be. 

This cultural mismatch has been particularly pronounced during the COVID-19 pandemic. Even as scientists work at breakneck speeds, many crucial questions about COVID-19—such as how long immunity to the virus lasts, and how much of a role children play in the spread of infection—remain unresolved, and policy decisions have had to be addressed with limited evidence, with advice changing as new research emerges. 

“We have seen the messy side of science, [that] not all studies are equally well-done and that they build over time to contribute to the weight of knowledge,” says Karen Akerlof, a professor of environmental science and policy at George Mason University. “The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions.” 


Widespread mask use, for example, was initially discouraged by many politicians and public health officials due to concerns about a shortage of supplies for healthcare workers and limited data on whether mask use by the general public would help reduce the spread of the virus. At the time, there were few mask-wearing laws outside of East Asia, where such practices were commonplace long before the COVID-19 pandemic began.  

Gradually, however, as studies began to provide evidence to support the use of face coverings as a means of stemming transmission, scientists and public health officials started to recommend their use. This shift led local, state, and federal officials around the world to implement mandatory mask-wearing rules in certain public spaces. Some politicians, however, used this about-face in advice as a reason to criticize health experts.  

“We’re dealing with evidence that is changing very rapidly,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “I think there’s a risk of people perceiving that rapid evolution as science [being] a bad process, which is worrisome.” On the other hand, the spotlight the pandemic has put on scientists provides opportunities to educate the general public and policymakers about the scientific process, Azad adds. It’s important to help them understand that “it’s good that things are changing, because it means we’re paying attention to the new evidence as it comes out.”

Bringing science and policy closer together

Despite these challenges, science and policy experts say that there are both short- and long-term ways to improve the relationship between the two communities and to help policymakers arrive at decisions that are more evidence-based.

Better tools, for one, could help close the gap. Earlier this year, Ruggeri brought together a group of people from a range of disciplines, including medicine, engineering, economics, and policy, to develop the Theoretical, Empirical, Applicable, Replicable, Impact (THEARI) rating system, a five-tiered framework for evaluating the robustness of scientific evidence in the context of policy decisions. The ratings range from “theoretical” (the lowest level, where a scientifically viable idea has been proposed but not tested) to “impact” (the highest level, in which a concept has been successfully tested, replicated, applied, and validated in the real world).

The team developed THEARI partly to establish a “common language” across scientific disciplines, which Ruggeri says would be particularly useful to policymakers evaluating evidence from a field they may know little about. Ruggeri hopes to see the THEARI framework—or something like it—adopted by policymakers and policy advisors, and even by journals and preprint servers. “I don’t necessarily think [THEARI] will be used right away,” he says. “It’d be great if it was, but we . . . [developed] it as kind of a starting point.” 
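
For readers who think in code, the five tiers could be represented along these lines (the tier names come from the article; the data structure, the descriptions of the middle tiers, and the evidence-bar helper are hypothetical, not part of the published framework).

# Minimal sketch of the five THEARI tiers named above. The enum, the
# middle-tier descriptions and the helper are hypothetical illustrations,
# not part of the published framework.
from enum import IntEnum

class Theari(IntEnum):
    THEORETICAL = 1   # lowest tier: a scientifically viable idea, proposed but not tested
    EMPIRICAL = 2     # middle tiers: descriptions inferred from the names only
    APPLICABLE = 3
    REPLICABLE = 4
    IMPACT = 5        # highest tier: tested, replicated, applied and validated in the real world

def meets_bar(evidence: Theari, minimum: Theari = Theari.REPLICABLE) -> bool:
    """Would a (hypothetical) evidence bar for a policy decision be met?"""
    return evidence >= minimum

print(meets_bar(Theari.EMPIRICAL))  # False
print(meets_bar(Theari.IMPACT))     # True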

Other approaches to improve the communication between scientists and policymakers may require more resources and time. According to Akerlof, one method could include providing better incentives for both parties to engage with each other—by offering increased funding for academics who take part in this kind of activity, for instance—and boosting opportunities for such interactions to happen. 

Akerlof points to the American Association for the Advancement of Science’s Science & Technology Policy Fellowships, which place scientists and engineers in various branches of the US government for a year, as an example of a way in which important ties between the two communities could be forged. “Many of those scientists either stay in government or continue to work in science policy in other organizations,” Akerlof says. “By understanding the language and culture of both the scientific and policy communities, they are able to bridge between them.”  

In Canada, such a program was established in 2018, when the Canadian Science Policy Center and Mona Nemer, Canada’s Chief Science Advisor, held the country’s first “Science Meets Parliament” event. The 28 scientists in attendance, including Azad, spent two days learning about effective communication and the policymaking process, and interacting with senators and members of parliament. “It was eye opening for me because I didn’t know how parliamentarians really live and work,” Azad says. “We hope it’ll grow and involve more scientists and continue on an annual basis . . . and also happen at the provincial level.”

The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions. —Karen Akerlof, George Mason University

There may also be insights from scientist-policymaker exchanges in other domains that experts can apply to the current pandemic. Maria Carmen Lemos, a social scientist focused on climate policy at the University of Michigan, says that one way to make those interactions more productive is by closing something she calls the “usability gap.”

“The usability gap highlights the fact that one of the reasons that research fails to connect is because [scientists] only pay attention to the [science],” Lemos explains. “We are putting everything out there in papers, in policy briefs, in reports, but rarely do we actually systematically and intentionally try to understand who is on the other side” receiving this information, and what they will do with it.

The way to deal with this usability gap, according to Lemos, is for more scientists to consult the people who actually make, influence, and implement policy changes early on in the scientific process. Lemos and her team, for example, have engaged in this way with city officials, farmers, forest managers, tribal leaders, and others whose decision making would directly benefit from their work. “We help with organization and funding, and we also work with them very closely to produce climate information that is tailored for them, for the problems that they are trying to solve,” she adds. 

Azad applied this kind of approach in a study that involves assessing the effects of the pandemic on a cohort of children that her team has been following from infancy, starting in 2010. When she and her colleagues were putting together the proposal for the COVID-19 project this year, they reached out to public health decision makers across the Canadian provinces to find out what information would be most useful. “We have made sure to embed those decision makers in the project from the very beginning to ensure we’re asking the right questions, getting the most useful information, and getting it back to them in a very quick turnaround manner,” Azad says. 

There will also likely be lessons to take away from the pandemic in the years to come, notes Noam Obermeister, a PhD student studying science policy at the University of Cambridge. These include insights from scientific advisors about how providing guidance to policymakers during COVID-19 compared with doing so in pre-pandemic times, and about how scientists' prominent role during the pandemic has affected the way the public views them; efforts to collect this sort of information are already underway. 

“I don’t think scientists anticipated that much power and visibility, or that [they] would be in [public] saying science is complicated and uncertain,” Obermeister says. “I think what that does to the authority of science in the public eye is still to be determined.”

Talking Science to Policymakers
For academics who have never engaged with policymakers, the thought of making contact may be daunting. Researchers with experience of these interactions share their tips for success.
1. Do your homework. Policymakers usually have many different people vying for their time and attention. When you get a meeting, make sure you make the most of it. “Find out which issues related to your research are a priority for the policymaker and which decisions are on the horizon,” says Karen Akerlof, a professor of environmental science and policy at George Mason University.
2. Get to the point, but don’t oversimplify. “I find policymakers tend to know a lot about the topics they work on, and when they don’t, they know what to ask about,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Finding a good balance in the communication goes a long way.”
3. Keep in mind that policymakers’ expertise differs from that of scientists. “Park your ego at the door and treat policymakers and their staff with respect,” Akerlof says. “Recognize that the skills, knowledge, and culture that translate to success in policy may seem very different than those in academia.” 
4. Be persistent. “Don’t be discouraged if you don’t get a response immediately, or if promising communications don’t pan out,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “Policymakers are busy and their attention shifts rapidly. Meetings get cancelled. It’s not personal. Keep trying.”
5. Remember that not all policymakers are politicians, and vice versa. Politicians are usually elected and are affiliated with a political party, and they may not always be directly involved in creating new policies. This is not the case for the vast majority of policymakers—most are career civil servants whose decisions affect the daily lives of constituents, Ruggeri explains. 

A Supercomputer Analyzed Covid-19 — and an Interesting New Theory Has Emerged (Medium/Elemental)

A closer look at the Bradykinin hypothesis

Thomas Smith, Sept 1, 2020

Original article

3D rendering of multiple coronaviruses.
Photo: zhangshuang/Getty Images

Earlier this summer, the Summit supercomputer at Oak Ridge National Lab in Tennessee set about crunching data on more than 40,000 genes from 17,000 genetic samples in an effort to better understand Covid-19. Summit is the second-fastest computer in the world, but the process — which involved analyzing 2.5 billion genetic combinations — still took more than a week.

When Summit was done, researchers analyzed the results. It was, in the words of Dr. Daniel Jacobson, lead researcher and chief scientist for computational systems biology at Oak Ridge, a “eureka moment.” The computer had revealed a new theory about how Covid-19 impacts the body: the bradykinin hypothesis. The hypothesis provides a model that explains many aspects of Covid-19, including some of its most bizarre symptoms. It also suggests 10-plus potential treatments, many of which are already FDA approved. Jacobson’s group published their results in a paper in the journal eLife in early July.

According to the team’s findings, a Covid-19 infection generally begins when the virus enters the body through ACE2 receptors in the nose. (The receptors, which the virus is known to target, are abundant there.) The virus then proceeds through the body, entering cells in other places where ACE2 is also present: the intestines, kidneys, and heart. This likely accounts for at least some of the disease’s cardiac and GI symptoms.

But once Covid-19 has established itself in the body, things start to get really interesting. According to Jacobson’s group, the data Summit analyzed shows that Covid-19 isn’t content to simply infect cells that already express lots of ACE2 receptors. Instead, it actively hijacks the body’s own systems, tricking it into upregulating ACE2 receptors in places where they’re usually expressed at low or medium levels, including the lungs.

In this sense, Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don’t just take your stuff — they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently.

The renin–angiotensin system (RAS) controls many aspects of the circulatory system, including the body’s levels of a chemical called bradykinin, which normally helps to regulate blood pressure. According to the team’s analysis, when the virus tweaks the RAS, it causes the body’s mechanisms for regulating bradykinin to go haywire. Bradykinin receptors are resensitized, and the body also stops effectively breaking down bradykinin. (ACE normally degrades bradykinin, but when the virus downregulates it, it can’t do this as effectively.)

The end result, the researchers say, is to release a bradykinin storm — a massive, runaway buildup of bradykinin in the body. According to the bradykinin hypothesis, it’s this storm that is ultimately responsible for many of Covid-19’s deadly effects. Jacobson’s team says in their paper that “the pathology of Covid-19 is likely the result of Bradykinin Storms rather than cytokine storms,” which had been previously identified in Covid-19 patients, but that “the two may be intricately linked.” Other papers had previously identified bradykinin storms as a possible cause of Covid-19’s pathologies.

Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house.

As bradykinin builds up in the body, it dramatically increases vascular permeability. In short, it makes your blood vessels leaky. This aligns with recent clinical data, which increasingly views Covid-19 primarily as a vascular disease, rather than a respiratory one. But Covid-19 still has a massive effect on the lungs. As blood vessels start to leak due to a bradykinin storm, the researchers say, the lungs can fill with fluid. Immune cells also leak out into the lungs, Jacobson’s team found, causing inflammation.

And Covid-19 has another especially insidious trick. Through another pathway, the team’s data shows, it increases production of hyaluronic acid (HA) in the lungs. HA is often used in soaps and lotions for its ability to absorb more than 1,000 times its weight in fluid. When it combines with fluid leaking into the lungs, the results are disastrous: It forms a hydrogel, which can fill the lungs in some patients. According to Jacobson, once this happens, “it’s like trying to breathe through Jell-O.”

This may explain why ventilators have proven less effective in treating advanced Covid-19 than doctors originally expected, based on experiences with other viruses. “It reaches a point where regardless of how much oxygen you pump in, it doesn’t matter, because the alveoli in the lungs are filled with this hydrogel,” Jacobson says. “The lungs become like a water balloon.” Patients can suffocate even while receiving full breathing support.

The bradykinin hypothesis also extends to many of Covid-19’s effects on the heart. About one in five hospitalized Covid-19 patients have damage to their hearts, even if they never had cardiac issues before. Some of this is likely due to the virus infecting the heart directly through its ACE2 receptors. But the RAS also controls aspects of cardiac contractions and blood pressure. According to the researchers, bradykinin storms could create arrhythmias and low blood pressure, which are often seen in Covid-19 patients.

The bradykinin hypothesis also accounts for Covid-19’s neurological effects, which are some of the most surprising and concerning elements of the disease. These symptoms (which include dizziness, seizures, delirium, and stroke) are present in as many as half of hospitalized Covid-19 patients. According to Jacobson and his team, MRI studies in France revealed that many Covid-19 patients have evidence of leaky blood vessels in their brains.

Bradykinin — especially at high doses — can also lead to a breakdown of the blood-brain barrier. Under normal circumstances, this barrier acts as a filter between your brain and the rest of your circulatory system. It lets in the nutrients and small molecules that the brain needs to function, while keeping out toxins and pathogens and keeping the brain’s internal environment tightly regulated.

If bradykinin storms cause the blood-brain barrier to break down, this could allow harmful cells and compounds into the brain, leading to inflammation, potential brain damage, and many of the neurological symptoms Covid-19 patients experience. Jacobson told me, “It is a reasonable hypothesis that many of the neurological symptoms in Covid-19 could be due to an excess of bradykinin. It has been reported that bradykinin would indeed be likely to increase the permeability of the blood-brain barrier. In addition, similar neurological symptoms have been observed in other diseases that result from an excess of bradykinin.”

Increased bradykinin levels could also account for other common Covid-19 symptoms. ACE inhibitors — a class of drugs used to treat high blood pressure — have a similar effect on the RAS as Covid-19 does, increasing bradykinin levels. In fact, Jacobson and his team note in their paper that “the virus… acts pharmacologically as an ACE inhibitor” — almost directly mirroring the actions of these drugs.

By acting like a natural ACE inhibitor, Covid-19 may be causing the same effects that hypertensive patients sometimes get when they take blood pressure–lowering drugs. ACE inhibitors are known to cause a dry cough and fatigue, two textbook symptoms of Covid-19. And they can potentially increase blood potassium levels, which has also been observed in Covid-19 patients. The similarities between ACE inhibitor side effects and Covid-19 symptoms strengthen the bradykinin hypothesis, the researchers say.

ACE inhibitors are also known to cause a loss of taste and smell. Jacobson stresses, though, that this symptom is more likely due to the virus “affecting the cells surrounding olfactory nerve cells” than the direct effects of bradykinin.

Though still an emerging theory, the bradykinin hypothesis explains several other of Covid-19’s seemingly bizarre symptoms. Jacobson and his team speculate that leaky vasculature caused by bradykinin storms could be responsible for “Covid toes,” a condition involving swollen, bruised toes that some Covid-19 patients experience. Bradykinin can also mess with the thyroid gland, which could produce the thyroid symptoms recently observed in some patients.

The bradykinin hypothesis could also explain some of the broader demographic patterns of the disease’s spread. The researchers note that some aspects of the RAS are sex-linked, with the genes for several relevant proteins (such as one called TMSB4X) located on the X chromosome. This means that “women… would have twice the levels of this protein than men,” a result borne out by the researchers’ data. In their paper, Jacobson’s team concludes that this “could explain the lower incidence of Covid-19 induced mortality in women.” A genetic quirk of the RAS could be giving women extra protection against the disease.

The bradykinin hypothesis provides a model that “contributes to a better understanding of Covid-19” and “adds novelty to the existing literature,” according to scientists Frank van de Veerdonk, Jos WM van der Meer, and Roger Little, who peer-reviewed the team’s paper. It predicts nearly all the disease’s symptoms, even ones (like bruises on the toes) that at first appear random, and further suggests new treatments for the disease.

As Jacobson and team point out, several drugs target aspects of the RAS and are already FDA approved to treat other conditions. They could arguably be applied to treating Covid-19 as well. Several, like danazol, stanozolol, and ecallantide, reduce bradykinin production and could potentially stop a deadly bradykinin storm. Others, like icatibant, reduce bradykinin signaling and could blunt its effects once it’s already in the body.

Interestingly, Jacobson’s team also suggests vitamin D as a potentially useful Covid-19 drug. The vitamin is involved in the RAS and could prove helpful by reducing levels of another compound, known as REN (renin). Again, this could stop potentially deadly bradykinin storms from forming. The researchers note that vitamin D has already been shown to help those with Covid-19. The vitamin is readily available over the counter, and around 20% of the population is deficient. If indeed the vitamin proves effective at reducing the severity of bradykinin storms, it could be an easy, relatively safe way to reduce the severity of the virus.

Other compounds could treat symptoms associated with bradykinin storms. Hymecromone, for example, could reduce hyaluronic acid levels, potentially stopping deadly hydrogels from forming in the lungs. And timbetasin could mimic the mechanism that the researchers believe protects women from more severe Covid-19 infections. All of these potential treatments are speculative, of course, and would need to be studied in a rigorous, controlled environment before their effectiveness could be determined and they could be used more broadly.

Covid-19 stands out for both the scale of its global impact and the apparent randomness of its many symptoms. Physicians have struggled to understand the disease and come up with a unified theory for how it works. Though as of yet unproven, the bradykinin hypothesis provides such a theory. And like all good hypotheses, it also provides specific, testable predictions — in this case, actual drugs that could provide relief to real patients.

The researchers are quick to point out that “the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.” As to the next step in the process, Jacobson is clear: “We have to get this message out.” His team’s finding won’t cure Covid-19. But if the treatments it points to pan out in the clinic, interventions guided by the bradykinin hypothesis could greatly reduce patients’ suffering — and potentially save lives.

The Biblical Flood That Will Drown California (Wired)

Tom Philpott, 08.29.20 8:00 AM

The Great Flood of 1861–1862 was a preview of what scientists expect to see again, and soon.

This story originally appeared on Mother Jones and is part of the Climate Desk collaboration.

In November 1860, a young scientist from upstate New York named William Brewer disembarked in San Francisco after a long journey that took him from New York City through Panama and then north along the Pacific coast. “The weather is perfectly heavenly,” he enthused in a letter to his brother back east. The fast-growing metropolis was already revealing the charms we know today: “large streets, magnificent buildings” adorned by “many flowers we [northeasterners] see only in house cultivations: various kinds of geraniums growing of immense size, dew plant growing like a weed, acacia, fuchsia, etc. growing in the open air.”

Flowery prose aside, Brewer was on a serious mission. Barely a decade after being admitted as a US state, California was plunged into an economic crisis. The gold rush had gone bust, and thousands of restive settlers were left scurrying about, hot after the next ever-elusive mineral bonanza. The fledgling legislature had seen fit to hire a state geographer to gauge the mineral wealth underneath its vast and varied terrain, hoping to organize and rationalize the mad lunge for buried treasure. The potential for boosting agriculture as a hedge against mining wasn’t lost on the state’s leaders. They called on the state geographer to deliver a “full and scientific description of the state’s rocks, fossils, soils, and minerals, and its botanical and zoological productions, together with specimens of same.”

The task of completing the fieldwork fell to the 32-year-old Brewer, a Yale-trained botanist who had studied cutting-edge agricultural science in Europe. His letters home, chronicling his four-year journey up and down California, form one of the most vivid contemporary accounts of its early statehood.

They also provide a stark look at the greatest natural disaster known to have befallen the western United States since European contact in the 16th century: the Great Flood of 1861–1862. The cataclysm cut off telegraph communication with the East Coast, swamped the state’s new capital, and submerged the entire Central Valley under as much as 15 feet of water. Yet in modern-day California—a region that author Mike Davis once likened to a “Book of the Apocalypse theme park,” where this year’s wildfires have already burned 1.4 million acres, and dozens of fires are still raging—the nearly forgotten biblical-scale flood documented by Brewer’s letters has largely vanished from the public imagination, replaced by traumatic memories of more recent earthquakes.

When it was thought of at all, the flood was once considered a thousand-year anomaly, a freak occurrence. But emerging science demonstrates that floods of even greater magnitude occurred every 100 to 200 years in California’s precolonial history. Climate change will make them more frequent still. In other words, the Great Flood was a preview of what scientists expect to see again, and soon. And this time, given California’s emergence as an agricultural and economic powerhouse, the effects will be all the more devastating.

Barely a year after Brewer’s sunny initial descent from a ship in San Francisco Bay, he was back in the city, on a break. In a November 1861 letter home, he complained of a “week of rain.” In his next letter, two months later, Brewer reported jaw-dropping news: Rain had fallen almost continuously since he had last written—and now the entire Central Valley was underwater. “Thousands of farms are entirely underwater—cattle starving and drowning.”

Picking up the letter nine days later, he wrote that a bad situation had deteriorated. All the roads in the middle of the state are “impassable, so all mails are cut off.” Telegraph service, which had only recently been connected to the East Coast through the Central Valley, stalled. “The tops of the poles are under water!” The young state’s capital city, Sacramento, about 100 miles northeast of San Francisco at the western edge of the valley and the intersection of two rivers, was submerged, forcing the legislature to evacuate—and delaying a payment Brewer needed to forge ahead with his expedition.

The surveyor gaped at the sheer volume of rain. In a normal year, Brewer reported, San Francisco received about 20 inches. In the 10 weeks leading up to January 18, 1862, the city got “thirty-two and three-quarters inches and it is still raining!”

Brewer went on to recount scenes from the Central Valley that would fit in a Hollywood disaster epic. “An old acquaintance, a buccaro [cowboy], came down from a ranch that was overflowed,” he wrote. “The floor of their one-story house was six weeks under water before the house went to pieces.” Steamboats “ran back over the ranches fourteen miles from the [Sacramento] river, carrying stock [cattle], etc., to the hills,” he reported. He marveled at the massive impromptu lake made up of “water ice cold and muddy,” in which “winds made high waves which beat the farm homes in pieces.” As a result, “every house and farm over this immense region is gone.”

Eventually, in March, Brewer made it to Sacramento, hoping (without success) to lay hands on the state funds he needed to continue his survey. He found a city still in ruins, weeks after the worst of the rains. “Such a desolate scene I hope never to see again,” he wrote: “Most of the city is still under water, and has been for three months … Every low place is full—cellars and yards are full, houses and walls wet, everything uncomfortable.” The “better class of houses” were in rough shape, Brewer observed, but “it is with the poorer classes that this is the worst.” He went on: “Many of the one-story houses are entirely uninhabitable; others, where the floors are above the water are, at best, most wretched places in which to live.” He summarized the scene:

Many houses have partially toppled over; some have been carried from their foundations, several streets (now avenues of water) are blocked up with houses that have floated in them, dead animals lie about here and there—a dreadful picture. I don’t think the city will ever rise from the shock, I don’t see how it can.

Brewer’s account is important for more than just historical interest. In the 160 years since the botanist set foot on the West Coast, California has transformed from an agricultural backwater to one of the jewels of the US food system. The state produces nearly all of the almonds, walnuts, and pistachios consumed domestically; 90 percent or more of the broccoli, carrots, garlic, celery, grapes, tangerines, plums, and artichokes; at least 75 percent of the cauliflower, apricots, lemons, strawberries, and raspberries; and more than 40 percent of the lettuce, cabbage, oranges, peaches, and peppers.

And as if that weren’t enough, California is also a national hub for milk production. Tucked in amid the almond groves and vegetable fields are vast dairy operations that confine cows together by the thousands and produce more than a fifth of the nation’s milk supply, more than any other state. It all amounts to a food-production juggernaut: California generates $46 billion worth of food per year, nearly double the haul of its closest competitor among US states, the corn-and-soybean behemoth Iowa.

You’ve probably heard that ever-more frequent and severe droughts threaten the bounty we’ve come to rely on from California. Water scarcity, it turns out, isn’t the only menace that stalks the California valleys that stock our supermarkets. The opposite—catastrophic flooding—also occupies a niche in what Mike Davis, the great chronicler of Southern California’s sociopolitical geography, has called the state’s “ecology of fear.” Indeed, his classic book of that title opens with an account of a 1995 deluge that saw “million-dollar homes tobogganed off their hill-slope perches” and small children and pets “sucked into the deadly vortices of the flood channels.”

Yet floods tend to be less feared than rival horsemen of the apocalypse in the state’s oft-stimulated imagination of disaster. The epochal 2011–2017 drought, with its missing-in-action snowpacks and draconian water restrictions, burned itself into the state’s consciousness. Californians are rightly terrified of fires like the ones that roared through the northern Sierra Nevada foothills and coastal canyons near Los Angeles in the fall of 2018, killing nearly 100 people and fouling air for miles around, or the current LNU Lightning Complex fire that has destroyed nearly 1,000 structures and killed five people in the region between Sacramento and San Francisco. Many people are frightfully aware that a warming climate will make such conflagrations increasingly frequent. And “earthquake kits” are common gear in closets and garages all along the San Andreas Fault, where the next Big One lurks. Floods, though they occur as often in Southern and Central California as they do anywhere in the United States, don’t generate quite the same buzz.

But a growing body of research shows there’s a flip side to the megadroughts Central Valley farmers face: megafloods. The region most vulnerable to such a water-drenched cataclysm in the near future is, ironically enough, California’s great arid, sinking food-production basin, the beleaguered behemoth of the US food system: the Central Valley. Bordered on all sides by mountains, the Central Valley stretches 450 miles long, is on average 50 miles wide, and occupies a land mass of 18,000 square miles, or 11.5 million acres—roughly equivalent in size to Massachusetts and Vermont combined. Wedged between the Sierra Nevada to the east and the Coast Ranges to the west, it’s one of the globe’s greatest expanses of fertile soil and temperate weather. For most Americans, it’s easy to ignore the Central Valley, even though it’s as important to eaters as Hollywood is to moviegoers or Silicon Valley is to smartphone users. Occupying less than 1 percent of US farmland, the Central Valley churns out a quarter of the nation’s food supply.

At the time of the Great Flood, the Central Valley was still mainly cattle ranches, the farming boom a ways off. Late in 1861, the state suddenly emerged from a two-decade dry spell when monster storms began lashing the West Coast from Baja California to present-day Washington state. In central California, the deluge initially took the form of 10 to 15 feet of snow dumped onto the Sierra Nevada, according to research by the UC Berkeley paleoclimatologist B. Lynn Ingram and laid out in her 2015 book, The West Without Water, cowritten with Frances Malamud-Roam. Ingram has emerged as a kind of Cassandra of drought and flood risks in the western United States. Soon after the blizzards came days of warm, heavy rain, which in turn melted the enormous snowpack. The resulting slurry cascaded through the Central Valley’s network of untamed rivers.

As floodwater gathered in the valley, it formed a vast, muddy, wind-roiled lake, its size “rivaling that of Lake Superior,” covering the entire Central Valley floor, from the southern slopes of the Cascade Mountains near the Oregon border to the Tehachapis, south of Bakersfield, with depths in some places exceeding 15 feet.

At least some of the region’s remnant indigenous population saw the epic flood coming and took precautions to escape devastation, Ingram reports, quoting an item in the Nevada City Democrat on January 11, 1862:

We are informed that the Indians living in the vicinity of Marysville left their abodes a week or more ago for the foothills predicting an unprecedented overflow. They told the whites that the water would be higher than it has been for thirty years, and pointed high up on the trees and houses where it would come. The valley Indians have traditions that the water occasionally rises 15 or 20 feet higher than it has been at any time since the country was settled by whites, and as they live in the open air and watch closely all the weather indications, it is not improbable that they may have better means than the whites of anticipating a great storm.

All in all, thousands of people died, “one-third of the state’s property was destroyed, and one home in eight was destroyed completely or carried away by the floodwaters.” As for farming, the 1862 megaflood transformed valley agriculture, playing a decisive role in creating today’s Anglo-dominated, crop-oriented agricultural powerhouse: a 19th-century example of the “disaster capitalism” that Naomi Klein describes in her 2007 book, The Shock Doctrine.

Prior to the event, valley land was still largely owned by Mexican rancheros who held titles dating to Spanish rule. The 1848 Treaty of Guadalupe Hidalgo, which triggered California’s transfer from Mexican to US control, gave rancheros US citizenship and obligated the new government to honor their land titles. The treaty terms met with vigorous resentment from white settlers eager to shift from gold mining to growing food for the new state’s burgeoning cities. The rancheros thrived during the gold rush, finding a booming market for beef in mining towns. By 1856, their fortunes had shifted. A severe drought that year cut production, competition from emerging US settler ranchers meant lower prices, and punishing property taxes—imposed by land-poor settler politicians—caused a further squeeze. “As a result, rancheros began to lose their herds, their land, and their homes,” writes the historian Lawrence James Jelinek.

The devastation of the 1862 flood, its effects magnified by a brutal drought that started immediately afterward and lasted through 1864, “delivered the final blow,” Jelinek writes. Between 1860 and 1870, California’s cattle herd, concentrated in the valley, plunged from 3 million to 630,000. The rancheros were forced to sell their land to white settlers at pennies per acre, and by 1870 “many rancheros had become day laborers in the towns,” Jelinek reports. The valley’s emerging class of settler farmers quickly turned to wheat and horticultural production and set about harnessing and exploiting the region’s water resources, both those gushing forth from the Sierra Nevada and those beneath their feet.

Despite all the trauma it generated and the agricultural transformation it cemented in the Central Valley, the flood quickly faded from memory in California and the broader United States. To his shocked assessment of a still-flooded and supine Sacramento months after the storm, Brewer added a prophetic coda:

No people can so stand calamity as this people. They are used to it. Everyone is familiar with the history of fortunes quickly made and as quickly lost. It seems here more than elsewhere the natural order of things. I might say, indeed, that the recklessness of the state blunts the keener feelings and takes the edge from this calamity.

Indeed, the new state’s residents ended up shaking off the cataclysm. What lesson does the Great Flood of 1862 hold for today? The question is important. Back then, just around 500,000 people lived in the entire state, and the Central Valley was a sparsely populated badland. Today, the valley has a population of 6.5 million people and boasts the state’s three fastest-growing counties. Sacramento (population 501,344), Fresno (538,330), and Bakersfield (386,839) are all budding metropolises. The state’s long-awaited high-speed train, if it’s ever completed, will place Fresno residents within an hour of Silicon Valley, driving up its appeal as a bedroom community.

In addition to the potentially vast human toll, there’s also the fact that the Central Valley has emerged as a major linchpin of the US and global food system. Could it really be submerged under fifteen feet of water again—and what would that mean?

In less than two centuries as a US state, California has maintained its reputation as a sunny paradise while also enduring the nation’s most erratic climate: the occasional massive winter storm roaring in from the Pacific; years-long droughts. But recent investigations into the fossil record show that the climate of those two centuries has been relatively stable by the standards of the region’s deeper past.

One avenue of this research is the study of the regular megadroughts, the most recent of which occurred just a century before Europeans made landfall on the North American west coast. As we are now learning, those decades-long arid stretches were just as regularly interrupted by enormous storms—many even grander than the one that began in December 1861. (Indeed, that event itself was directly preceded and followed by serious droughts.) In other words, the same patterns that make California vulnerable to droughts also make it ripe for floods.

Beginning in the 1980s, scientists including B. Lynn Ingram began examining streams and banks in the enormous delta network that together serve as the bathtub drain through which most Central Valley runoff has flowed for millennia, reaching the ocean at the San Francisco Bay. (Now-vanished Tulare Lake gathered runoff in the southern part of the valley.) They took deep-core samples from river bottoms, because big storms that overflow the delta’s banks transfer loads of soil and silt from the Sierra Nevada and deposit a portion of it in the Delta. They also looked at fluctuations in old plant material buried in the sediment layers. Plant species that thrive in freshwater suggest wet periods, as heavy runoff from the mountains crowds out seawater. Salt-tolerant species denote dry spells, as sparse mountain runoff allows seawater to work into the delta.

What they found was stunning. The Great Flood of 1862 was no one-off black-swan event. Summarizing the science, Ingram and USGS researcher Michael Dettinger deliver the dire news: A flood comparable to—and sometimes much more intense than—the 1861–1862 catastrophe occurred during each of the periods 1235–1360, 1395–1410, 1555–1615, 1750–1770, and 1810–1820; “that is, one megaflood every 100 to 200 years.” They also discovered that the 1862 flood didn’t appear in the sediment record in some sites that showed evidence of multiple massive events—suggesting that it was actually smaller than many of the floods that have inundated California over the centuries.

During its time as a US food-production powerhouse, California has been known for its periodic droughts and storms. But Ingram and Dettinger’s work pulls the lens back to view the broader timescale, revealing the region’s swings between megadroughts and megastorms—ones more than severe enough to challenge concentrated food production, much less dense population centers.

The dynamics of these storms themselves explain why the state is also prone to such swings. Meteorologists have known for decades that those tempests that descend upon California over the winter—and from which the state receives the great bulk of its annual precipitation—carry moisture from the South Pacific. In the late 1990s, scientists discovered that these “pineapple expresses,” as TV weather presenters call them, are a subset of a global weather phenomenon: long, wind-driven plumes of vapor about a mile above the sea that carry moisture from warm areas near the equator on a northeasterly path to colder, drier regions toward the poles. They carry so much moisture—often more than 25 times the flow of the Mississippi River, over thousands of miles—that they’ve been dubbed “atmospheric rivers.”

In a pioneering 1998 paper, researchers Yong Zhu and Reginald E. Newell found that nearly all the vapor transport from the subtropics (the regions just poleward of the tropics in each hemisphere) toward the poles occurred in just five or six narrow bands. And California, it turns out, is the prime spot in the western side of the northern hemisphere for catching them at full force during the winter months.

As Ingram and Dettinger note, atmospheric rivers are the primary vector for California’s floods. That includes pre-Columbian cataclysms as well as the Great Flood of 1862, all the way to the various smaller ones that regularly run through the state. Between 1950 and 2010, Ingram and Dettinger write, atmospheric rivers “caused more than 80 percent of flooding in California rivers and 81 percent of the 128 most well-documented levee breaks in California’s Central Valley.”

Paradoxically, they are at least as much a lifeblood as a curse. Between eight and 11 atmospheric rivers hit California every year, the great majority of them doing no major damage, and they deliver between 30 and 50 percent of the state’s rain and snow. But the big ones are damaging indeed. Other researchers are reaching similar conclusions. In a study released in December 2019, a team from the US Army Corps of Engineers and the Scripps Institution of Oceanography found that atmospheric-river storms accounted for 84 percent of insured flood damages in the western United States between 1978 and 2017; the 13 biggest storms wrought more than half the damage.

So the state—and a substantial portion of our food system—exists on a razor’s edge between droughts and floods, its annual water resources decided by massive, increasingly fickle transfers of moisture from the South Pacific. As Dettinger puts it, the “largest storms in California’s precipitation regime not only typically end the state’s frequent droughts, but their fluctuations also cause those droughts in the first place.”

We know that before human civilization began spewing billions of tons of greenhouse gases into the atmosphere annually, California was due “one megaflood every 100 to 200 years”—and the last one hit more than a century and a half ago. What happens to this outlook when you heat up the atmosphere by 1 degree Celsius—and are on track to hit at least another half-degree Celsius increase by midcentury?

That was the question posed by Daniel Swain and a team of researchers at UCLA’s Department of Atmospheric and Oceanic Sciences in a series of studies, the first of which was published in 2018. They took California’s long pattern of droughts and floods and mapped it onto the climate models based on data specific to the region, looking out to century’s end.

What they found isn’t comforting. As the tropical Pacific Ocean and the atmosphere just above it warm, more seawater evaporates, feeding ever bigger atmospheric rivers gushing toward the California coast. As a result, the potential for storms on the scale of the ones that triggered the Great Flood has increased “more than threefold,” they found. So an event expected to happen on average every 200 years will now happen every 65 or so. It is “more likely than not we will see one by 2060,” and it could plausibly happen again before century’s end, they concluded.
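To see why that shift in return period translates into near-even odds within a few decades, here is a rough, hedged calculation of my own, not the UCLA team's method: treat each year as an independent trial with a 1-in-65 chance of a megastorm and ask how likely at least one such event is before 2060.

# Back-of-the-envelope check, assuming a constant 1-in-65 annual chance
# (a simplification: the study projects risk that keeps rising over time).
return_period_years = 65          # assumed post-warming return period
horizon_years = 2060 - 2020       # roughly the window discussed above
annual_prob = 1 / return_period_years
p_at_least_one = 1 - (1 - annual_prob) ** horizon_years
print(f"Chance of at least one 1862-scale storm by 2060: {p_at_least_one:.0%}")
# Prints roughly 46% under this static assumption; with risk continuing to
# climb through mid-century, "more likely than not" is a plausible reading.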

As the risk of a catastrophic event increases, so will the frequency of what they call “precipitation whiplash”: extremely wet seasons interrupted by extremely dry ones, and vice versa. The winter of 2016–2017 provides a template. That year, a series of atmospheric-river storms filled reservoirs and at one point threatened a major flood in the northern Central Valley, abruptly ending the worst multiyear drought in the state’s recorded history.

Swings on that magnitude normally occur a handful of times each century, but in the model by Swain’s team, “it goes from something that happens maybe once in a generation to something that happens two or three times,” he told me in an interview. “Setting aside a repeat of 1862, these less intense events could still seriously test the limits of our water infrastructure.” Like other efforts to map climate change onto California’s weather, this one found that drought years characterized by low winter precipitation would likely increase—in this case, by a factor of as much as two, compared with mid-20th-century patterns. But extreme-wet winter seasons, accumulating at least as much precipitation as 2016–2017, will grow even more: they could be three times as common as they were before the atmosphere began its current warming trend.

While lots of very wet years—at least the ones that don’t reach 1861–1862 levels—might sound encouraging for food production in the Central Valley, there’s a catch, Swain said. His study looked purely at precipitation, independent of whether it fell as rain or snow. A growing body of research suggests that as the climate warms, California’s precipitation mix will shift significantly in favor of rain over snow. That’s dire news for our food system, because the Central Valley’s vast irrigation networks are geared to channeling the slow, predictable melt of the snowpack into usable water for farms. Water that falls as rain is much harder to capture and bend to the slow-release needs of agriculture.

In short, California’s climate, chaotic under normal conditions, is about to get weirder and wilder. Indeed, it’s already happening.

What if an 1862-level flood, which is overdue and “more likely than not” to occur in the coming decades, were to hit present-day California?

Starting in 2008, the USGS set out to answer just that question, launching a project called the ARkStorm (for “atmospheric river 1,000 storm”) Scenario. The effort was modeled on a previous USGS push to get a grip on another looming California cataclysm: a massive earthquake along the San Andreas Fault. In 2008, USGS produced the ShakeOut Earthquake Scenario, a “detailed depiction of a hypothetical magnitude 7.8 earthquake.” The study “served as the centerpiece of the largest earthquake drill in US history, involving over five thousand emergency responders and the participation of over 5.5 million citizens,” the USGS later reported.

That same year, the agency assembled a team of 117 scientists, engineers, public-policy experts, and insurance experts to model what kind of impact a monster storm event would have on modern California.

At the time, Lucy Jones served as the chief scientist for the USGS’s Multi Hazards Demonstration Project, which oversaw both projects. A seismologist by training, Jones spent her time studying the devastations of earthquakes and convincing policy makers to invest resources into preparing for them. The ARkStorm project took her aback, she told me. The first thing she and her team did was ask, What’s the biggest flood in California we know about? “I’m a fourth-generation Californian who studies disaster risk, and I had never heard of the Great Flood of 1862,” she said. “None of us had heard of it,” she added—not even the meteorologists knew about what’s “by far the biggest disaster ever in California and the whole Southwest” over the past two centuries.

At first, the meteorologists were constrained in modeling a realistic megastorm by a lack of data; solid rainfall-gauge measures go back only a century. But after hearing about the 1862 flood, the ARkStorm team dug into research from Ingram and others for information about megastorms before US statehood and European contact. They were shocked to learn that the previous 1,800 years had about six events that were more severe than 1862, along with several more that were roughly of the same magnitude. What they found was that a massive flood is every bit as likely to strike California, and as imminent, as a massive quake.

Even with this information, modeling a massive flood proved more challenging than projecting out a massive earthquake. “We seismologists do this all the time—we create synthetic seismographs,” she said. Want to see what a quake reaching 7.8 on the Richter scale would look like along the San Andreas Fault? Easy, she said. Meteorologists, by contrast, are fixated on accurate prediction of near-future events; “creating a synthetic event wasn’t something they had ever done.” They couldn’t just re-create the 1862 event, because most of the information we have about it is piecemeal, from eyewitness accounts and sediment samples.

To get their heads around how to construct a reasonable approximation of a megastorm, the team’s meteorologists went looking for well-documented 20th-century events that could serve as a model. They settled on two: a series of big storms in 1969 that hit Southern California hardest and a 1986 cluster that did the same to the northern part of the state. To create the ARkStorm scenario, they stitched the two together. Doing so gave the researchers a rich and regionally precise trove of data to sketch out a massive Big One storm scenario.

There was one problem: While the fictional ARkStorm is indeed a massive event, it’s still significantly smaller than the one that caused the Great Flood of 1862. “Our [hypothetical storm] only had total rain for 25 days, while there were 45 days in 1861 to ’62,” Jones said. They plunged ahead anyway, for two reasons. One was that they had robust data on the two 20th-century storm events, giving disaster modelers plenty to work with. The second was that they figured a smaller-than-1862 catastrophe would help build public buy-in, by making the project hard to dismiss as an unrealistic figment of scaremongering bureaucrats.

What they found stunned them—and should stun anyone who relies on California to produce food (not to mention anyone who lives in the state). The headline number: $725 billion in damage, nearly four times what the USGS’s seismology team arrived at for its massive-quake scenario ($200 billion). For comparison, the two most costly natural disasters in modern US history—Hurricane Katrina in 2005 and Harvey in 2017—racked up $166 billion and $130 billion, respectively. The ARkStorm would “flood thousands of square miles of urban and agricultural land, result in thousands of landslides, [and] disrupt lifelines throughout the state for days or weeks,” the study reckoned. Altogether, 25 percent of the state’s buildings would be damaged.

In their model, 25 days of relentless rains overwhelm the Central Valley’s flood-control infrastructure. Then large swaths of the northern part of the Central Valley go under as much as 20 feet of water. The southern part, the San Joaquin Valley, gets off lighter; but a miles-wide band of floodwater collects in the lowest-elevation regions, ballooning out to encompass the expanse that was once the Tulare Lake bottom and stretching to the valley’s southern extreme. Most metropolitan parts of the Bay Area escape severe damage, but swaths of Los Angeles and Orange Counties experience “extensive flooding.”

As Jones stressed to me in our conversation, the ARkStorm scenario is a cautious approximation; a megastorm that matches 1862 or its relatively recent antecedents could plausibly bury the entire Central Valley underwater, northern tip to southern. As the report puts it: “Six megastorms that were more severe than 1861–1862 have occurred in California during the last 1800 years, and there is no reason to believe similar storms won’t occur again.”

A 21st-century megastorm would fall on a region quite different from gold rush–era California. For one thing, it’s much more populous. While the ARkStorm reckoning did not estimate a death toll, it warned of a “substantial loss of life” because “flood depths in some areas could realistically be on the order of 10–20 feet.”

Then there’s the transformation of farming since then. The 1862 storm drowned an estimated 200,000 head of cattle, about a quarter of the state’s entire herd. Today, the Central Valley houses nearly 4 million beef and dairy cows. While cattle continue to be an important part of the region’s farming mix, they no longer dominate it. Today the valley is increasingly given over to intensive almond, pistachio, and grape plantations, representing billions of dollars of investments in crops that take years to establish, are expected to flourish for decades, and could be wiped out by a flood.

Apart from economic losses, “the evolution of a modern society creates new risks from natural disasters,” Jones told me. She cited electric power grids, which didn’t exist in mid-19th-century California. A hundred years ago, when electrification was taking off, extended power outages caused inconveniences. Now, loss of electricity can mean death for vulnerable populations (think hospitals, nursing homes, and prisons). Another example is the intensification of farming. When a few hundred thousand cattle roamed the sparsely populated Central Valley in 1861, their drowning posed relatively limited biohazard risks, although, according to one contemporary account, in post-flood Sacramento, there were a “good many drowned hogs and cattle lying around loose in the streets.”

Today, however, several million cows are packed into massive feedlots in the southern Central Valley, their waste often concentrated in open-air liquid manure lagoons, ready to be swept away and blended into a fecal slurry. Low-lying Tulare County houses nearly 500,000 dairy cows, with 258 operations holding on average 1,800 cattle each. Mature modern dairy cows are massive creatures, weighing around 1,500 pounds each and standing nearly 5 feet tall at the front shoulder. Imagine trying to quickly move such beasts by the thousands out of the path of a flood—and the consequences of failing to do so.

A massive flood could severely pollute soil and groundwater in the Central Valley, and not just from rotting livestock carcasses and millions of tons of concentrated manure. In a 2015 paper, a team of USGS researchers tried to sum up the myriad toxic substances that would be stirred up and spread around by massive storms and floods. The cities of 160 years ago could not boast municipal wastewater facilities, which filter pathogens and pollutants in human sewage, nor municipal dumps, which concentrate often-toxic garbage. In the region’s teeming 21st-century urban areas, those vital sanitation services would become major threats. The report projects that a toxic soup of “petroleum, mercury, asbestos, persistent organic pollutants, molds, and soil-borne or sewage-borne pathogens” would spread across much of the valley, as would concentrated animal manure, fertilizer, pesticides, and other industrial chemicals.

The valley’s southernmost county, Kern, is a case study in the region’s vulnerabilities. Kern’s farmers lead the entire nation in agricultural output by dollar value, annually producing $7 billion worth of foodstuffs like almonds, grapes, citrus, pistachios, and milk. The county houses more than 156,000 dairy cows in facilities averaging 3,200 head each. That frenzy of agricultural production means loads of chemicals on hand; every year, Kern farmers use around 30 million pounds of pesticides, second only to Fresno among California counties. (Altogether, five San Joaquin Valley counties use about half of the more than 200 million pounds of pesticides applied in California.)

Kern is also one of the nation’s most prodigious oil-producing counties. Its vast array of pump jacks, many of them located in farm fields, produce 70 percent of California’s entire oil output. It’s also home to two large oil refineries. If Kern County were a state, it would be the nation’s seventh-leading oil-producing one, churning out twice as much crude as Louisiana. In a massive storm, floodwaters could pick up a substantial amount of highly toxic petroleum and byproducts. Again, in the ARkStorm scenario, Kern County gets hit hard by rain but mostly escapes the worst flooding. The real “Other Big One” might not be so kind, Jones said.

In the end, the USGS team could not estimate the level of damage that will be visited upon the Central Valley’s soil and groundwater from a megaflood: too many variables, too many toxins and biohazards that could be sucked into the vortex. They concluded that “flood-related environmental contamination impacts are expected to be the most widespread and substantial in lowland areas of the Central Valley, the Sacramento–San Joaquin River Delta, the San Francisco Bay area, and portions of the greater Los Angeles metroplex.”

Jones said the initial reaction to the 2011 release of the ARkStorm report among California’s policymakers and emergency managers was skepticism: “Oh, no, that’s too big—it’s impossible,” they would say. “We got lots of traction with the earthquake scenario, and when we did the big flood, nobody wanted to listen to us,” she said.

But after years of patiently informing the state’s decisionmakers that such a disaster is just as likely as a megaquake—and likely much more devastating—the word is getting out. She said the ARkStorm message probably helped prepare emergency managers for the severe storms of February 2017. That month, the massive Oroville Dam in the Sierra Nevada foothills very nearly failed, threatening to send a 30-foot-tall wall of water gushing into the northern Central Valley. As the spillway teetered on the edge of collapse, officials ordered the evacuation of 188,000 people in the communities below. The entire California National Guard was put on notice to mobilize if needed—the first such order since the 1992 Rodney King riots in Los Angeles. Although the dam ultimately held up, the Oroville incident illustrates the challenges of moving hundreds of thousands of people out of harm’s way on short notice.

The evacuation order “unleashed a flood of its own, sending tens of thousands of cars simultaneously onto undersize roads, creating hours-long backups that left residents wondering if they would get to high ground before floodwaters overtook them,” the Sacramento Bee reported. Eight hours after the evacuation, highways were still jammed with slow-moving traffic. A California Highway Patrol spokesman summed up the scene for the Bee:

Unprepared citizens who were running out of gas and their vehicles were becoming disabled in the roadway. People were utilizing the shoulder, driving the wrong way. Traffic collisions were occurring. People fearing for their lives, not abiding by the traffic laws. All combined, it created big problems. It ended up pure, mass chaos.

Even so, Jones said the evacuation went as smoothly as could be expected and likely would have saved thousands of lives if the dam had burst. “But there are some things you can’t prepare for.” Obviously, getting area residents to safety was the first priority, but animal inhabitants were vulnerable, too. If the dam had burst, she said, “I doubt they would have been able to save cattle.”

As the state’s ever-strained emergency-service agencies prepare for the Other Big One, there’s evidence other agencies are struggling to grapple with the likelihood of a megaflood. In the wake of the 2017 near-disaster at Oroville, state agencies spent more than $1 billion repairing the damaged dam and bolstering it for future storms. Just as work was being completed in fall 2018, the Federal Energy Regulatory Commission assessed the situation and found that a “probable maximum flood”—on the scale of the ARkStorm—would likely overwhelm the dam. FERC called on the state to invest in a “more robust and resilient design” to prevent a future cataclysm. The state’s Department of Water Resources responded by launching a “needs assessment” of the dam’s safety that’s due to wrap up in 2020.

Of course, in a state beset by the increasing threat of wildfires in populated areas as well as earthquakes, funds for disaster preparation are tightly stretched. All in all, Jones said, “we’re still much more prepared for a quake than a flood.” Then again, it’s hard to conceive of how we could effectively prevent a 21st century repeat of the Great Flood or how we could fully prepare for the low-lying valley that runs along the center of California like a bathtub—now packed with people, livestock, manure, crops, petrochemicals, and pesticides—to be suddenly transformed into a storm-roiled inland sea.

The aliens among us. How viruses shape the world (The Economist)

They don’t just cause pandemics

Leaders – Aug 22nd 2020 edition

HUMANS THINK of themselves as the world’s apex predators. Hence the silence of sabre-tooth tigers, the absence of moas from New Zealand and the long list of endangered megafauna. But SARS-CoV-2 shows how people can also end up as prey. Viruses have caused a litany of modern pandemics, from covid-19 to HIV/AIDS to the influenza outbreak of 1918-20, which killed many more people than the first world war. Before that, the colonisation of the Americas by Europeans was abetted—and perhaps made possible—by epidemics of smallpox, measles and influenza brought unwittingly by the invaders, which annihilated many of the original inhabitants.

The influence of viruses on life on Earth, though, goes far beyond the past and present tragedies of a single species, however pressing they seem. Though the study of viruses began as an investigation into what appeared to be a strange subset of pathogens, recent research puts them at the heart of an explanation of the strategies of genes, both selfish and otherwise.

Viruses are unimaginably varied and ubiquitous. And it is becoming clear just how much they have shaped the evolution of all organisms since the very beginnings of life. In this, they demonstrate the blind, pitiless power of natural selection at its most dramatic. And—for one group of brainy bipedal mammals that viruses helped create—they also present a heady mix of threat and opportunity.

As our essay in this week’s issue explains, viruses are best thought of as packages of genetic material that exploit another organism’s metabolism in order to reproduce. They are parasites of the purest kind: they borrow everything from the host except the genetic code that makes them what they are. They strip down life itself to the bare essentials of information and its replication. If the abundance of viruses is anything to go by, that is a very successful strategy indeed.

The world is teeming with them. One analysis of seawater found 200,000 different viral species, and it was not setting out to be comprehensive. Other research suggests that a single litre of seawater may contain more than 100bn virus particles, and a kilo of dried soil ten times that number. Altogether, according to calculations on the back of a very big envelope, the world might contain 10³¹ of the things—that is one followed by 31 zeros, far outnumbering all other forms of life on the planet.

As far as anyone can tell, viruses—often of many different sorts—have adapted to attack every organism that exists. One reason they are powerhouses of evolution is that they oversee a relentless and prodigious slaughter, mutating as they do so. This is particularly clear in the oceans, where a fifth of single-celled plankton are killed by viruses every day. Ecologically, this promotes diversity by scything down abundant species, thus making room for rarer ones. The more common an organism, the more likely it is that a local plague of viruses specialised to attack it will develop, and so keep it in check.

This propensity to cause plagues is also a powerful evolutionary stimulus for prey to develop defences, and these defences sometimes have wider consequences. For example, one explanation for why a cell may deliberately destroy itself is if its sacrifice lowers the viral load on closely related cells nearby. That way, its genes, copied in neighbouring cells, are more likely to survive. It so happens that such altruistic suicide is a prerequisite for cells to come together and form complex organisms, such as pea plants, mushrooms and human beings.

The other reason viruses are engines of evolution is that they are transport mechanisms for genetic information. Some viral genomes end up integrated into the cells of their hosts, where they can be passed down to those organisms’ descendants. Between 8% and 25% of the human genome seems to have such viral origins. But the viruses themselves can in turn be hijacked, and their genes turned to new uses. For example, the ability of mammals to bear live young is a consequence of a viral gene being modified to permit the formation of placentas. And even human brains may owe their development in part to the movement within them of virus-like elements that create genetic differences between neurons within a single organism.

Evolution’s most enthralling insight is that breathtaking complexity can emerge from the sustained, implacable and nihilistic competition within and between organisms. The fact that the blind watchmaker has equipped you with the capacity to read and understand these words is in part a response to the actions of swarms of tiny, attacking replicators that have been going on, probably, since life first emerged on Earth around 4bn years ago. It is a startling example of that principle in action—and viruses have not finished yet.

Humanity’s unique, virus-chiselled consciousness opens up new avenues to deal with the viral threat and to exploit it. This starts with the miracle of vaccination, which defends against a pathogenic attack before it is launched. Thanks to vaccines, smallpox is no more, having taken some 300m lives in the 20th century. Polio will one day surely follow. New research prompted by the covid-19 pandemic will enhance the power to examine the viral realm and the best responses to it that bodies can muster—taking the defence against viruses to a new level.

Another avenue for progress lies in the tools for manipulating organisms that will come from an understanding of viruses and the defences against them. Early versions of genetic engineering relied on restriction enzymes—molecular scissors with which bacteria cut up viral genes and which biotechnologists employ to move genes around. The latest iteration of biotechnology, gene editing letter by letter, which is known as CRISPR, makes use of a more precise antiviral mechanism.

From the smallest beginnings

The natural world is not kind. A virus-free existence is an impossibility so deeply unachievable that its desirability is meaningless. In any case, the marvellous diversity of life rests on viruses which, as much as they are a source of death, are also a source of richness and of change. Marvellous, too, is the prospect of a world where viruses become a source of new understanding for humans—and kill fewer of them than ever before. ■

Correction: An earlier version of this article got its maths wrong. 10³¹ is one followed by 31 zeroes, not ten followed by 31 zeroes as we first wrote. Sorry.

Counting the Lives Saved by Lockdowns—and Lost to Slow Action (The Scientist)

the-scientist.com

David Adam, July 6, 2020

On May 20, disease modelers at Columbia University posted a preprint that concluded the US could have prevented 36,000 of the 65,300 deaths that the country had suffered as a result of COVID-19 by May 3 if states had instituted social distancing measures a week earlier. In early June, Imperial College London epidemiologist Neil Ferguson, one of the UK government’s key advisers in the early stages of the pandemic, came to a similar conclusion about the UK. In evidence he presented to a parliamentary committee inquiry, Ferguson said that if the country had introduced restrictions on movement and socializing a week sooner than it did, Britain’s official death toll of 40,000 could have been halved.

On a more positive note, Ferguson and other researchers at Imperial College London published a model in Nature around the same time estimating that more than 3 million deaths had been avoided across Europe as a result of the policies that were put in place.

These and other studies from recent months aim to understand how well various social-distancing measures have curbed infections, and by extension saved lives. It’s a big challenge to unravel and reliably understand all the factors at play, but experts say the research could help inform future policies. 

“It’s not just about looking retrospectively,” Jeffrey Shaman, a data scientist at Columbia University and coauthor of the preprint on US deaths, tells The Scientist. “All the places that have managed to get it under control to a certain extent are still at risk of having a rebound and a flare up. And if they don’t respond to it because they can’t motivate the political and public will to actually reinstitute control measures, then we’re going to repeat the same mistakes.”

Diving into the data

Shaman and his team used a computer model and data on how people moved around to work out how reduced contact between people could explain disease trends after the US introduced social distancing measures in mid-March. Then, the researchers looked at what would have happened if the same measures had been introduced a week earlier, and found that more than half of total infections and deaths up to May 3 would have been prevented. Starting the measures on March 1 would have prevented 83 percent of the nation’s deaths during that period, according to the model. Shaman says he is waiting to submit for publication in a peer-reviewed journal until he and his colleagues update the study with more-recent data. 
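The intuition behind that result is simple: while an epidemic is still growing exponentially, starting control measures a week earlier avoids several doublings. The toy simulation below is not the Columbia model (which was fitted to observed case counts and mobility data); it is a minimal SIR sketch with assumed parameter values that merely illustrates why a one-week head start changes cumulative infections so much.

```python
# A toy counterfactual, not the Columbia team's model: a simple SIR run in which
# transmission is cut on day 21 versus day 14. Because the epidemic is still
# growing exponentially when the brakes go on, one week earlier avoids several
# doublings. All parameter values below are illustrative assumptions.
def cumulative_infections(intervention_day, beta0=0.4, beta1=0.15,
                          gamma=0.1, N=3.3e8, days=60, dt=0.1):
    S, I, R = N - 100.0, 100.0, 0.0            # 100 seed infections (assumed)
    for step in range(int(days / dt)):
        t = step * dt
        beta = beta0 if t < intervention_day else beta1   # distancing cuts contact
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return I + R                                # everyone ever infected by `days`

late = cumulative_infections(21)
early = cumulative_infections(14)
print(f"acting one week earlier: {early:.2e} vs {late:.2e} infections "
      f"({1 - early / late:.0%} fewer)")
```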

“I thought they had reasonably credible data in terms of trying to argue that the lockdowns had prevented infections,” says Daniel Sutter, an economist at Troy University. “They were training or calibrating that model using some cell phone data and foot traffic data and correlating that with lockdowns.”

Sébastien Annan-Phan, an economist at the University of California, Berkeley, undertook a similar analysis, looking at the growth rate of case numbers before and after various lockdown measures were introduced in China, South Korea, Italy, Iran, France, and the US. Because these countries instituted different combinations of social distancing measures, the team was able to estimate how well each action slowed disease spread. The most effective measure, they found, was getting people not to travel to work, while school closures had relatively little effect. “Every country is different and they implement different policies, but we can still tease out a couple of things,” says Annan-Phan.  

In total, his group estimated that combined interventions prevented or delayed about 62 million confirmed cases in the six countries studied, or about 530 million total infections. The results were published in Nature in June alongside a study from a group at Imperial College London, which had compared COVID-19 cases reported in several European countries under lockdown with the worst-case scenario predicted for each of those countries by a computer model in which no such measures were taken. According to that analysis, which assumed that the effects of social distancing measures were the same from country to country, some 3.1 million deaths had been avoided. 

It’s hard to argue against the broad conclusion that changing people’s behavior was beneficial, says Andrew Gelman, a statistician at Columbia University. “If people hadn’t changed their behavior, then it would have been disastrous.” 

Lockdown policies versus personal decisions to isolate

As with any hypothetical scenario, it’s impossible to know how events would have played out if different decisions had been made. And attributing changes in people’s behavior to official lockdown policies during the pandemic is especially difficult, says Gelman. “Ultimately, we can’t say what would have happened without it, because the timing of lockdown measures correlates with when people would have gone into self-isolation anyway.” Indeed, according to a recent study of mobile phone data in the US, many people started to venture out less a good one to four weeks before they were officially asked to. 

A report on data from Sweden, a country that did not introduce the same strict restrictions as others in Europe, seems to support that idea. It found that, compared with data from other countries, Sweden’s outcomes were no worse. “A lockdown would not have helped in terms of limiting COVID-19 infections or deaths in Sweden,” the study originally concluded. But Gernot Müller, an economist at the University of Tubingen who worked on that report, now says updated data show that original conclusion was flawed. Many Swedes took voluntary actions in the first few weeks, he says, and this masked the benefits that a lockdown would have had. But after the first month, the death rate started to rise. “It turns out that we do now see a lockdown effect,” Müller says of his group’s new, still unpublished analyses. “So lockdowns do work and we can attach a number to that: some 40 percent or 50 percent fewer deaths.”

Some critics question the assumption that such deaths have been prevented, rather than simply delayed. While it can appear to be a semantic point, the distinction between preventing and delaying infection is an important one when policymakers assess the costs and benefits of lockdown measures, Sutter says. “I think it’s a little misleading to keep saying these lockdowns have prevented death. They’ve just prevented cases from occurring so far,” he says. “There’s still the underlying vulnerability out there. People are still susceptible to get the virus and get sick at a later date.”

Shaman notes, however, that it’s really a race against the clock. It’s about “buying yourself and your population critical time to not be infected while we try to get our act together to produce an effective vaccine or therapeutic.”

See “It’s So Hard to Know Who’s Dying of COVID-19—and When”

See “The Effects of Physical Isolation on the Pandemic Quantified”

New model predicts the peaks of the COVID-19 pandemic (Science Daily)

Date: May 29, 2020

Source: Santa Fe Institute

Summary: Researchers describe a single function that accurately describes all existing available data on active COVID-19 cases and deaths — and predicts forthcoming peaks.

As of late May, COVID-19 has killed more than 325,000 people around the world. Even though the worst seems to be over for countries like China and South Korea, public health experts warn that cases and fatalities will continue to surge in many parts of the world. Understanding how the disease evolves can help these countries prepare for an expected uptick in cases.

This week in the journal Frontiers in Physics, researchers describe a single function that accurately describes all existing available data on active cases and deaths — and predicts forthcoming peaks. The tool uses q-statistics, a set of functions and probability distributions developed by Constantino Tsallis, a physicist and member of the Santa Fe Institute’s external faculty. Tsallis worked on the new model together with Ugur Tirnakli, a physicist at Ege University, in Turkey.

“The formula works in all the countries in which we have tested,” says Tsallis.

Neither physicist ever set out to model a global pandemic. But Tsallis says that when he saw the shape of published graphs representing China’s daily active cases, he recognized shapes he’d seen before — namely, in graphs he’d helped produce almost two decades ago to describe the behavior of the stock market.

“The shape was exactly the same,” he says. For the financial data, the function described probabilities of stock exchanges; for COVID-19, it described the daily number of active cases — and fatalities — as a function of time.

Modeling financial data and tracking a global pandemic may seem unrelated, but Tsallis says they have one important thing in common. “They’re both complex systems,” he says, “and in complex systems, this happens all the time.” Disparate systems from a variety of fields — biology, network theory, computer science, mathematics — often reveal patterns that follow the same basic shapes and evolution.

The financial graph appeared in a 2004 volume co-edited by Tsallis and the late Nobelist Murray Gell-Mann. Tsallis developed q-statistics, also known as “Tsallis statistics,” in the late 1980s as a generalization of Boltzmann-Gibbs statistics to complex systems.

In the new paper, Tsallis and Tirnakli used data from China, where the active case rate is thought to have peaked, to set the main parameters for the formula. Then, they applied it to other countries including France, Brazil, and the United Kingdom, and found that it matched the evolution of the active cases and fatality rates over time.
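As a rough illustration of the approach, the sketch below fits a peaked curve built from a Tsallis q-exponential to a series of daily active-case counts. The specific functional form, the parameter values and the synthetic “observed” data are assumptions for demonstration only; the exact expression used by Tsallis and Tirnakli is given in their Frontiers in Physics paper cited below.

```python
# A minimal sketch, not the authors' code: fit a peaked curve built from a
# Tsallis q-exponential to daily active-case counts. Functional form, parameters
# and synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x); tends to exp(x) as q -> 1 (q != 1 here)."""
    base = np.maximum(1.0 + (1.0 - q) * x, 1e-12)   # cut off where undefined
    return base ** (1.0 / (1.0 - q))

def active_cases(t, C, alpha, beta, gamma, q, t0):
    """Power-law rise damped by a q-exponential: rises, peaks, then decays."""
    s = np.clip(t - t0, 0.0, None)
    return C * s ** alpha * q_exponential(-beta * s ** gamma, q)

rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0)
observed = active_cases(t, C=50, alpha=3, beta=0.05, gamma=1.3, q=1.3, t0=0) \
           + rng.normal(0, 2000, size=t.size)       # stand-in for reported data

popt, _ = curve_fit(active_cases, t, observed,
                    p0=[50, 3, 0.05, 1.3, 1.3, 0],
                    bounds=([0, 0.1, 1e-4, 0.5, 1.01, -5],
                            [1e4, 10, 1.0, 3.0, 2.5, 10]))
fitted = active_cases(t, *popt)
print("estimated peak day:", int(t[np.argmax(fitted)]))
```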

The model, says Tsallis, could be used to create useful tools like an app that updates in real-time with new available data, and can adjust its predictions accordingly. In addition, he thinks that it could be fine-tuned to fit future outbreaks as well.

“The functional form seems to be universal,” he says, “Not just for this virus, but for the next one that might appear as well.”

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

Journal Reference:

  1. Constantino Tsallis, Ugur Tirnakli. Predicting COVID-19 Peaks Around the World. Frontiers in Physics, 2020; 8 DOI: 10.3389/fphy.2020.00217

Modeling COVID-19 data must be done with extreme care (Science Daily)

Date: May 19, 2020

Source: American Institute of Physics

Summary: At the beginning of a new wave of an epidemic, extreme care should be used when extrapolating data to determine whether lockdowns are necessary, experts say.

As the infectious virus causing the COVID-19 disease began its devastating spread around the globe, an international team of scientists was alarmed by the lack of uniform approaches by various countries’ epidemiologists to respond to it.

Germany, for example, didn’t institute a full lockdown, unlike France and the U.K., and the decision in the U.S. by New York to go into a lockdown came only after the pandemic had reached an advanced stage. Data modeling to predict the numbers of likely infections varied widely by region, from very large to very small numbers, and revealed a high degree of uncertainty.

Davide Faranda, a scientist at the French National Centre for Scientific Research (CNRS), and colleagues in the U.K., Mexico, Denmark, and Japan decided to explore the origins of these uncertainties. This work is deeply personal to Faranda, whose grandfather died of COVID-19; Faranda has dedicated the work to him.

In the journal Chaos, from AIP Publishing, the group describes why modeling and extrapolating the evolution of COVID-19 outbreaks in near real time is an enormous scientific challenge that requires a deep understanding of the nonlinearities underlying the dynamics of epidemics.

Forecasting the behavior of a complex system, such as the evolution of epidemics, requires both a physical model for its evolution and a dataset of infections to initialize the model. To create a model, the team used data provided by Johns Hopkins University’s Center for Systems Science and Engineering, which is available online at https://systems.jhu.edu/research/public-health/ncov/ or https://github.com/CSSEGISandData/COVID-19.

“Our physical model is based on assuming that the total population can be divided into four groups: those who are susceptible to catching the virus, those who have contracted the virus but don’t show any symptoms, those who are infected and, finally, those who recovered or died from the virus,” Faranda said.

To determine how people move from one group to another, it’s necessary to know the infection rate, incubation time and recovery time. Actual infection data can be used to extrapolate the behavior of the epidemic with statistical models.

“Because of the uncertainties in both the parameters involved in the models — infection rate, incubation period and recovery time — and the incompleteness of infections data within different countries, extrapolations could lead to an incredibly large range of uncertain results,” Faranda said. “For example, just assuming an underestimation of the last data in the infection counts of 20% can lead to a change in total infections estimations from few thousands to few millions of individuals.”
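The sketch below is not Faranda’s model, but it illustrates the same failure mode with a simpler ingredient: when only the early, exponential-looking portion of an outbreak is available, the asymptotic total of a fitted logistic curve is poorly constrained, and shaving 20% off the most recent data point can shift the estimate considerably. All parameter values and the synthetic data are assumptions.

```python
# A minimal sketch (not the paper's actual model) of sensitivity to the last
# data point: fit a logistic curve to early-phase counts, then refit after the
# final observation is reduced by 20%, as if it had been under-reported.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative infections: asymptote K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(1)
t = np.arange(0, 25)                                   # early phase only
true = logistic(t, K=2e6, r=0.25, t0=40)               # still far from the midpoint
data = true * rng.normal(1.0, 0.05, size=t.size)       # 5% reporting noise (assumed)

def fit_asymptote(series):
    popt, _ = curve_fit(logistic, t, series,
                        p0=[1e5, 0.2, 30],
                        bounds=([1e3, 0.01, 0], [1e9, 1.0, 200]))
    return popt[0]

K_hat = fit_asymptote(data)
data_perturbed = data.copy()
data_perturbed[-1] *= 0.8                              # last count 20% too low
K_hat_perturbed = fit_asymptote(data_perturbed)
print(f"fitted asymptote: {K_hat:.2e} vs with perturbed last point: {K_hat_perturbed:.2e}")
```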

The group also showed that this uncertainty stems both from the quality of the data and from the intrinsic nature of the dynamics, which is ultrasensitive to the parameters — especially during the initial growing phase. This means that everyone should be very careful about extrapolating key quantities when deciding whether to implement lockdown measures at the start of a new wave of the virus.

“The total final infection counts as well as the duration of the epidemic are sensitive to the data you put in,” he said.

The team’s model handles uncertainty in a natural way, so they plan to show how modeling of the post-confinement phase can be sensitive to the measures taken.

“Preliminary results show that implementing lockdown measures when infections are in a full exponential growth phase poses serious limitations for their success,” said Faranda.


Story Source:

Materials provided by American Institute of Physics. Note: Content may be edited for style and length.


Journal Reference:

  1. Davide Faranda, Isaac Pérez Castillo, Oliver Hulme, Aglaé Jezequel, Jeroen S. W. Lamb, Yuzuru Sato, Erica L. Thompson. Asymptotic estimates of SARS-CoV-2 infection counts and their sensitivity to stochastic perturbation. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2020; 30 (5): 051107 DOI: 10.1063/5.0008834

Opinion | Forty Years Later, Lessons for the Pandemic From Mount St. Helens (New York Times)

nytimes.com

By Lawrence Roberts – May 17, 2020

The tensions we now face between science, politics and economics also arose before the country’s most destructive volcanic eruption.

Mr. Roberts is a former editor at ProPublica and The Washington Post.

Mount St. Helens erupted on May 18, 1980.
United Press International

When I met David A. Johnston, it was on a spring evening, about a month before he would be erased from existence by a gigantic cloud of volcanic ash boiling over him at 300 miles per hour. He was coming through the door of a makeshift command center in Vancouver, Wash., the closest city to the graceful snow-capped dome of Mount St. Helens, a volcano that had been dormant for 123 years. This was April 1980, and Mr. Johnston, a 30-year-old geologist, was one of the first scientists summoned to monitor new warning signs from the mountain — shallow earthquakes and periodic bursts of ash and steam.

As a young reporter I had talked my way into the command center. At first Mr. Johnston was wary; he wasn’t supposed to meet the press anymore. His supervisors had played down the chance that the smoking mountain was about to explode, and they had already reprimanded him for suggesting otherwise. But on this night he’d just been setting measuring equipment deep in the surrounding forest, and his runner-thin frame vibrated with excitement, his face flushed under his blond beard, and Mr. Johnston couldn’t help riffing on the likelihood of a cataclysmic event.

“My feeling is when it goes, it’s going to go just like that,” he told me, snapping his fingers. “Bang!” At best, he said, we’d have a couple of hours of warning.

Mr. Johnston was mostly right. Early on a Sunday morning several weeks later, the mountain did blow, in the most destructive eruption in U.S. history. But there was no warning. At his instrument outpost, on a ridge more than five miles from the summit, Mr. Johnston had only seconds to radio in a last message: “Vancouver! Vancouver! This is it!”

A photograph of David Johnston, who was killed when Mount St. Helens erupted.
Chris Sweda/Daily Southtown, via Associated Press

Monday, May 18, marks the 40th anniversary of the 1980 Mount St. Helens eruption, and as we now face our own struggle to gauge the uncertain risks presented by nature, to predict how bad things will get and how much and how long to protect ourselves, it may be useful to revisit the tension back then between science, politics and economics.

The drama played out on a much smaller stage — one region of one state, instead of the whole planet — but many of the same elements were present: Scientists provided a range of educated guesses, and public officials split on how to respond. Business owners and residents chafed at the restrictions put in place, many flouted them, and a few even threatened armed rebellion. In the end, the government mostly accepted the analyses of Mr. Johnston and his fellow geologists. As a result, while the eruption killed 57 people and flattened hundreds of square miles of dense Pacific Northwest forestland, the lives of hundreds, perhaps thousands, were spared.

At the first warning signs, state and federal officials moved to distance people from the mountain. They sought to block nonessential visitors from nearby Spirit Lake, ringed with scout camps and tourist lodges. Other than loggers, few people hung around the peak year-round, but the population surged in late spring and summer, when thousands hiked, camped and moved into vacation homes. Many regulars dismissed the risk. Slipping past roadblocks became a popular activity. Locals sold maps to sightseers and amateur photographers that showed how to take old logging roads up the mountain. The owner of a nearby general store shared a common opinion of the threat: “It’s just plain bull. I lived here 26 years, and nothing like this happened before.”

Like the probability of a pandemic, though, it was well-established that one of the dozen or so volcanoes in the 800-mile Cascade Range might soon turn active. Averaging two eruptions a century, they were overdue. A 1978 report by the U.S. Geological Survey, where Mr. Johnston worked, identified Mount St. Helens as most likely to blow next. Yet forecasting how big the event could be was a matter of art as well as science. Geologists could model only previous explosions and list the possible outcomes. (“That position was difficult for many to accept, because they believed we could and should make predictions,” a U.S.G.S. report said later.)

Some scientists suggested a much larger evacuation, but uncertainty, a hallmark of their discipline, can be difficult for those making real-time public policy. The guidelines from federal and state representatives camped out in Vancouver, and from Washington’s governor, Dixy Lee Ray, often seemed in conflict. Moreover, the Weyerhaeuser Company, which owned tens of thousands of acres of timber, opposed logging restrictions, even as some crews got nervous about working near the rumbling dome.

By mid-April, a bulge grew on the north flank, a clue that highly pressurized magma was trapped and expanding. If it burst, a landslide might bury Spirit Lake. The governor, a conservative Democrat who was a biologist by training, finally agreed to stronger measures. She ordered an inner “red zone” where only scientists and law enforcement personnel could enter, and a “blue zone” open to loggers and property owners with day passes. If the zones didn’t extend as far as many geologists hoped, they were certainly an improvement.

Then the mountain got deceptively quiet. The curve of seismic activity flattened and turned downward. Many grew complacent, and restless. On Saturday, May 17, people with property inside the red zone massed in cars and pickup trucks at the roadblock on State Highway 504. Hearing rumors that some carried rifles, the governor relented, allowing them through, with a police escort, to check on their homes and leave again. The state patrol chief, Robert Landon, told them, “We hope the good Lord will keep that mountain from giving us any trouble.” The property owners vowed to return the next day.

The next day was Sunday. At 8:32 a.m., a powerful quake shook loose the snow-covered north face of Mount St. Helens, releasing the superheated magma, which roared out of the mountain in a lateral blast faster than a bullet train, over the spot where Mr. Johnston stood, mowing down 230 square miles of trees, hurling trunks into the air like twigs. It rained down a suffocating storm of thick gray ash, “a burning sky-river wind of searing lava droplet hail,” as the poet Gary Snyder described it. Mudflows clogged the river valleys, setting off deadly floods. A column of ash soared 15 miles high and bloomed into a mushroom cloud 35 miles wide. Over two weeks, ash would circle the globe. Among the 57 dead were three aspiring geologists besides Mr. Johnston, as well as loggers, sightseers and photographers.

About a week later, the Forest Service took reporters up in a helicopter. I had seen the mountain from the air before the eruption. Now the sprawling green wilderness that appeared endless and permanent had disappeared in a blink. We flew for an hour over nothing but moonscape. The scientists had done their best, but nature flexed a power far more deadly than even they had imagined.

Lawrence Roberts, a former editor at ProPublica and The Washington Post, is the author of the forthcoming “Mayday 1971: A White House at War, a Revolt in the Streets, and the Untold History of America’s Biggest Mass Arrest.”

This Is the Future of the Pandemic (New York Times)

Covid-19 isn’t going away soon. Two recent studies mapped out the possible shapes of its trajectory.

Circles at Gare du Nord train station in Paris marked safe social distances on Wednesday. Credit: Ian Langsdon/EPA, via Shutterstock

By Siobhan Roberts – May 8, 2020

By now we know — contrary to false predictions — that the novel coronavirus will be with us for a rather long time.

“Exactly how long remains to be seen,” said Marc Lipsitch, an infectious disease epidemiologist at Harvard’s T.H. Chan School of Public Health. “It’s going to be a matter of managing it over months to a couple of years. It’s not a matter of getting past the peak, as some people seem to believe.”

A single round of social distancing — closing schools and workplaces, limiting the sizes of gatherings, lockdowns of varying intensities and durations — will not be sufficient in the long term.

In the interest of managing our expectations and governing ourselves accordingly, it might be helpful, for our pandemic state of mind, to envision this predicament — existentially, at least — as a soliton wave: a wave that just keeps rolling and rolling, carrying on under its own power for a great distance.

The Scottish engineer and naval architect John Scott Russell first spotted a soliton in 1834 as it traveled along the Union Canal. He followed on horseback and, as he wrote in his “Report on Waves,” overtook it rolling along at about eight miles an hour, at thirty feet long and a foot or so in height. “Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel.”

The pandemic wave, similarly, will be with us for the foreseeable future before it diminishes. But, depending on one’s geographic location and the policies in place, it will exhibit variegated dimensions and dynamics traveling through time and space.

“There is an analogy between weather forecasting and disease modeling,” Dr. Lipsitch said. Both, he noted, are simple mathematical descriptions of how a system works: drawing upon physics and chemistry in the case of meteorology; and on behavior, virology and epidemiology in the case of infectious-disease modeling. Of course, he said, “we can’t change the weather.” But we can change the course of the pandemic — with our behavior, by balancing and coordinating psychological, sociological, economic and political factors.

Dr. Lipsitch is a co-author of two recent analyses — one from the Center for Infectious Disease Research and Policy at the University of Minnesota, the other from the Chan School published in Science — that describe a variety of shapes the pandemic wave might take in the coming months.

The Minnesota study describes three possibilities:

Scenario No. 1 depicts an initial wave of cases — the current one — followed by a consistently bumpy ride of “peaks and valleys” that will gradually diminish over a year or two.

Scenario No. 2 supposes that the current wave will be followed by a larger “fall peak,” or perhaps a winter peak, with subsequent smaller waves thereafter, similar to what transpired during the 1918-1919 flu pandemic.

Scenario No. 3 shows an intense spring peak followed by a “slow burn” with less-pronounced ups and downs.

The authors conclude that whichever reality materializes (assuming ongoing mitigation measures, as we await a vaccine), “we must be prepared for at least another 18 to 24 months of significant Covid-19 activity, with hot spots popping up periodically in diverse geographic areas.”

In the Science paper, the Harvard team — infectious-disease epidemiologist Yonatan Grad, his postdoctoral fellow Stephen Kissler, Dr. Lipsitch, his doctoral student Christine Tedijanto and their colleague Edward Goldstein — took a closer look at various scenarios by simulating the transmission dynamics using the latest Covid-19 data and data from related viruses.

The authors conveyed the results in a series of graphs — composed by Dr. Kissler and Ms. Tedijanto — that project a similarly wavy future characterized by peaks and valleys.

One figure from the paper, reinterpreted below, depicts possible scenarios (the details would differ geographically) and shows the red trajectory of Covid-19 infections in response to “intermittent social distancing” regimes represented by the blue bands.

Social distancing is turned “on” when the number of Covid-19 cases reaches a certain prevalence in the population — for instance, 35 cases per 10,000, although the thresholds would be set locally, monitored with widespread testing. It is turned “off” when cases drop to a lower threshold, perhaps 5 cases per 10,000. Because critical cases that require hospitalization lag behind the general prevalence, this strategy aims to prevent the health care system from being overwhelmed.

The green graph represents the corresponding, if very gradual, increase in population immunity.

“The ‘herd immunity threshold’ in the model is 55 percent of the population, or the level of immunity that would be needed for the disease to stop spreading in the population without other measures,” Dr. Kissler said.
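A rough sketch of the “intermittent social distancing” idea is below. It is not the Science paper’s model (which includes seasonality and an SEIR structure); it is a minimal SIR loop in which transmission is reduced whenever prevalence crosses the upper threshold and relaxed again below the lower one. The thresholds (35 and 5 cases per 10,000) and the 55 percent herd-immunity level come from the article; the transmission and recovery rates are assumptions chosen so that R0 ≈ 2.2, consistent with that threshold.

```python
# A rough sketch, not the Science paper's model: SIR with threshold-triggered
# distancing. Thresholds (35 and 5 per 10,000) and the 55% herd-immunity level
# come from the article; beta and gamma are assumed, with beta_off/gamma = 2.2.
N = 10_000                        # work per 10,000 people, as in the article
beta_off, beta_on = 0.22, 0.08    # transmission rate without / with distancing
gamma = 0.1                       # recovery rate (~10-day infectious period)
upper, lower = 35.0, 5.0          # prevalence thresholds per 10,000

S, I, R = N - 1.0, 1.0, 0.0
distancing = False
dt, days = 0.1, 720
for step in range(int(days / dt)):
    beta = beta_on if distancing else beta_off
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if not distancing and I >= upper:
        distancing = True         # prevalence too high: distancing switched "on"
    elif distancing and I <= lower:
        distancing = False        # prevalence back down: switched "off"

print(f"immune after {days} days: {R / N:.0%} (herd-immunity threshold: 55%)")
```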

Another iteration shows the effects of seasonality — a slower spread of the virus during warmer months. Theoretically, seasonal effects allow for larger intervals between periods of social distancing.

This year, however, the seasonal effects will likely be minimal, since a large proportion of the population will still be susceptible to the virus come summer. And there are other unknowns, since the underlying mechanisms of seasonality — such as temperature, humidity and school schedules — have been studied for some respiratory infections, like influenza, but not for coronaviruses. So, alas, we cannot depend on seasonality alone to stave off another outbreak over the coming summer months.

Yet another scenario takes into account not only seasonality but also a doubling of the critical-care capacity in hospitals. This, in turn, allows for social distancing to kick in at a higher threshold — say, at a prevalence of 70 cases per 10,000 — and for even longer breaks between social distancing periods:

What is clear overall is that a one-time social distancing effort will not be sufficient to control the epidemic in the long term, and that it will take a long time to reach herd immunity.

“This is because when we are successful in doing social distancing — so that we don’t overwhelm the health care system — fewer people get the infection, which is exactly the goal,” said Ms. Tedijanto. “But if infection leads to immunity, successful social distancing also means that more people remain susceptible to the disease. As a result, once we lift the social distancing measures, the virus will quite possibly spread again as easily as it did before the lockdowns.”

So, lacking a vaccine, our pandemic state of mind may persist well into 2021 or 2022 — which surprised even the experts.

“We anticipated a prolonged period of social distancing would be necessary, but didn’t initially realize that it could be this long,” Dr. Kissler said.

Claudio Maierovitch Pessanha Henriques: The myth of the peak (Folha de S.Paulo)

www1.folha.uol.com.br

Claudio Maierovitch Pessanha Henriques – May 6, 2020

Since the beginning of the epidemic of disease caused by the new coronavirus (Covid-19), the big question has been “when will it end?” Projections of every kind about the famous disease curve, for individual countries and for the world, circulate constantly in the media and on social networks; some recent ones show new cases ceasing to appear early in the second half of this year.

Such models start from the assumption that there is a script, a natural curve of the disease, which begins, rises, reaches a peak and starts to fall. Let us examine the logic of this reasoning. Many acute transmissible diseases, when they reach a new population, spread rapidly, at a speed that depends on their so-called basic reproduction number, or R0 (“R zero”, which estimates how many people each carrier of an infectious agent transmits it to).

When a large number of people have fallen ill, or have been infected even without symptoms, contacts between carriers and people who have not yet had the disease become rare. In a scenario where survivors of the infection become immune to that agent, their share of the population grows and transmission becomes ever rarer. The curve that had been rising then flattens and begins to fall, and may even reach zero, the point at which the agent stops circulating.

In large populations it is very rare for a disease to be completely eliminated this way, which is why incidence rises again from time to time. When the number of people who were never infected, plus newborn babies and non-immune people arriving from elsewhere, becomes large enough, the curve climbs once more.

This, in simplified form, is how science understands the periodic occurrence of epidemics of acute infectious diseases. History offers numerous examples: smallpox, measles, influenza, rubella, poliomyelitis and mumps, among many others. Depending on the characteristics of the disease and of the society, these cycles are marked by suffering, sequelae and deaths. In such cases it really is possible to estimate how long an epidemic will last and, sometimes, even to predict the next one.

Public health has many tools to intervene in these cycles, suited to different transmission mechanisms: sanitation, hygiene measures, isolation, vector control, condom use, elimination of sources of contamination, vaccines and treatments capable of eliminating the microorganism. Vaccination, regarded as the most effective specific health measure, mimics what happens naturally: it raises the number of immune people in the population until the disease stops circulating, without anyone having to fall ill for that to happen.

In the case of Covid-19, estimates suggest that for the disease to stop circulating intensely, about 70% of the population would have to be infected. This is called collective immunity (the unpleasant term “herd immunity” is also used). As for the current spread of the coronavirus Sars-CoV-2, the World Health Organization (WHO) calculates that by mid-April only 2% to 3% of the world’s population will have been infected. Estimates for Brazil are slightly below that average.

In plain terms, for the disease to reach its peak naturally in Brazil and begin to decline, some 140 million people would have to be infected. The most conservative (lowest) fatality rate found in the Covid-19 literature is 0.36%, roughly one-twentieth of what the official counts of cases and deaths suggest. This means that by the time Brazil reached the peak we would be counting 500,000 deaths if the health system stayed within its limits, and far more if it did not.
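A quick back-of-envelope check of the two preceding paragraphs, assuming a Brazilian population of roughly 210 million (a figure the column itself does not state): the classic herd-immunity threshold is 1 − 1/R0, so the “about 70%” corresponds to an R0 of roughly 3, and the infection and death counts follow from simple multiplication.

```latex
% Herd-immunity threshold for an assumed R0 of about 3:
p_c = 1 - \frac{1}{R_0} \approx 1 - \frac{1}{3} \approx 0.67 \quad\text{(the ``about 70\%'' cited)}
% Infections needed (assumed population of 210 million):
0.70 \times 2.1\times 10^{8} \approx 1.5\times 10^{8} \quad\text{(about 140 million)}
% Deaths at the most conservative fatality rate quoted (0.36\%):
0.0036 \times 1.4\times 10^{8} \approx 5.0\times 10^{5} \quad\text{(about 500{,}000)}
```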

Reaching the peak is synonymous with catastrophe; it is not an acceptable bet, above all when hospital capacity has already been exhausted in several cities, such as Manaus, Rio de Janeiro and Fortaleza, with others heading the same way.

The only acceptable prospect is to avoid the peak, and the only way to do that is with rigorous physical distancing measures. The quota of contact between people should be reserved for essential activities: health care, security, the supply chains for fuel, food, cleaning products and medical materials and equipment, plus cleaning, maintenance and a few other sectors. A dose of creativity may widen this range somewhat, provided that public transport and streets remain empty enough for the minimum distance between people to be maintained.

Monitoring of case and death counts, which reveals transmission with a lag of two to three weeks, will have to be improved and used alongside studies based on laboratory testing to calibrate how strict the isolation measures must be.

If we manage to avoid the greater tragedy, we will live with a long period of restricted activity, more than a year, and will have to learn to organize life and the economy in other ways, as well as endure occasional periods of lockdown, about two weeks each, whenever the curve points toward the peak again.

Today the situation is serious and tends to become critical. Brazil is the country with the highest transmission rate of the disease; it is time to stay home and, if going out is unavoidable, to make a mask an inseparable part of one’s clothing and to maintain rigorously all the recommended precautions.

Not quite all there. The 90% economy that lockdowns will leave behind (The Economist)

It will not just be smaller, it will feel strange

Briefing – Apr 30th 2020 edition

IN THE 1970s Mori Masahiro, a professor at the Tokyo Institute of Technology, observed that there was something disturbing about robots which looked almost, but not quite, like people. Representations in this “uncanny valley” are close enough to lifelike for their shortfalls and divergences from the familiar to be particularly disconcerting. Today’s Chinese economy is exploring a similarly unnerving new terrain. And the rest of the world is following in its uncertain steps.

Whatever the drawbacks of these new lowlands, they are assuredly preferable to the abyss of lockdown. Measures taken to reverse the trajectory of the pandemic around the world have brought with them remarkable economic losses.

Not all sectors of the economy have done terribly. New subscriptions to Netflix increased at twice their usual rate in the first quarter of 2020, with most of that growth coming in March. In America, the sudden stop of revenue from Uber’s ride-sharing service in March and April has been partially cushioned by the 25% increase of sales from its food-delivery unit, according to 7Park Data, a data provider.

Yet the general pattern is grim. Data from Womply, a firm which processes transactions on behalf of 450,000 small businesses across America, show that businesses in all sectors have lost substantial revenue. Restaurants, bars and recreational businesses have been badly hit: revenues have declined some two-thirds since March 15th. Travel and tourism may suffer the worst losses. In the EU, where tourism accounts for some 4% of GDP, the number of people travelling by plane fell from 5m to 50,000; on April 19th less than 5% of hotel rooms in Italy and Spain were occupied.

According to calculations made on behalf of The Economist by Now-Casting Economics, a research firm that provides high-frequency economic forecasts to institutional investors, the world economy shrank by 1.3% year-on-year in the first quarter of 2020, driven by a 6.8% year-on-year decline in China’s GDP. The Federal Reserve Bank of New York draws on measures such as jobless claims to produce a weekly index of American economic output. It suggests that the country’s GDP is currently running about 12% lower than it was a year ago (see chart 1).

These figures fit with attempts by Goldman Sachs, a bank, to estimate the relationship between the severity of lockdowns and their effect on output. It finds, roughly, that an Italian-style lockdown is associated with a GDP decline of 25%. Measures to control the virus while either keeping the economy running reasonably smoothly, as in South Korea, or reopening it, as in China, are associated with a GDP reduction in the region of 10%. That chimes with data which suggest that if Americans chose to avoid person-to-person proximity of the length of an arm or less, occupations worth approximately 10% of national output would become unviable.

The “90% economy” thus created will be, by definition, smaller than that which came before. But its strangeness will be more than a matter of size. There will undoubtedly be relief, fellow feeling, and newly felt or expressed esteem for those who have worked to keep people safe. But there will also be residual fear, pervasive uncertainty, a lack of innovative fervour and deepened inequalities. The fraction of life that is missing will colour people’s experience and behaviour in ways that will not be offset by the happy fact that most of what matters is still available and ticking over. In a world where the office is open but the pub is not, qualitative differences in the way life feels will be at least as significant as the drop in output.

The plight of the pub demonstrates that the 90% economy will not be something that can be fixed by fiat. Allowing pubs—and other places of social pleasure—to open counts for little if people do not want to visit them. Many people will have to leave the home in order to work, but they may well feel less comfortable doing so to have a good time. A poll by YouGov on behalf of The Economist finds that over a third of Americans think it will be “several months” before it will be safe to reopen businesses as normal—which suggests that if businesses do reopen some, at least, may stay away.

Ain’t nothing but tired

Some indication that the spending effects of a lockdown will persist even after it is over comes from Sweden. Research by Niels Johannesen of Copenhagen University and colleagues finds that aggregate-spending patterns in Sweden and Denmark over the past months look similarly reduced, even though Denmark has had a pretty strict lockdown while official Swedish provisions have been exceptionally relaxed. This suggests that personal choice, rather than government policy, is the biggest factor behind the drop. And personal choices may be harder to reverse.

Discretionary spending by Chinese consumers—the sort that goes on things economists do not see as essentials—is 40% off its level a year ago. Haidilao, a hotpot chain, is seeing a bit more than three parties per table per day—an improvement, but still lower than the 4.8 registered last year, according to a report by Goldman Sachs published in mid-April. Breweries are selling 40% less beer. STR, a data-analytics firm, finds that just one-third of hotel beds in China were occupied during the week ending April 19th. Flights remain far from full (see chart 2).

This less social world is not necessarily bad news for every company. UBS, a bank, reports that a growing number of people in China say that the virus has increased their desire to buy a car—presumably in order to avoid the risk of infection on public transport. The number of passengers on Chinese underground trains is still about a third below last year’s level; surface traffic congestion is as bad now as it was then.

Wanting a car, though, will not mean being able to afford one. Drops in discretionary spending are not entirely driven by a residual desire for isolation. They also reflect the fact that some people have a lot less money in the post-lockdown world. Not all those who have lost jobs will quickly find new ones, not least because there is little demand for labour-intensive services such as leisure and hospitality. Even those in jobs will not feel secure, the Chinese experience suggests. Since late March the share of people worried about salary cuts has risen slightly, to 44%, making it their biggest concern for 2020, according to Morgan Stanley, a bank. Many are now recouping the loss of income that they suffered during the most acute phase of the crisis, or paying down debt. All this points to high saving rates in the future, reinforcing low consumption.

A 90% economy is, on one level, an astonishing achievement. Had the pandemic struck even two decades ago, only a tiny minority of people would have been able to work or satisfy their needs. Watching a performance of Beethoven on a computer, or eating a meal from a favourite restaurant at home, is not the same as the real thing—but it is not bad. The lifting of the most stringent lockdowns will also provide respite, both emotionally and physically, since the mere experience of being told what you can and cannot do is unpleasant. Yet in three main ways a 90% economy is a big step down from what came before the pandemic. It will be more fragile; it will be less innovative; and it will be more unfair.

Take fragility first. The return to a semblance of normality could be fleeting. Areas which had apparently controlled the spread of the virus, including Singapore and northern Japan, have imposed or reimposed tough restrictions in response to a rise in the growth rate of new infections. If countries which retain relatively tough social-distancing rules do better at staving off a viral comeback, other countries may feel a need to follow them (see Chaguan). With rules in flux, it will feel hard to plan weeks ahead, let alone months.

Can’t start a fire

The behaviour of the economy will be far less predictable. No one really knows for how long firms facing zero revenues, or households who are working reduced hours or not at all, will be able to survive financially. Businesses can keep going temporarily, either by burning cash or by tapping grants and credit lines set up by government—but these are unlimited neither in size nor duration. What is more, a merely illiquid firm can quickly become a truly insolvent one as its earnings stagnate while its debt commitments expand. A rise in corporate and personal bankruptcies, long after the apparently acute phase of the pandemic, seems likely, though governments are trying to forestall them. In the past fortnight bankruptcies in China started to rise relative to last year. On April 28th HSBC, one of the world’s largest banks, reported worse-than-expected results, in part because of higher credit losses.

Furthermore, the pandemic has upended norms and conventions about how economic agents behave. In Britain the share of commercial tenants who paid their rent on time fell from 90% to 60% in the first quarter of this year. A growing number of American renters are no longer paying their landlords. Other creditors are being put off, too. In America, close to 40% of business-to-business payments from firms in the spectator-sports and film industries were late in March, double the rate a year ago. Enforcing contracts has become more difficult with many courts closed and social interactions at a standstill. This is perhaps the most insidious means by which weak sectors of the economy will infect otherwise moderately healthy ones.

In an environment of uncertain property rights and unknowable income streams, potential investment projects are not just risky—they are impossible to price. A recent paper by Scott Baker of Northwestern University and colleagues suggests that economic uncertainty is at an all-time high. That may go some way to explaining the results of a weekly survey from Moody’s Analytics, a research firm, which finds that businesses’ investment intentions are substantially lower even than during the financial crisis of 2007-09. An index which measures American nonresidential construction activity 9-12 months ahead has also hit new lows.

The collapse in investment points to the second trait of the 90% economy: that it will be less innovative. The development of liberal capitalism over the past three centuries went hand in hand with a growth in the number of people exchanging ideas in public or quasi-public spaces. Access to the coffeehouse, the salon or the street protest was always a partial process, favouring some people over others. But a vibrant public sphere fosters creativity.

Innovation is not impossible in a world with less social contact. There is more than one company founded in a garage now worth $1trn. During lockdowns, companies have had to innovate quickly—just look at how many firms have turned their hand to making ventilators, if with mixed success. A handful of firms claim that working from home is so productive that their offices will stay closed for good.

Yet these productivity bonuses look likely to be heavily outweighed by drawbacks. Studies suggest the benefits of working from home only materialise if employees can frequently check in at an office in order to solve problems. Planning new projects is especially difficult. Anyone who has tried to bounce ideas around on Zoom or Skype knows that spontaneity is hard. People are often using bad equipment with poor connections. Nick Bloom of Stanford University, one of the few economists to have studied working from home closely, reckons that there will be a sharp decline in patent applications in 2021.

Cities have proven particularly fertile ground for innovations which drive long-run growth. If Geoffrey West, a physicist who studies complex systems, is right to suggest that doubling a city’s population leads to all concerned becoming on aggregate 15% richer, then the emptying-out of urban areas is bad news. MoveBuddha, a relocation website, says that searches for places in New York City’s suburbs are up almost 250% compared with this time last year. A paper from New York University suggests that richer, and thus presumably more educated, New Yorkers—people from whom a disproportionate share of ideas may flow—are particularly likely to have left during the epidemic.

Something happening somewhere

Wherever or however people end up working, the experience of living in a pandemic is not conducive to creative thought. How many people entered lockdown with a determination to immerse themselves in Proust or George Eliot, only to find themselves slumped in front of “Tiger King”? When mental capacity is taken up by worries about whether or not to touch that door handle or whether or not to believe the results of the latest study on the virus, focusing is difficult. Women are more likely to take care of home-schooling and entertainment of bored children (see article), meaning their careers suffer more than men’s. Already, research by Tatyana Deryugina, Olga Shurchkov and Jenna Stearns, three economists, finds that the productivity of female economists, as measured by production of research papers, has fallen relative to male ones since the pandemic began.

The growing gender divide in productivity points to the final big problem with the 90% economy: that it is unfair. Liberally regulated economies operating at full capacity tend to have unemployment rates of 4-5%, in part because there will always be people temporarily unemployed as they move from one job to another. The new normal will have higher joblessness. This is not just because GDP will be lower; the decline in output will be particularly concentrated in labour-intensive industries such as leisure and hospitality, reducing employment disproportionately. America’s current unemployment rate, real-time data suggest, is between 15% and 20%.

The lost jobs tended to pay badly, and were more likely to be performed by the young, women and immigrants. Research by Abi Adams-Prassl of Oxford University and colleagues finds that an American who normally earns less than $20,000 a year is twice as likely to have lost their job due to the pandemic as one earning $80,000-plus. Many of those unlucky people do not have the skills, nor the technology, that would enable them to work from home or to retrain for other jobs.

The longer the 90% economy endures, the more such inequalities will deepen. People who already enjoy strong professional networks—largely, those of middle age and higher—may actually quite enjoy the experience of working from home. Notwithstanding the problems of bad internet and irritating children, it may be quite pleasant to chair fewer meetings or performance reviews. Junior folk, even if they make it into an office, will miss out on the expertise and guidance of their seniors. Others with poor professional networks, such as the young or recently arrived immigrants, may find it difficult or impossible to strengthen them, hindering upward mobility, points out Tyler Cowen of George Mason University.

The world economy that went into retreat in March as covid-19 threatened lives was one that looked sound and strong. And the biomedical community is currently working overtime to produce a vaccine that will allow the world to be restored to its full capacity. But estimates suggest that this will take at least another 12 months—and, as with the prospects of the global economy, that figure is highly uncertain. If the adage that it takes two months to form a habit holds, the economy that re-emerges will be fundamentally different.