Tag archive: Uncertainty

The professionals who predict the future for a living (MIT Technology Review)

technologyreview.com

Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?

Bobbie Johnson

February 26, 2020


Inez Fung

Professor of atmospheric science, University of California, Berkeley

Prediction for 2030: We’ll light up the world… safely

I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”

I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.

Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: the cloud data are still not fully utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have over half a million years of precipitation records from all over Asia.

I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.

Anne Lise Kjaer

Futurist, Kjaer Global, London

Prediction for 2030: Adults will learn to grasp new ideas

As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.

When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.

A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.

So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?

What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?

Philip Tetlock

Coauthor of Superforecasting and professor, University of Pennsylvania

Prediction for 2030: We’ll get better at being uncertain

At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction, but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.

Forecasts have the potential to be either self-fulfilling or self-negating. Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., how likely is X conditional on our doing this or doing that?

What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.
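
For concreteness, forecasting tournaments of this kind typically keep score with a proper scoring rule such as the Brier score, the mean squared difference between the probabilities a forecaster assigned and what actually happened. Below is a minimal, generic sketch with made-up forecasts—not the Good Judgment Project’s own scoring code.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    A perfect forecaster scores 0.0; always answering 50% scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster who gave 80%, 30%, and 60% to three events,
# of which the first and third actually occurred.
print(brier_score([0.8, 0.3, 0.6], [1, 0, 1]))  # ~0.097
```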

Keith Chen

Associate professor of economics, UCLA

Prediction for 2030: We’ll be more—and less—private

When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.

These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?

I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.

Annalee Newitz

Science fiction and nonfiction author, San Francisco

Prediction for 2030: We’re going to see a lot more humble technology

Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.

Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.

There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.

It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.

I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.

We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.

Finale Doshi-Velez

Associate professor of computer science, Harvard

Prediction for 2030: Humans and machines will make decisions together

In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.

Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.

There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.

We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.

I’m excited about combining humans and AI to make predictions. Let’s say your AI has an error rate of 70% and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
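
One way to see why fusing two imperfect predictors can beat either alone is a toy simulation: assume each predictor sees the truth plus independent noise (a strong assumption, and not the lab’s actual method), calibrate each to roughly 70% accuracy, and average their scores. The numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y = rng.integers(0, 2, size=n)        # true binary labels
signal = 0.5244 * (2 * y - 1)         # strength chosen so each predictor alone is ~70% accurate

# Two predictors ("AI" and "human") see the same signal plus independent noise.
ai_score = signal + rng.standard_normal(n)
human_score = signal + rng.standard_normal(n)

accuracy = lambda score: np.mean((score > 0) == (y == 1))
fused = (ai_score + human_score) / 2  # simple fusion: average the two scores

print(f"AI alone:    {accuracy(ai_score):.3f}")     # ~0.70
print(f"Human alone: {accuracy(human_score):.3f}")  # ~0.70
print(f"Fused:       {accuracy(fused):.3f}")        # ~0.77, better than either alone
```

With correlated errors, or with hard yes/no labels instead of graded scores, the gain shrinks or disappears—which is part of why the fusion question is genuinely hard.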

All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.

Abdoulaye Banire Diallo

Professor, director of the bioinformatics lab, University of Quebec at Montreal

Prediction for 2030: Machine-based forecasting will be regulated

When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help with forecasting—to help decision makers like the farmer have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.

With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.

Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.

This story was part of our March 2020 issue.

Ten million reasons to vaccinate the world (The Economist)

economist.com

Our model reveals the true course of the pandemic. Here is what to do next

May 15th 2021


THIS WEEK we publish our estimate of the true death toll from covid-19. It tells the real story of the pandemic. But it also contains an urgent warning. Unless vaccine supplies reach poorer countries, the tragic scenes now unfolding in India risk being repeated elsewhere. Millions more will die.

Using known data on 121 variables, from recorded deaths to demography, we have built a pattern of correlations that lets us fill in gaps where numbers are lacking. Our model suggests that covid-19 has already claimed 7.1m-12.7m lives. Our central estimate is that 10m people have died who would otherwise be living. This tally of “excess deaths” is over three times the official count, which nevertheless is the basis for most statistics on the disease, including fatality rates and cross-country comparisons.
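
The quantity behind these figures, “excess deaths,” is simply observed all-cause mortality minus a pre-pandemic baseline; the hard modelling work lies in estimating that baseline, and the deaths themselves, where records are patchy. Here is a minimal sketch of the arithmetic, using invented numbers rather than The Economist’s data or model.

```python
# Illustrative only: weekly all-cause deaths for one hypothetical country.
baseline = [1000, 1020, 990, 1010, 1000]   # expected deaths, from pre-pandemic years
observed = [1005, 1300, 1650, 1500, 1250]  # deaths actually registered
official_covid = [2, 120, 260, 210, 150]   # deaths officially attributed to covid-19

excess = [obs - base for obs, base in zip(observed, baseline)]
print(sum(excess))                          # 1685 excess deaths
print(sum(excess) / sum(official_covid))    # ~2.3x the official covid toll
```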

The most important insight from our work is that covid-19 has been harder on the poor than anyone knew. Official figures suggest that the pandemic has struck in waves, and that the United States and Europe have been hit hard. Although South America has been ravaged, the rest of the developing world seemed to get off lightly.

Our modelling tells another story. When you count all the bodies, you see that the pandemic has spread remorselessly from the rich, connected world to poorer, more isolated places. As it has done so, the global daily death rate has climbed steeply.

Death rates have been very high in some rich countries, but the overwhelming majority of the 6.7m or so deaths that nobody counted were in poor and middle-income ones. In Romania and Iran excess deaths are more than double the number officially put down to covid-19. In Egypt they are 13 times as big. In America the difference is 7.1%.

India, where about 20,000 are dying every day, is not an outlier. Our figures suggest that, in terms of deaths as a share of population, Peru’s pandemic has been 2.5 times worse than India’s. The disease is working its way through Nepal and Pakistan. Infectious variants spread faster and, because of the tyranny of exponential growth, overwhelm health-care systems and fill mortuaries even if the virus is no more lethal.

Ultimately the way to stop this is vaccination. As an example of collaboration and pioneering science, covid-19 vaccines rank with the Apollo space programme. Within just a year of the virus being discovered, people could be protected from severe disease and death. Hundreds of millions of them have benefited.

However, in the short run vaccines will fuel the divide between rich and poor. Soon, the only people to die from covid-19 in rich countries will be exceptionally frail or exceptionally unlucky, as well as those who have spurned the chance to be vaccinated. In poorer countries, by contrast, most people will have no choice. They will remain unprotected for many months or years.

The world cannot rest while people perish for want of a jab costing as little as $4 for a two-dose course. It is hard to think of a better use of resources than vaccination. Economists’ central estimate for the direct value of a course is $2,900—if you include factors like long covid and the effect of impaired education, the total is much bigger. The benefit from an extra 1bn doses supplied by July would be worth hundreds of billions of dollars. Less circulating virus means less mutation, and so a lower chance of a new variant that reinfects the vaccinated.

Supplies of vaccines are already growing. By the end of April, according to Airfinity, an analytics firm, vaccine-makers produced 1.7bn doses, 700m more than the end of March and ten times more than January. Before the pandemic, annual global vaccine capacity was roughly 3.5bn doses. The latest estimates are that total output in 2021 will be almost 11bn. Some in the industry predict a global surplus in 2022.

And yet the world is right to strive to get more doses in more arms sooner. Hence President Joe Biden has proposed waiving intellectual-property claims on covid-19 vaccines. Many experts argue that, because some manufacturing capacity is going begging, millions more doses might become available if patent-owners shared their secrets, including in countries that today are at the back of the queue. World-trade rules allow for a waiver. When to invoke them, if not in the throes of a pandemic?

We believe that Mr Biden is wrong. A waiver may signal that his administration cares about the world, but it is at best an empty gesture and at worst a cynical one.

A waiver will do nothing to fill the urgent shortfall of doses in 2021. The head of the World Trade Organisation, the forum where it will be thrashed out, warns there may be no vote until December. Technology transfer would take six months or so to complete even if it started today. With the new mRNA vaccines made by Pfizer and Moderna, it may take longer. Supposing the tech transfer was faster than that, experienced vaccine-makers would be unavailable for hire and makers could not obtain inputs from suppliers whose order books are already bursting. Pfizer’s vaccine requires 280 inputs from suppliers in 19 countries. No firm can recreate that in a hurry.

In any case, vaccine-makers do not appear to be hoarding their technology—otherwise output would not be increasing so fast. They have struck 214 technology-transfer agreements, an unprecedented number. They are not price-gouging: money is not the constraint on vaccination. Poor countries are not being priced out of the market: their vaccines are coming through COVAX, a global distribution scheme funded by donors.

In the longer term, the effect of a waiver is unpredictable. Perhaps it will indeed lead to technology being transferred to poor countries; more likely, though, it will cause harm by disrupting supply chains, wasting resources and, ultimately, deterring innovation. Whatever the case, if vaccines are nearing a surplus in 2022, the cavalry will arrive too late.

A needle in time

If Mr Biden really wants to make a difference, he can donate vaccine right now through COVAX. Rich countries over-ordered because they did not know which vaccines would work. Britain has ordered more than nine doses for each adult, Canada more than 13. These will be urgently needed elsewhere. It is wrong to put teenagers, who have a minuscule risk of dying from covid-19, before the elderly and health-care workers in poor countries. The rich world should not stockpile boosters to cover the population many times over on the off-chance that they may be needed. In the next six months, this could yield billions of doses of vaccine.

Countries can also improve supply chains. The Serum Institute, an Indian vaccine-maker, has struggled to get parts such as filters from America because exports were gummed up by the Defence Production Act (DPA), which puts suppliers on a war-footing. Mr Biden authorised a one-off release, but he should be focusing the DPA on supplying the world instead. And better use needs to be made of finished vaccine. In some poor countries, vaccine languishes unused because of hesitancy and chaotic organisation. It makes sense to prioritise getting one shot into every vulnerable arm, before setting about the second.

Our model is not predictive. However, it does suggest that some parts of the world are particularly vulnerable—one example is South-East Asia, home to over 650m people, which has so far been spared mass fatalities for no obvious reason. Covid-19 has not yet run its course. But vaccines have created the chance to save millions of lives. The world must not squander it. ■

This article appeared in the Leaders section of the print edition under the headline “Vaccinating the world”

Has the Era of Overzealous Cleaning Finally Come to an End? (New York Times)

nytimes.com

Emily Anthes, April 8, 2021


This week, the C.D.C. acknowledged what scientists have been saying for months: The risk of catching the coronavirus from surfaces is low.
A hotel room in Long Beach, Wash., being fogged with sanitizer. “There’s really no evidence that anyone has ever gotten Covid-19 by touching a contaminated surface,” one researcher noted.
Credit: Celeste Noche for The New York Times

When the coronavirus began to spread in the United States last spring, many experts warned of the danger posed by surfaces. Researchers reported that the virus could survive for days on plastic or stainless steel, and the Centers for Disease Control and Prevention advised that if someone touched one of these contaminated surfaces — and then touched their eyes, nose or mouth — they could become infected.

Americans responded in kind, wiping down groceries, quarantining mail and clearing drugstore shelves of Clorox wipes. Facebook closed two of its offices for a “deep cleaning.” New York’s Metropolitan Transportation Authority began disinfecting subway cars every night.

But the era of “hygiene theater” may have come to an unofficial end this week, when the C.D.C. updated its surface cleaning guidelines and noted that the risk of contracting the virus from touching a contaminated surface was less than 1 in 10,000.

“People can be affected with the virus that causes Covid-19 through contact with contaminated surfaces and objects,” Dr. Rochelle Walensky, the director of the C.D.C., said at a White House briefing on Monday. “However, evidence has demonstrated that the risk by this route of infection of transmission is actually low.”

The admission is long overdue, scientists say.

“Finally,” said Linsey Marr, an expert on airborne viruses at Virginia Tech. “We’ve known this for a long time and yet people are still focusing so much on surface cleaning.” She added, “There’s really no evidence that anyone has ever gotten Covid-19 by touching a contaminated surface.”

During the early days of the pandemic, many experts believed that the virus spread primarily through large respiratory droplets. These droplets are too heavy to travel long distances through the air but can fall onto objects and surfaces.

In this context, a focus on scrubbing down every surface seemed to make sense. “Surface cleaning is more familiar,” Dr. Marr said. “We know how to do it. You can see people doing it, you see the clean surface. And so I think it makes people feel safer.”

A “sanitization specialist” at an Applebee’s Grill and Bar in Westbury, N.Y., wiping down a used pen last year. Restaurants and other businesses have highlighted extra cleaning in their marketing since the pandemic began.
Credit: Hiroko Masuike/The New York Times

But over the last year, it has become increasingly clear that the virus spreads primarily through the air — in both large and small droplets, which can remain aloft longer — and that scouring door handles and subway seats does little to keep people safe.

“The scientific basis for all this concern about surfaces is very slim — slim to none,” said Emanuel Goldman, a microbiologist at Rutgers University, who wrote last summer that the risk of surface transmission had been overblown. “This is a virus you get by breathing. It’s not a virus you get by touching.”

The C.D.C. has previously acknowledged that surfaces are not the primary way that the virus spreads. But the agency’s statements this week went further.

“The most important part of this update is that they’re clearly communicating to the public the correct, low risk from surfaces, which is not a message that has been clearly communicated for the past year,” said Joseph Allen, a building safety expert at the Harvard T.H. Chan School of Public Health.

Catching the virus from surfaces remains theoretically possible, he noted. But it requires many things to go wrong: a lot of fresh, infectious viral particles to be deposited on a surface, and then for a relatively large quantity of them to be quickly transferred to someone’s hand and then to their face. “Presence on a surface does not equal risk,” Dr. Allen said.

In most cases, cleaning with simple soap and water — in addition to hand-washing and mask-wearing — is enough to keep the odds of surface transmission low, the C.D.C.’s updated cleaning guidelines say. In most everyday scenarios and environments, people do not need to use chemical disinfectants, the agency notes.

“What this does very usefully, I think, is tell us what we don’t need to do,” said Donald Milton, an aerosol scientist at the University of Maryland. “Doing a lot of spraying and misting of chemicals isn’t helpful.”

Still, the guidelines do suggest that if someone who has Covid-19 has been in a particular space within the last day, the area should be both cleaned and disinfected.

“Disinfection is only recommended in indoor settings — schools and homes — where there has been a suspected or confirmed case of Covid-19 within the last 24 hours,” Dr. Walensky said during the White House briefing. “Also, in most cases, fogging, fumigation and wide-area or electrostatic spraying is not recommended as a primary method of disinfection and has several safety risks to consider.”

And the new cleaning guidelines do not apply to health care facilities, which may require more intensive cleaning and disinfection.

Saskia Popescu, an infectious disease epidemiologist at George Mason University, said that she was happy to see the new guidance, which “reflects our evolving data on transmission throughout the pandemic.”

But she noted that it remained important to continue doing some regular cleaning — and maintaining good hand-washing practices — to reduce the risk of contracting not just the coronavirus but any other pathogens that might be lingering on a particular surface.

Dr. Allen said that the school and business officials he has spoken with this week expressed relief over the updated guidelines, which will allow them to pull back on some of their intensive cleaning regimens. “This frees up a lot of organizations to spend that money better,” he said.

Schools, businesses and other institutions that want to keep people safe should shift their attention from surfaces to air quality, he said, and invest in improved ventilation and filtration.

“This should be the end of deep cleaning,” Dr. Allen said, noting that the misplaced focus on surfaces has had real costs. “It has led to closed playgrounds, it has led to taking nets off basketball courts, it has led to quarantining books in the library. It has led to entire missed school days for deep cleaning. It has led to not being able to share a pencil. So that’s all that hygiene theater, and it’s a direct result of not properly classifying surface transmission as low risk.”

Roni Caryn Rabin contributed reporting.

Michael E. Mann: “My Comments on New National Academy Report on Geoengineering”

By Michael E. Mann on Thursday, March 25, 2021 – 12:26

The U.S. National Academy of Sciences has published a new report (“Reflecting Sunlight”) on the topic of geoengineering (that is, the deliberate manipulation of the global Earth environment in an effort to offset the effects of human carbon pollution-caused climate change). While I am, in full disclosure, a member of the Academy, I offer the following comments in an entirely independent capacity:

Let me start by congratulating the authors on their comprehensive assessment of the science. It is solid, as we would expect: the expertise of the author team and reviewers covers the science underlying geoengineering well, and that science is the true remit of the study. Chris Field, the lead author, is a duly qualified person to lead the effort, and did a good job making sure that the intricacies of the science are covered, including the substantial uncertainties and caveats when it comes to the potential environmental impacts of some of the riskier geoengineering strategies (i.e., stratospheric sulphate aerosol injection to block out sunlight).

I like the fact that there is a discussion of the importance of labels and terminology and how this can impact public perception. For example, the oft-used term “solar radiation management” is not favored by the report authors, as it can be misleading (we don’t have our hand on a dial that controls solar output). On the other hand, I think the term they do choose to use, “solar geoengineering,” is still potentially problematic, because it still implies we’re directly modifying solar output—but that’s not the case. We’re talking about messing with Earth’s atmospheric chemistry; we’re not dialing down the sun, even though many of the modeling experiments assume that’s what we’re doing. It’s a bit of a bait and switch. Even the title of the report, “Reflecting Sunlight,” falls victim to this biased framing.

In my recent book (“The New Climate War”), I quote one leading scientist on this:

“They don’t actually put aerosols in the atmosphere. They turn down the Sun to mimic geoengineering. You might think that is relatively unimportant . . . [but] controlling the Sun is effectively a perfect knob. We know almost precisely how a reduction in solar flux will project onto the energy balance of a planet. Aerosol-climate interactions are much more complex.”

I have a deeper and more substantive concern though, and it really is about the entire framing of the report. A report like this is as much about the policy message it conveys as it is about the scientific assessment, for it will be used immediately by policy advocates. And here I’m honestly troubled at the fodder it provides for mis-framing of the risks.

I recognize that the authors are dealing with a contentious and still much-debated topic, and it’s a challenge to represent the full range of views within the community, but the opening of the report itself, in my view, really puts a thumb on the scales. It falls victim to the moral hazard that I warn about in “The New Climate War” when it states, as justification for potentially considering implementing these geoengineering schemes:

But despite overwhelming evidence that the climate crisis is real and pressing, emissions of greenhouse gases continue to increase, with global emissions of fossil carbon dioxide rising 10.8 percent from 2010 through 2019. The total for 2020 is on track to decrease in response to decreased economic activity related to the COVID-19 pandemic. The pandemic is thus providing frustrating confirmation of the fact that the world has made little progress in separating economic activity from carbon dioxide emissions.

First of all, the discussion of carbon emissions reductions there is misleading. Emissions flattened in the years before the pandemic, and the International Energy Agency (IEA) specifically attributed that flattening to a decrease in carbon emissions globally in the power generation sector. These reductions continue and contributed at least partly to the 7% decrease in global emissions last year. We will certainly need policy interventions favoring further decarbonization to maintain that level of decrease year after year, but if we can do that, we remain on a path to limiting warming below dangerous levels (a decent chance of staying below 1.5C and a very good chance of staying below 2C) without resorting to very risky geoengineering schemes. It is a matter of political willpower, not technology—we already have the technology necessary to decarbonize our economy.

The authors are basically arguing that because carbon reductions haven’t been great enough (thanks to successful opposition by polluters and their advocates) we should consider geoengineering. That framing (unintentionally, I realize) provides precisely the crutch that polluters are looking for.

As I explain in the book:

A fundamental problem with geoengineering is that it presents what is known as a moral hazard, namely, a scenario in which one party (e.g., the fossil fuel industry) promotes actions that are risky for another party (e.g., the rest of us), but seemingly advantageous to itself. Geoengineering provides a potential crutch for beneficiaries of our continued dependence on fossil fuels. Why threaten our economy with draconian regulations on carbon when we have a cheap alternative? The two main problems with that argument are that (1) climate change poses a far greater threat to our economy than decarbonization, and (2) geoengineering is hardly cheap—it comes with great potential harm.

So, in short, this report is somewhat of a mixed bag. The scientific assessment and discussion is solid, and there is a discussion of uncertainties and caveats in the detailed report. But the spin in the opening falls victim to moral hazard and will provide fodder for geoengineering advocates to use in leveraging policy decision-making.

I am somewhat troubled by that.

Opinion: Bill Gates and Warren Buffett should thank American taxpayers for their profitable farmland investments (Market Watch)

www-marketwatch-com.cdn.ampproject.org

Last Updated: March 10, 2021 at 5:59 p.m. ET. First Published: March 10, 2021 at 8:28 a.m. ET

By Vincent H. Smith and Eric J. Belasco

Congress has reduced risk by underwriting crop prices and cash revenues

Bill Gates is now the largest owner of farmland in the U.S., having made substantial investments in at least 19 states throughout the country. He has apparently followed the advice of another wealthy investor, Warren Buffett, who in a February 24, 2014, letter to investors described farmland as an investment that has “no downside and potentially substantial upside.”

There is a simple explanation for this affection for agricultural assets. Since the early 1980s, Congress has consistently succumbed to pressures from farm interest groups to remove as much risk as possible from agricultural enterprises by using taxpayer funds to underwrite crop prices and cash revenues.

Over the years, three trends in farm subsidy programs have emerged.

The first and most visible is the expansion of the federally supported crop insurance program, which has grown from less than $200 million in 1981 to over $8 billion in 2021. In 1980, only a few crops were covered and the government’s goal was just to pay for administrative costs. Today taxpayers pay over two-thirds of the total cost of the insurance programs that protect farmers against drops in prices and yields for hundreds of commodities ranging from organic oranges to GMO soybeans.

The second trend is the continuation of longstanding programs to protect farmers against relatively low revenues because of price declines and lower-than-average crop yields. The subsidies, which on average cost taxpayers over $5 billion a year, are targeted to major Corn Belt crops such as soybeans and wheat. Also included are other commodities such as peanuts, cotton and rice, which are grown in congressionally powerful districts in Georgia, the Carolinas, Texas, Arkansas, Mississippi and California.

The third, more recent trend is a return over the past four years to a 1970s practice: annual ad hoc “one off” programs justified by political expediency with support from the White House and Congress. These expenditures were $5.1 billion in 2018, $14.7 billion in 2019, and over $32 billion in 2020, of which $29 billion came from COVID relief funds authorized in the CARES Act. An additional $13 billion for farm subsidies was later included in the December 2020 stimulus bill.

If you are wondering why so many different subsidy programs are used to compensate farmers multiple times for the same price drops and other revenue losses, you are not alone. Our research indicates that many owners of large farms collect taxpayer dollars from all three sources. For many of the farms ranked in the top 10% in terms of sales, recent annual payments exceeded a quarter of a million dollars.

Farms with average or modest sales received much less. Their subsidies ranged from close to zero for small farms to a few thousand dollars for average-sized operations.

So what does all this have to do with Bill Gates, Warren Buffett and their love of farmland as an investment? In a financial environment in which real interest rates have been near zero or negative for almost two decades, the annual average inflation-adjusted (real) rate of return in agriculture (over 80% of which consists of land) has been about 5% for the past 30 years, despite some ups and downs. It is a very solid investment for an owner who can hold on to farmland for the long term.

The overwhelming majority of farm owners can manage that because they have substantial amounts of equity (the sector-wide debt-to-equity ratio has been less than 14% for many years) and receive significant revenue from other sources.

Thus for almost all farm owners, and especially the largest 10% whose net equity averages over $6 million, as Buffett observed, there is little or no risk and lots of potential gain in owning and investing in agricultural land.

Returns from agricultural land stem from two sources: asset appreciation — increases in land prices, which account for the majority of the gains — and net cash income from operating the land. As is well known, farmland prices are closely tied to expected future revenue. And these include generous subsidies, which have averaged 17% of annual net cash incomes over the past 50 years. In addition, Congress often provides substantial additional one-off payments in years when net cash income is likely to be lower than average, as in 2000 and 2001 when grain prices were relatively low and in 2019 and 2020.
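
To make the arithmetic concrete, here is a purely illustrative decomposition with assumed numbers: only the 17% subsidy share comes from the figures above, while the land value, appreciation rate, and cash income are hypothetical.

```python
land_value = 1_000_000          # hypothetical farm, in dollars
appreciation_rate = 0.03        # assumed annual real gain in land prices
net_cash_income = 20_000        # assumed annual net cash income from operating the land
subsidy_share_of_income = 0.17  # article: subsidies averaged 17% of net cash income

total_return = appreciation_rate * land_value + net_cash_income  # $50,000
subsidy_dollars = subsidy_share_of_income * net_cash_income      # $3,400

print(total_return / land_value)       # 0.05 -> roughly the 5% real return cited above
print(subsidy_dollars / total_return)  # ~0.07 -> subsidies' share of the total return
```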

It is possible for small-scale investors to buy shares in real-estate investment trusts (REITs) that own and manage agricultural land. However, as with all such investments, how a REIT is managed can be a substantive source of risk unrelated to the underlying value of the land assets, not all of which may be farm land.

Thanks to Congress and the average less affluent American taxpayer, farmers and other agricultural landowners get a steady and substantial return on their investments through subsidies that consistently guarantee and increase those revenues.

While many agricultural support programs are meant to “save the family farm,” the largest beneficiaries of agricultural subsidies are the richest landowners with the largest farms who, like Bill Gates and Warren Buffett, are scarcely in any need of taxpayer handouts.

Vincent H. Smith is director of agricultural studies at the American Enterprise Institute, a Washington, D.C. think tank, and professor of economics at Montana State University. Eric J. Belasco is a visiting scholar at AEI.

NOAA Acknowledges the New Reality of Hurricane Season (Gizmodo)

earther.gizmodo.com

Molly Taft, March 2, 2021


This combination of satellite images provided by the National Hurricane Center shows the 30 named storms that occurred during the 2020 Atlantic hurricane season.

We’re one step closer to officially moving up hurricane season. The National Hurricane Center announced Tuesday that it would formally start issuing its hurricane season tropical weather outlooks on May 15 this year, bumping it up from the traditional start of hurricane season on June 1. The move comes after a recent spate of early-season storms has raked the Atlantic.

Atlantic hurricane season runs from June 1 to November 30. That’s when conditions are most conducive to storm formation owing to warm air and water temperatures. (The Pacific Ocean has its own hurricane season, which covers the same timeframe, but since waters are colder, fewer hurricanes tend to form there than in the Atlantic.)

Storms have begun forming in the Atlantic earlier as ocean and air temperatures have increased due to climate change. Last year, Tropical Storm Arthur roared to life off the East Coast on May 16. That storm made 2020 the sixth hurricane season in a row to have a storm that formed earlier than the June 1 official start date. While the National Oceanic and Atmospheric Administration won’t be moving up the start of the season just yet, the earlier outlooks address the recent history.

“In the last decade, there have been 10 storms formed in the weeks before the traditional start of the season, which is a big jump,” said Sean Sublette, a meteorologist at Climate Central, who pointed out that the 1960s through 2010s saw between one and three storms each decade before the June 1 start date on average.

It might be tempting to ascribe this earlier season entirely to climate change warming the Atlantic. But technology also has a role to play, with more observations along the coast as well as satellites that can spot storms far out to sea.

“I would caution that we can’t just go, ‘hah, the planet’s warming, we’ve had to move the entire season!’” Sublette said. “I don’t think there’s solid ground for attribution of how much of one there is over the other. Weather folks can sit around and debate that for awhile.”

Earlier storms don’t necessarily mean more harmful ones, either. In fact, hurricanes earlier in the season tend to be weaker than the monsters that form in August and September when hurricane season is at its peak. But regardless of their strength, these earlier storms have generated discussion inside the NHC on whether to move up the official start date for the season, when the agency usually puts out two reports per day on hurricane activity. Tuesday’s step is not an official announcement of this decision, but an acknowledgement of the increased attention on early hurricanes.

“I would say that [Tuesday’s announcement] is the National Hurricane Center being proactive,” Sublette said. “Like hey, we know that the last few years it’s been a little busier in May than we’ve seen in the past five decades, and we know there is an awareness now, so we’re going to start issuing these reports early.”

While the jury is still out on whether climate change is pushing the season earlier, research has shown that the strongest hurricanes are becoming more common, and that climate change is likely playing a role. A study published last year found the odds of a storm becoming a major hurricane—those Category 3 or stronger—have increased 49% in the basin since satellite monitoring began in earnest four decades ago. And when storms make landfall, sea level rise allows them to do more damage. So regardless of whether climate change is pushing Atlantic hurricane season earlier or not, the risks are increasing. Now, at least, we’ll have better warnings before early storms do hit.

5 Pandemic Mistakes We Keep Repeating (The Atlantic)

theatlantic.com

Zeynep Tufekci

February 26, 2021


We can learn from our failures.

When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.

One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.

The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.

This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.

Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.

The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.

Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.

The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.

Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.

Risk Compensation

One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.

The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.

But time and again, the numbers tell a different story: Even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures—people interested in staying safe are interested in staying safe—and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation—and its implicit assumptions—continue to haunt our approach, in part because there hasn’t been a reckoning with the initial missteps.

Rules in Place of Mechanisms and Intuitions

Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.

In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.

All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.

Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.

Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.
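
Overdispersion is easy to visualize with a toy simulation: draw each case’s number of secondary infections from a negative binomial distribution with a small dispersion parameter. The parameters below (a mean of about 2.5 secondary cases and dispersion k = 0.1) are common illustrative values from the epidemiological literature, not figures taken from this article.

```python
import numpy as np

rng = np.random.default_rng(1)
R0, k = 2.5, 0.1          # assumed mean secondary cases and dispersion (illustrative values)
p = k / (k + R0)          # success probability giving a mean of R0 in this parameterization

# Number of people each of 100,000 infected individuals goes on to infect.
offspring = rng.negative_binomial(k, p, size=100_000)

print(np.mean(offspring == 0))          # ~0.72: most cases infect no one at all
ranked = np.sort(offspring)[::-1]
top_decile_share = ranked[: len(ranked) // 10].sum() / ranked.sum()
print(top_decile_share)                 # ~0.8: the top 10% of cases cause most infections
```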

After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.

Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.

Outdoors? It’s the opposite.

There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: The open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading—the biggest driver of the pandemic—appears to be an exclusively indoor phenomenon. I’ve been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.

And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.

We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.

Scolding and Shaming

Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.

Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?

On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.

While visible but low-risk activities attract the scolds, other actual risks—in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave—are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it’s almost as if we’ve “designed a public-health response most suitable for higher-income” groups and the “Twitter generation”—stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online—rather than provide the support and conditions necessary for more people to keep themselves safe.

And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.

Harm Reduction

Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex with no protections.

As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, “When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want.”

Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor of medicine and infectious diseases at Boston University, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.

Socializing is not a luxury—kids need to play with one another, and adults need to interact. “Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors” is not just a sensible message; it’s a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they’ll do it indoors, out of sight of the scolding.

And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.

The Balance Between Knowledge And Action

Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.

Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.

Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.

And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
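As a rough illustration of what “rejecting the null hypothesis” involves, and of why a finding of “no evidence” often just means “not enough data to detect an effect,” here is a small sketch using made-up numbers rather than any real pandemic data:

```python
from scipy.stats import fisher_exact

# Made-up toy counts: 3 of 20 exposed contacts became ill versus 1 of 20 unexposed.
# A genuine difference may exist, but a sample this small cannot "reject the null."
table = [[3, 17],   # exposed:   ill, not ill
         [1, 19]]   # unexposed: ill, not ill

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2f}")

# The p-value here lands well above the usual 0.05 cutoff, so a paper would report
# "no statistically significant evidence" of a difference. That is "not proven,"
# which is very different from "proven absent."
```

That distinction between “not proven” and “proven absent” is exactly what got lost in the statements discussed below.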

At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.

Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.

Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”

Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.

As late as April 2020, Anthony Fauci was slammed as too optimistic for suggesting we might plausibly have vaccines in a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.

Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percentage point higher.
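For readers who want to see where a figure like “95 percent efficacy” comes from, here is a minimal sketch of the standard calculation, using illustrative round numbers rather than the actual trial counts: efficacy is one minus the ratio of the attack rate in the vaccinated arm to the attack rate in the placebo arm.

```python
# Illustrative round numbers, not the actual Moderna or Pfizer trial data.
cases_vaccine, n_vaccine = 8, 20_000    # symptomatic cases in the vaccine arm
cases_placebo, n_placebo = 160, 20_000  # symptomatic cases in the placebo arm

attack_rate_vaccine = cases_vaccine / n_vaccine
attack_rate_placebo = cases_placebo / n_placebo

efficacy = 1 - attack_rate_vaccine / attack_rate_placebo
print(f"Vaccine efficacy against symptomatic disease: {efficacy:.0%}")  # 95% with these counts
```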

These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.

And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.

Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.

Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.

So why isn’t this story more widely appreciated?

Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.

Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”

So we didn’t get our initial vaccine jubilation.

But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.

People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than with providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.

Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.

As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.

And yet, from the beginning, a good chunk of the public-facing messaging and news articles implied or claimed that vaccines won’t protect you against infecting other people or that we didn’t know if they would, when both were false. I found myself trying to convince people in my own social network that vaccines weren’t useless against transmission, and being bombarded on social media with claims that they were.

What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.

A year into the pandemic, we’re still repeating the same mistakes.

The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only have an enthusiastic audience; portions of the public engage in these behaviors themselves. Maybe that’s partly because proclaiming the importance of individual actions makes us feel as if we are in the driver’s seat, despite all the uncertainty.

Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to them. This focus on individual control goes along with something called the “fundamental attribution error”—when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.

An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.

You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.

There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.

And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.

The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.

Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.

The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.

Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.

Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.

For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One California study found that line cooks face a higher risk of dying of COVID-19 than workers in any other occupation. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can work safely, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they pose a real danger. Bill Hanage, an associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—the agency that oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.

New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”

Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.

Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.

But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.

Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.

Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.

And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.

But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.

Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.

Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect rules on public behavior to ease as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why; and also when and how increased vaccination will change more than their individual risks and opportunities, and will see us out of this pandemic.

We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.

Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”

Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.

The Coronavirus Is Plotting a Comeback. Here’s Our Chance to Stop It for Good. (New York Times)

nytimes.com

Apoorva Mandavilli


Lincoln Park in Chicago. Scientists are hopeful, as vaccinations continue and despite the emergence of variants, that we’re past the worst of the pandemic. Credit: Lyndon French for The New York Times
Many scientists are expecting another rise in infections. But this time the surge will be blunted by vaccines and, hopefully, widespread caution. By summer, Americans may be looking at a return to normal life.

Published Feb. 25, 2021; updated Feb. 26, 2021, 12:07 a.m. ET

Across the United States, and the world, the coronavirus seems to be loosening its stranglehold. The deadly curve of cases, hospitalizations and deaths has yo-yoed before, but never has it plunged so steeply and so fast.

Is this it, then? Is this the beginning of the end? After a year of being pummeled by grim statistics and scolded for wanting human contact, many Americans feel a long-promised deliverance is at hand.

Americans will win against the virus and regain many aspects of their pre-pandemic lives, most scientists now believe. Of the 21 interviewed for this article, all were optimistic that the worst of the pandemic is past. This summer, they said, life may begin to seem normal again.

But — of course, there’s always a but — researchers are also worried that Americans, so close to the finish line, may once again underestimate the virus.

So far, the two vaccines authorized in the United States are spectacularly effective, and after a slow start, the vaccination rollout is picking up momentum. A third vaccine is likely to be authorized shortly, adding to the nation’s supply.

But it will be many weeks before vaccinations make a dent in the pandemic. And now the virus is shape-shifting faster than expected, evolving into variants that may partly sidestep the immune system.

The latest variant was discovered in New York City only this week, and another worrisome version is spreading at a rapid pace through California. Scientists say a contagious variant first discovered in Britain will become the dominant form of the virus in the United States by the end of March.

The road back to normalcy is potholed with unknowns: how well vaccines prevent further spread of the virus; whether emerging variants remain susceptible enough to the vaccines; and how quickly the world is immunized, so as to halt further evolution of the virus.

But the greatest ambiguity is human behavior. Can Americans desperate for normalcy keep wearing masks and distancing themselves from family and friends? How much longer can communities keep businesses, offices and schools closed?

Covid-19 deaths will most likely never rise quite as precipitously as in the past, and the worst may be behind us. But if Americans let down their guard too soon — many states are already lifting restrictions — and if the variants spread in the United States as they have elsewhere, another spike in cases may well arrive in the coming weeks.

Scientists call it the fourth wave. The new variants mean “we’re essentially facing a pandemic within a pandemic,” said Adam Kucharski, an epidemiologist at the London School of Hygiene and Tropical Medicine.

A patient received comfort in the I.C.U. of Marian Regional Medical Center in Santa Maria, Calif., last month. 
Credit: Daniel Dreifuss for The New York Times

The United States has now recorded 500,000 deaths amid the pandemic, a terrible milestone. As of Wednesday morning, at least 28.3 million people have been infected.

But the rate of new infections has tumbled by 35 percent over the past two weeks, according to a database maintained by The New York Times. Hospitalizations are down 31 percent, and deaths have fallen by 16 percent.

Yet the numbers are still at the horrific highs of November, scientists noted. At least 3,210 people died of Covid-19 on Wednesday alone. And there is no guarantee that these rates will continue to decrease.

“Very, very high case numbers are not a good thing, even if the trend is downward,” said Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health in Boston. “Taking the first hint of a downward trend as a reason to reopen is how you get to even higher numbers.”

In late November, for example, Gov. Gina Raimondo of Rhode Island limited social gatherings and some commercial activities in the state. Eight days later, cases began to decline. The trend reversed eight days after the state’s pause lifted on Dec. 20.

The virus’s latest retreat in Rhode Island and most other states, experts said, results from a combination of factors: growing numbers of people with immunity to the virus, either from having been infected or from vaccination; changes in behavior in response to the surges of a few weeks ago; and a dash of seasonality — the effect of temperature and humidity on the survival of the virus.

Parts of the country that experienced huge surges in infection, like Montana and Iowa, may be closer to herd immunity than other regions. But patchwork immunity alone cannot explain the declines throughout much of the world.

The vaccines were first rolled out to residents of nursing homes and to the elderly, who are at highest risk of severe illness and death. That may explain some of the current decline in hospitalizations and deaths.

A volunteer in the Johnson & Johnson vaccine trial received a shot in the Desmond Tutu H.I.V. Foundation Youth Center in Masiphumelele, South Africa, in December.
Credit: Joao Silva/The New York Times

But young people drive the spread of the virus, and most of them have not yet been inoculated. And the bulk of the world’s vaccine supply has been bought up by wealthy nations, which have amassed one billion more doses than needed to immunize their populations.

Vaccination cannot explain why cases are dropping even in countries where not a single soul has been immunized, like Honduras, Kazakhstan or Libya. The biggest contributor to the sharp decline in infections is something more mundane, scientists say: behavioral change.

Leaders in the United States and elsewhere stepped up community restrictions after the holiday peaks. But individual choices have also been important, said Lindsay Wiley, an expert in public health law and ethics at American University in Washington.

“People voluntarily change their behavior as they see their local hospital get hit hard, as they hear about outbreaks in their area,” she said. “If that’s the reason that things are improving, then that’s something that can reverse pretty quickly, too.”

The downward curve of infections with the original coronavirus disguises an exponential rise in infections with B.1.1.7, the variant first identified in Britain, according to many researchers.

“We really are seeing two epidemic curves,” said Ashleigh Tuite, an infectious disease modeler at the University of Toronto.

The B.1.1.7 variant is thought to be more contagious and more deadly, and it is expected to become the predominant form of the virus in the United States by late March. The number of cases with the variant in the United States has risen from 76 in 12 states as of Jan. 13 to more than 1,800 in 45 states now. Actual infections may be much higher because of inadequate surveillance efforts in the United States.

Buoyed by the shrinking rates over all, however, governors are lifting restrictions across the United States and are under enormous pressure to reopen completely. Should that occur, B.1.1.7 and the other variants are likely to explode.

“Everybody is tired, and everybody wants things to open up again,” Dr. Tuite said. “Bending to political pressure right now, when things are really headed in the right direction, is going to end up costing us in the long term.”

A fourth wave doesn’t have to be inevitable, scientists say, but the new variants will pose a significant challenge to averting that wave.
Credit: Lyndon French for The New York Times

Looking ahead to late March or April, the majority of scientists interviewed by The Times predicted a fourth wave of infections. But they stressed that it is not an inevitable surge, if government officials and individuals maintain precautions for a few more weeks.

A minority of experts were more sanguine, saying they expected powerful vaccines and an expanding rollout to stop the virus. And a few took the middle road.

“We’re at that crossroads, where it could go well or it could go badly,” said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

The vaccines have proved to be more effective than anyone could have hoped, so far preventing serious illness and death in nearly all recipients. At present, about 1.4 million Americans are vaccinated each day. More than 45 million Americans have received at least one dose.

A team of researchers at Fred Hutchinson Cancer Research Center in Seattle tried to calculate the number of vaccinations required per day to avoid a fourth wave. In a model completed before the variants surfaced, the scientists estimated that vaccinating just one million Americans a day would limit the magnitude of the fourth wave.

“But the new variants completely changed that,” said Dr. Joshua T. Schiffer, an infectious disease specialist who led the study. “It’s just very challenging scientifically — the ground is shifting very, very quickly.”

Natalie Dean, a biostatistician at the University of Florida, described herself as “a little more optimistic” than many other researchers. “We would be silly to undersell the vaccines,” she said, noting that they are effective against the fast-spreading B.1.1.7 variant.

But Dr. Dean worried about the forms of the virus detected in South Africa and Brazil that seem less vulnerable to the vaccines made by Pfizer and Moderna. (On Wednesday, Johnson & Johnson reported that its vaccine was relatively effective against the variant found in South Africa.)

Coronavirus test samples in a lab for genomic sequencing at Duke University in Durham, N.C., earlier this month.
Credit: Pete Kiehart for The New York Times

About 50 infections with those two variants have been identified in the United States, but that could change. Because of the variants, scientists do not know how many people who were infected and had recovered are now vulnerable to reinfection.

South Africa and Brazil have reported reinfections with the new variants among people who had recovered from infections with the original version of the virus.

“That makes it a lot harder to say, ‘If we were to get to this level of vaccinations, we’d probably be OK,’” said Sarah Cobey, an evolutionary biologist at the University of Chicago.

Yet the biggest unknown is human behavior, experts said. The sharp drop in cases now may lead to complacency about masks and distancing, and to a wholesale lifting of restrictions on indoor dining, sporting events and more. Or … not.

“The single biggest lesson I’ve learned during the pandemic is that epidemiological modeling struggles with prediction, because so much of it depends on human behavioral factors,” said Carl Bergstrom, a biologist at the University of Washington in Seattle.

Taking into account the counterbalancing rises in both vaccinations and variants, along with the high likelihood that people will stop taking precautions, a fourth wave is highly likely this spring, the majority of experts told The Times.

Kristian Andersen, a virologist at the Scripps Research Institute in San Diego, said he was confident that the number of cases will continue to decline, then plateau in about a month. After mid-March, the curve in new cases will swing upward again.

In early to mid-April, “we’re going to start seeing hospitalizations go up,” he said. “It’s just a question of how much.”

Hospitalizations and deaths will fall to levels low enough to reopen the country — though mask-wearing may remain necessary as a significant portion of people, including children, won’t be immunized.
Credit: Kendrick Brinson for The New York Times

Now the good news.

Despite the uncertainties, the experts predict that the last surge will subside in the United States sometime in the early summer. If the Biden administration can keep its promise to immunize every American adult by the end of the summer, the variants should be no match for the vaccines.

Combine vaccination with natural immunity and the human tendency to head outdoors as weather warms, and “it may not be exactly herd immunity, but maybe it’s sufficient to prevent any large outbreaks,” said Youyang Gu, an independent data scientist, who created some of the most prescient models of the pandemic.

Infections will continue to drop. More important, hospitalizations and deaths will fall to negligible levels — enough, hopefully, to reopen the country.

“Sometimes people lose vision of the fact that vaccines prevent hospitalization and death, which is really actually what most people care about,” said Stefan Baral, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health.

Even as the virus begins its swoon, people may still need to wear masks in public places and maintain social distance, because a significant percent of the population — including children — will not be immunized.

“Assuming that we keep a close eye on things in the summer and don’t go crazy, I think that we could look forward to a summer that is looking more normal, but hopefully in a way that is more carefully monitored than last summer,” said Emma Hodcroft, a molecular epidemiologist at the University of Bern in Switzerland.

Imagine: Groups of vaccinated people will be able to get together for barbecues and play dates, without fear of infecting one another. Beaches, parks and playgrounds will be full of mask-free people. Indoor dining will return, along with movie theaters, bowling alleys and shopping malls — although they may still require masks.

The virus will still be circulating, but the extent will depend in part on how well vaccines prevent not just illness and death, but also transmission. The data on whether vaccines stop the spread of the disease are encouraging, but immunization is unlikely to block transmission entirely.

Self-swab testing for Covid at Duke University in February.
Credit: Pete Kiehart for The New York Times

“It’s not zero and it’s not 100 — exactly where that number is will be important,” said Shweta Bansal, an infectious disease modeler at Georgetown University. “It needs to be pretty darn high for us to be able to get away with vaccinating anything below 100 percent of the population, so that’s definitely something we’re watching.”

Over the long term — say, a year from now, when all the adults and children in the United States who want a vaccine have received them — will this virus finally be behind us?

Every expert interviewed by The Times said no. Even after the vast majority of the American population has been immunized, the virus will continue to pop up in clusters, taking advantage of pockets of vulnerability. Years from now, the coronavirus may be an annoyance, circulating at low levels, causing modest colds.

Many scientists said their greatest worry post-pandemic was that new variants may turn out to be significantly less susceptible to the vaccines. Billions of people worldwide will remain unprotected, and each infection gives the virus new opportunities to mutate.

“We won’t have useless vaccines. We might have slightly less good vaccines than we have at the moment,” said Andrew Read, an evolutionary microbiologist at Penn State University. “That’s not the end of the world, because we have really good vaccines right now.”

For now, every one of us can help by continuing to be careful for just a few more months, until the curve permanently flattens.

“Just hang in there a little bit longer,” Dr. Tuite said. “There’s a lot of optimism and hope, but I think we need to be prepared for the fact that the next several months are likely to continue to be difficult.”


Texas Power Grid Run by ERCOT Set Up the State for Disaster (New York Times)

nytimes.com

Clifford Krauss, Manny Fernandez, Ivan Penn, Rick Rojas – Feb 21, 2021


Texas has refused to join interstate electrical grids and railed against energy regulation. Now it’s having to answer to millions of residents who were left without power in last week’s snowstorm.

The cost of a free market electrical grid became painfully clear last week, as a snowstorm descended on Texas and millions of people ran out of power and water.
Credit: Nitashia Johnson for The New York Times

HOUSTON — Across the plains of West Texas, the pump jacks that resemble giant bobbing hammers define not just the landscape but the state itself: Texas has been built on the oil-and-gas business for the last 120 years, ever since the discovery of oil on Spindletop Hill near Beaumont in 1901.

Texas, the nation’s leading energy-producing state, seemed like the last place on Earth that could run out of energy.

Then last week, it did.

The crisis could be traced to that other defining Texas trait: independence, both from big government and from the rest of the country. The dominance of the energy industry and the “Republic of Texas” ethos became a devastating liability when energy stopped flowing to millions of Texans who shivered and struggled through a snowstorm that paralyzed much of the state.

Part of the responsibility for the near-collapse of the state’s electrical grid can be traced to the decision in 1999 to embark on the nation’s most extensive experiment in electrical deregulation, handing control of the state’s entire electricity delivery system to a market-based patchwork of private generators, transmission companies and energy retailers.

The energy industry wanted it. The people wanted it. Both parties supported it. “Competition in the electric industry will benefit Texans by reducing monthly rates and offering consumers more choices about the power they use,” George W. Bush, then the governor, said as he signed the top-to-bottom deregulation legislation.

Mr. Bush’s prediction of lower-cost power generally came true, and the dream of a free-market electrical grid worked reasonably well most of the time, in large part because Texas had so much cheap natural gas as well as abundant wind to power renewable energy. But the newly deregulated system came with few safeguards and even fewer enforced rules.

With so many cost-conscious utilities competing for budget-shopping consumers, there was little financial incentive to invest in weather protection and maintenance. Wind turbines are not equipped with the de-icing equipment routinely installed in the colder climes of the Dakotas, and power lines have little insulation. The possibility of more frequent cold-weather events was never built into infrastructure plans in a state where climate change remains an exotic, disputed concept.

“Deregulation was something akin to abolishing the speed limit on an interstate highway,” said Ed Hirs, an energy fellow at the University of Houston. “That opens up shortcuts that cause disasters.”

The state’s entire energy infrastructure was walloped with glacial temperatures that even under the strongest of regulations might have frozen gas wells and downed power lines.

But what went wrong was far broader: Deregulation meant that critical rules of the road for power were set not by law, but rather by a dizzying array of energy competitors.

Utility regulation is intended to compensate for the natural monopolies that occur when a single electrical provider serves an area; it keeps prices down while protecting public safety and guaranteeing fair treatment to customers. Yet many states have flirted with deregulation as a way of giving consumers more choices and encouraging new providers, especially alternative energy producers.

California, one of the early deregulators in the 1990s, scaled back its initial foray after market manipulation led to skyrocketing prices and rolling blackouts.

States like Maryland allow customers to pick from a menu of producers. In some states, competing private companies offer varied packages like discounts for cheaper power at night. But no state has gone as far as Texas, which has not only turned over the keys to the free market but has also isolated itself from the national grid, limiting the state’s ability to import power when its own generators are foundering.

Consumers themselves got a direct shock last week when customers who had chosen variable-rate electricity contracts found themselves with power bills of $5,000 or more. While they were expecting extra-low monthly rates, many may now face huge bills as a result of the upswing in wholesale electricity prices during the cold wave. Gov. Greg Abbott on Sunday said the state’s Public Utility Commission has issued a moratorium on customer disconnections for non-payment and will temporarily restrict providers from issuing invoices.

A family in Austin, Texas, kept warm by a fire outside their apartment on Wednesday. They lost power early Monday morning.
Credit: Tamir Kalifa for The New York Times

There is regulation in the Texas system, but it is hardly robust. One nonprofit agency, the Electric Reliability Council of Texas, or ERCOT, was formed to manage the wholesale market. It is supervised by the Public Utility Commission, which also oversees the transmission companies that offer customers an exhaustive array of contract choices laced with more fine print than a credit card agreement.

But both agencies are nearly unaccountable and toothless compared to regulators in other regions, where many utilities have stronger consumer protections and submit an annual planning report to ensure adequate electricity supply. Texas energy companies are given wide latitude in their planning for catastrophic events.

One example of how Texas has gone it alone is its refusal to enforce a “reserve margin” of extra power available above expected demand, unlike all other power systems around North America. With no mandate, there is little incentive to invest in precautions for events, such as a Southern snowstorm, that are rare. Any company that took such precautions would put itself at a competitive disadvantage.
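As a point of reference, a reserve margin is conventionally calculated as the capacity available above expected peak demand, expressed as a share of that peak. The figures in the sketch below are illustrative assumptions, not ERCOT’s actual capacity or demand forecasts.

```python
# Illustrative figures, not ERCOT's actual capacity or demand forecasts.
available_capacity_mw = 86_000
expected_peak_demand_mw = 75_000

reserve_margin = (available_capacity_mw - expected_peak_demand_mw) / expected_peak_demand_mw
print(f"Reserve margin: {reserve_margin:.0%}")  # about 15% with these numbers
```

A mandated floor on that percentage is the safeguard that, as the article notes, other North American power systems enforce and Texas does not.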

A surplus supply of natural gas, the dominant power fuel in Texas, near power plants might have helped avoid the cascade of failures in which power went off, forcing natural gas production and transmission offline, which in turn led to further power shortages.

In the aftermath of the dayslong outages, ERCOT has been criticized by both Democratic and Republican residents, lawmakers and business executives, a rare display of unity in a fiercely partisan and Republican-dominated state. Mr. Abbott said he supported calls for the agency’s leadership to resign and made ERCOT reform a priority for the Legislature. The reckoning has been swift — this week, lawmakers will hold hearings in Austin to investigate the agency’s handling of the storm and the rolling outages.

For ERCOT operators, the storm’s arrival was swift and fierce, but they had anticipated it and knew it would strain their system. They asked power customers across the state to conserve, warning that outages were likely.

But late on Sunday, Feb. 14, it rapidly became clear that the storm was far worse than they had expected: Sleet and snow fell, and temperatures plunged. In the council’s command center outside Austin, a room dominated by screens flashing with maps, graphics and data tracking the flow of electricity to 26 million people in Texas, workers quickly found themselves fending off a crisis. As weather worsened into Monday morning, residents cranked up their heaters and demand surged.

Power plants began falling offline in rapid succession as they were overcome by the frigid weather or ran out of fuel to burn. Within hours, 40 percent of the power supply had been lost.

The entire grid — carrying 90 percent of the electric load in Texas — was barreling toward a collapse.

Much of Austin lost power last week due to rolling blackouts.
Credit: Tamir Kalifa for The New York Times

In the electricity business, supply and demand need to be in balance. Imbalances lead to catastrophic blackouts. Recovering from a total blackout would be an agonizing and tedious process, known as a “black start,” that could take weeks, or possibly months.

And in the early-morning hours last Monday, the Texas grid was “seconds and minutes” away from such a collapse, said Bill Magness, the president and chief executive of the Electric Reliability Council.

“If we had allowed a catastrophic blackout to happen, we wouldn’t be talking today about hopefully getting most customers their power back,” Mr. Magness said. “We’d be talking about how many months it might be before you get your power back.”

The outages and the cold weather touched off an avalanche of failures, but there had been warnings long before last week’s storm.

After a heavy snowstorm in February 2011 caused statewide rolling blackouts and left millions of Texans in the dark, federal authorities warned the state that its power infrastructure had inadequate “winterization” protection. But 10 years later, pipelines remained inadequately insulated and heaters that might have kept instruments from freezing were never installed.

During heat waves, when demand has soared during several recent summers, the system in Texas has also strained to keep up, raising questions about lack of reserve capacity on the unregulated grid.

And aside from the weather, there have been periodic signs that the system can run into trouble delivering sufficient energy, in some cases because of equipment failures, in others because of what critics called an attempt to drive up prices, according to Mr. Hirs of the University of Houston, as well as several energy consultants.

Another potential safeguard might have been far stronger connections to the two interstate power-sharing networks, East and West, that allow states to link their electrical grids and obtain power from thousands of miles away when needed to hold down costs and offset their own shortfalls.

But Texas, reluctant to submit to the federal regulation that is part of the regional power grids, made decisions as far back as the early 20th century to become the only state in the continental United States to operate its own grid — a plan that leaves it able to borrow only from a few close neighbors.

The border city of El Paso survived the freeze much better than Dallas or Houston because it was not part of the Texas grid but connected to the much larger grid covering many Western states.

But the problems that began with last Monday’s storm went beyond an isolated electrical grid. The entire ecosystem of how Texas generates, transmits and uses power stalled, as millions of Texans shivered in darkened, unheated homes.

A surplus supply of natural gas, the dominant power fuel in Texas, near power plants might have helped avoid the cascade of failures.
Credit: Eddie Seal/Bloomberg

Texans love to brag about natural gas, which state officials often call the cleanest-burning fossil fuel. No state produces more, and gas-fired power plants produce nearly half the state’s electricity.

“We are struggling to come to grips with the reality that gas came up short and let us down when we needed it most,” said Michael E. Webber, a professor of mechanical engineering at the University of Texas at Austin.

The cold was so severe that the enormous oil and natural gas fields of West Texas froze up, or could not get sufficient power to operate. Though a few plants had stored gas reserves, there was insufficient electricity to pump it.

The leaders of ERCOT defended the organization, its lack of mandated reserves and the state’s isolation from larger regional grids, and said the blame for the power crisis lies with the weather, not the overall deregulated system in Texas.

“The historic, just about unprecedented, storm was the heart of the problem,” Mr. Magness, the council’s chief executive, said, adding: “We’ve found that this market structure works. It demands reliability. I don’t think there’s a silver-bullet market structure that could have managed the extreme lows and generation outages that we were facing Sunday night.”

In Texas, energy regulation is as much a matter of philosophy as policy. Its independent power grid is a point of pride that has been an applause line in Texas political speeches for decades.

Deregulation is a hot topic among Texas energy experts, and there has been no shortage of predictions that the grid could fail under stress. But there has not been widespread public dissatisfaction with the system, although many are now wondering if they are being well served.

“I believe there is great value in Texas being on its own grid and I believe we can do so safely and securely and confidently going forward,” said State Representative Jeff Leach, a Republican from Plano who has called for an investigation into what went wrong. “But it’s going to take new investment and some new strategic decisions to make sure we’re protected from this ever happening again.”

Steven D. Wolens, a former Democratic lawmaker from Dallas and a principal architect of the 1999 deregulation legislation, said deregulation was meant to spur more generation, including from renewable energy sources, and to encourage the mothballing of older plants that were spewing pollution. “We were successful,” said Mr. Wolens, who left the Legislature in 2005.

But the 1999 legislation was intended as a first iteration that would evolve along with the needs of the state, he said. “They can focus on it now and they can fix it now,” he said. “The buck stops with the Texas Legislature and they are in a perfect position to determine the basis of the failure, to correct it and make sure it never happens again.”

Clifford Krauss reported from Houston, Manny Fernandez and Ivan Penn from Los Angeles, and Rick Rojas from Nashville. David Montgomery contributed reporting from Austin, Texas.

Texas Blackouts Point to Coast-to-Coast Crises Waiting to Happen (New York Times)

nytimes.com

Christopher Flavelle, Brad Plumer, Hiroko Tabuchi – Feb 20, 2021


Traffic at a standstill on Interstate 35 in Killeen, Texas, on Thursday. Credit: Joe Raedle/Getty Images
Continent-spanning storms triggered blackouts in Oklahoma and Mississippi, halted one-third of U.S. oil production and disrupted vaccinations in 20 states.

Even as Texas struggled to restore electricity and water over the past week, signs of the risks posed by increasingly extreme weather to America’s aging infrastructure were cropping up across the country.

The week’s continent-spanning winter storms triggered blackouts in Texas, Oklahoma, Mississippi and several other states. One-third of oil production in the nation was halted. Drinking-water systems in Ohio were knocked offline. Road networks nationwide were paralyzed and vaccination efforts in 20 states were disrupted.

The crisis carries a profound warning. As climate change brings more frequent and intense storms, floods, heat waves, wildfires and other extreme events, it is placing growing stress on the foundations of the country’s economy: Its network of roads and railways, drinking-water systems, power plants, electrical grids, industrial waste sites and even homes. Failures in just one sector can set off a domino effect of breakdowns in hard-to-predict ways.

Much of this infrastructure was built decades ago, under the expectation that the environment around it would remain stable, or at least fluctuate within predictable bounds. Now climate change is upending that assumption.

“We are colliding with a future of extremes,” said Alice Hill, who oversaw planning for climate risks on the National Security Council during the Obama administration. “We base all our choices about risk management on what’s occurred in the past, and that is no longer a safe guide.”

While it’s not always possible to say precisely how global warming influenced any one particular storm, scientists said, an overall rise in extreme weather creates sweeping new risks.

Sewer systems are overflowing more often as powerful rainstorms exceed their design capacity. Coastal homes and highways are collapsing as intensified runoff erodes cliffs. Coal ash, the toxic residue produced by coal-burning plants, is spilling into rivers as floods overwhelm barriers meant to hold it back. Homes once beyond the reach of wildfires are burning in blazes they were never designed to withstand.

A broken water main in McComb, Miss., on Thursday.
Credit: Matt Williamson/The Enterprise-Journal, via Associated Press

Problems like these often reflect an inclination of governments to spend as little money as possible, said Shalini Vajjhala, a former Obama administration official who now advises cities on meeting climate threats. She said it’s hard to persuade taxpayers to spend extra money to guard against disasters that seem unlikely.

But climate change flips that logic, making inaction far costlier. “The argument I would make is, we can’t afford not to, because we’re absorbing the costs” later, Ms. Vajjhala said, after disasters strike. “We’re spending poorly.”

The Biden administration has talked extensively about climate change, particularly the need to reduce greenhouse gas emissions and create jobs in renewable energy. But it has spent less time discussing how to manage the growing effects of climate change, facing criticism from experts for not appointing more people who focus on climate resilience.

“I am extremely concerned by the lack of emergency-management expertise reflected in Biden’s climate team,” said Samantha Montano, an assistant professor at the Massachusetts Maritime Academy who focuses on disaster policy. “There’s an urgency here that still is not being reflected.”

A White House spokesman, Vedant Patel, said in a statement, “Building resilient and sustainable infrastructure that can withstand extreme weather and a changing climate will play an integral role in creating millions of good paying, union jobs” while cutting greenhouse gas emissions.

And while President Biden has called for a major push to refurbish and upgrade the nation’s infrastructure, getting a closely divided Congress to spend hundreds of billions, if not trillions of dollars, will be a major challenge.

Heightening the cost to society, disruptions can disproportionately affect lower-income households and other vulnerable groups, including older people or those with limited English.

“All these issues are converging,” said Robert D. Bullard, a professor at Texas Southern University who studies wealth and racial disparities related to the environment. “And there’s simply no place in this country that’s not going to have to deal with climate change.”

Flooding around Edenville Township, Mich., last year swept away a bridge over the Tittabawassee River.
Credit: Matthew Hatcher/Getty Images

In September, when a sudden storm dumped a record of more than two inches of water on Washington in less than 75 minutes, the result wasn’t just widespread flooding, but also raw sewage rushing into hundreds of homes.

Washington, like many other cities in the Northeast and Midwest, relies on what’s called a combined sewer overflow system: If a downpour overwhelms storm drains along the street, they are built to overflow into the pipes that carry raw sewage. But if there’s too much pressure, sewage can be pushed backward, into people’s homes — where the forces can send it erupting from toilets and shower drains.

This is what happened in Washington. The city’s system was built in the late 1800s. Now, climate change is straining an already outdated design.

DC Water, the local utility, is spending billions of dollars so that the system can hold more sewage. “We’re sort of in uncharted territory,” said Vincent Morris, a utility spokesman.

The challenge of managing and taming the nation’s water supplies — whether in streets and homes, or in vast rivers and watersheds — is growing increasingly complex as storms intensify. Last May, rain-swollen flooding breached two dams in Central Michigan, forcing thousands of residents to flee their homes and threatening a chemical complex and toxic waste cleanup site. Experts warned it was unlikely to be the last such failure.

Many of the country’s 90,000 dams were built decades ago and were already in dire need of repairs. Now climate change poses an additional threat, bringing heavier downpours to parts of the country and raising the odds that some dams could be overwhelmed by more water than they were designed to handle. One recent study found that most of California’s biggest dams were at increased risk of failure as global warming advances.

In recent years, dam-safety officials have begun grappling with the dangers. Colorado, for instance, now requires dam builders to take into account the risk of increased atmospheric moisture driven by climate change as they plan for worst-case flooding scenarios.

But nationwide, there remains a backlog of thousands of older dams that still need to be rehabilitated or upgraded. The price tag could ultimately stretch to more than $70 billion.

“Whenever we study dam failures, we often find there was a lot of complacency beforehand,” said Bill McCormick, president of the Association of State Dam Safety Officials. But given that failures can have catastrophic consequences, “we really can’t afford to be complacent.”

Crews repaired switches on utility poles damaged by the storms in Texas.
Credit: Tamir Kalifa for The New York Times

If the Texas blackouts exposed one state’s poor planning, they also provide a warning for the nation: Climate change threatens virtually every aspect of electricity grids that aren’t always designed to handle increasingly severe weather. The vulnerabilities show up in power lines, natural-gas plants, nuclear reactors and myriad other systems.

Higher storm surges can knock out coastal power infrastructure. Deeper droughts can reduce water supplies for hydroelectric dams. Severe heat waves can reduce the efficiency of fossil-fuel generators, transmission lines and even solar panels at precisely the moment that demand soars because everyone cranks up their air-conditioners.

Climate hazards can also combine in new and unforeseen ways.

In California recently, Pacific Gas & Electric has had to shut off electricity to thousands of people during exceptionally dangerous fire seasons. The reason: Downed power lines can spark huge wildfires in dry vegetation. Then, during a record-hot August last year, several of the state’s natural gas plants malfunctioned in the heat, just as demand was spiking, contributing to blackouts.

“We have to get better at understanding these compound impacts,” said Michael Craig, an expert in energy systems at the University of Michigan who recently led a study looking at how rising summer temperatures in Texas could strain the grid in unexpected ways. “It’s an incredibly complex problem to plan for.”

Some utilities are taking notice. After Superstorm Sandy in 2012 knocked out power for 8.7 million customers, utilities in New York and New Jersey invested billions in flood walls, submersible equipment and other technology to reduce the risk of failures. Last month, New York’s Con Edison said it would incorporate climate projections into its planning.

As freezing temperatures struck Texas, a glitch at one of two reactors at a South Texas nuclear plant, which serves 2 million homes, triggered a shutdown. The cause: Sensing lines connected to the plant’s water pumps had frozen, said Victor Dricks, a spokesman for the federal Nuclear Regulatory Commission.

It’s also common for extreme heat to disrupt nuclear power. The issue is that the water used to cool reactors can become too warm to use, forcing shutdowns.

Flooding is another risk.

After a tsunami led to several meltdowns at Japan’s Fukushima Daiichi power plant in 2011, the U.S. Nuclear Regulatory Commission told the 60 or so working nuclear plants in the United States, many decades old, to evaluate their flood risk to account for climate change. Ninety percent showed at least one type of flood risk that exceeded what the plant was designed to handle.

The greatest risk came from heavy rain and snowfall exceeding the design parameters at 53 plants.

Scott Burnell, a Nuclear Regulatory Commission spokesman, said in a statement, “The NRC continues to conclude, based on the staff’s review of detailed analyses, that all U.S. nuclear power plants can appropriately deal with potential flooding events, including the effects of climate change, and remain safe.”

A section of Highway 1 along the California coastline collapsed in January amid heavy rains.
Credit: Josh Edelson/Agence France-Presse — Getty Images

The collapse of a portion of California’s Highway 1 into the Pacific Ocean after heavy rains last month was a reminder of the fragility of the nation’s roads.

Several climate-related risks appeared to have converged to heighten the danger. Rising seas and higher storm surges have intensified coastal erosion, while more extreme bouts of precipitation have increased the landslide risk.

Add to that the effects of devastating wildfires, which can damage the vegetation holding hillside soil in place, and “things that wouldn’t have slid without the wildfires, start sliding,” said Jennifer M. Jacobs, a professor of civil and environmental engineering at the University of New Hampshire. “I think we’re going to see more of that.”

The United States depends on highways, railroads and bridges as economic arteries for commerce, travel and simply getting to work. But many of the country’s most important links face mounting climate threats. More than 60,000 miles of roads and bridges in coastal floodplains are already vulnerable to extreme storms and hurricanes, government estimates show. And inland flooding could also threaten at least 2,500 bridges across the country by 2050, a federal climate report warned in 2018.

Sometimes even small changes can trigger catastrophic failures. Engineers modeling the collapse of bridges over Escambia Bay in Florida during Hurricane Ivan in 2004 found that the extra three inches of sea-level rise since the bridge was built in 1968 very likely contributed to the collapse, because of the added height of the storm surge and force of the waves.

“A lot of our infrastructure systems have a tipping point. And when you hit the tipping point, that’s when a failure occurs,” Dr. Jacobs said. “And the tipping point could be an inch.”

Crucial rail networks are at risk, too. In 2017, Amtrak consultants found that along parts of the Northeast corridor, which runs from Boston to Washington and carries 12 million people a year, flooding and storm surge could erode the track bed, disable the signals and eventually put the tracks underwater.

And there is no easy fix. Elevating the tracks would require also raising bridges, electrical wires and lots of other infrastructure, and moving them would mean buying new land in a densely packed part of the country. So the report recommended flood barriers, costing $24 million per mile, that must be moved into place whenever floods threaten.

A worker checked efforts to prevent coal ash from escaping into the Waccamaw River in South Carolina after Hurricane Florence in 2018.
Credit: Randall Hill/Reuters

A series of explosions at a flood-damaged chemical plant outside Houston after Hurricane Harvey in 2017 highlighted a danger lurking in a world beset by increasingly extreme weather.

The blasts at the plant came after flooding knocked out the site’s electrical supply, shutting down refrigeration systems that kept volatile chemicals stable. Almost two dozen people, many of them emergency workers, were treated for exposure to the toxic fumes, and some 200 nearby residents were evacuated from their homes.

More than 2,500 facilities that handle toxic chemicals lie in federal flood-prone areas across the country, about 1,400 of them in areas at the highest risk of flooding, a New York Times analysis showed in 2018.

Leaks from toxic cleanup sites, left behind by past industry, pose another threat.

Almost two-thirds of some 1,500 Superfund cleanup sites across the country are in areas with an elevated risk of flooding, storm surge, wildfires or sea level rise, a government audit warned in 2019. Coal ash, a toxic substance produced by coal power plants that is often stored as sludge in special ponds, has been particularly exposed. After Hurricane Florence in 2018, for example, a dam breach at the site of a power plant in Wilmington, N.C., released the hazardous ash into a nearby river.

“We should be evaluating whether these facilities or sites actually have to be moved or re-secured,” said Lisa Evans, senior counsel at Earthjustice, an environmental law organization. Places that “may have been OK in 1990,” she said, “may be a disaster waiting to happen in 2021.”

East Austin, Texas, during a blackout on Wednesday.  
Credit: Bronte Wittpenn/Austin American-Statesman, via Associated Press

Texas’s Power Crisis Has Turned Into a Disaster That Parallels Hurricane Katrina (TruthOut)

truthout.org

Sharon Zhang, Feb. 18, 2021


Propane tanks are placed in a line as people wait for the power to turn on to fill their tanks in Houston, Texas, on February 17, 2021. Mark Felix for The Washington Post via Getty Images

As many in Texas wake up still without power on Thursday morning, millions are now also having to contend with water shutdowns, boil advisories, and empty grocery shelves as cities struggle to keep infrastructure powered and supply chains are interrupted.

As of Wednesday, an estimated 7 million Texans were under a boil advisory. Since then, Austin has also issued a citywide water-boil notice due to power loss at its biggest water treatment plant. Austin Water serves over a million customers, according to its website.

With hundreds of thousands of people still without power in the state, some reporting that they have no water coming out of their faucets at all, and others facing burst pipes leading to collapsed ceilings and other damage to their homes, the situation is dire for many Texans facing multiple problems at once.

Even as some residents get their power restored, the problems continue to pile up: the few grocery stores still open quickly sold out of food and supplies. As many without power watched their refrigerated food spoil, lines to get into stores wrapped around blocks and buildings, and store shelves sat completely empty with no indication of when new shipments would arrive. Food banks have had to cancel deliveries and schools have halted meal distribution to students, the Texas Tribune reports.

People experiencing homelessness, including a disproportionate number of Black residents, have especially suffered in the record cold temperatures across the state. There have been some reports of people being found dead in the streets because of a lack of shelter.

“Businesses are shut down. Streets are empty, other than a few guys sliding around in 4x4s and fire trucks rushing to rescue people who turn their ovens on to keep warm and poison themselves with carbon monoxide,” wrote Austin resident Jeff Goodell in Rolling Stone. “Yesterday, the line at our neighborhood grocery store was three blocks long. People wandering around with handguns on their hip adds to a sense of lawlessness (Texas is an open-carry state).”

The Texas agricultural commissioner has said that farmers and ranchers are having to throw away millions of dollars worth of goods because of a lack of power. “We’re looking at a food supply chain problem like we’ve never seen before, even with COVID-19,” he told one local news affiliate.

An energy analyst likened the power crisis to the fallout of Hurricane Katrina as it’s becoming increasingly clear that the situation in Texas is a statewide disaster.

As natural gas output declined dramatically in the state, Paul Sankey, who leads energy analyst firm Sankey Research, said on Bloomberg, “This situation to me is very reminiscent of Hurricane Katrina…. We have never seen a loss [of energy supply] at this scale” in mid-winter. This is “the biggest outage in the history [of] U.S. oil and gas,” Sankey said.

Many others online echoed Sankey’s words as “Katrina” trended on Twitter, saying that the situation is similar to the hurricane disaster in that it has been downplayed by politicians but may be uncovered to be even more serious in the coming weeks.

Experts say that the power outages have partially been caused by the deregulation of the state’s electric grid. The government, some say, favored deregulatory actions like not requiring electrical equipment upgrades or proper weatherization, instead relying on free market mechanisms that ultimately contributed to the current disaster.

Former Gov. Rick Perry faced criticism on Wednesday when he said that Texans would rather face the current disaster than have to be regulated by the federal government. And he’s not the only Republican currently catching heat — many have begun calling for the resignation of Gov. Greg Abbott for a failure of leadership. On Wednesday, as millions suffered without power and under boil-water advisories, the governor went on Fox to attack clean energy, which experts say was not a major contributor to the current crisis, and the Green New Deal.

After declaring a state of emergency in Texas over the weekend, the Biden administration announced on Wednesday that it would be sending generators and other supplies to the state.

The freeze in Texas exposes America’s infrastructural failings (The Economist)

economist.com

Feb 17th 2021

You ain’t foolin’ nobody with the lights out

WHEN IT RAINS, it pours, and when it snows, the lights turn off. Or so it goes in Texas. After a winter storm pummelled the Lone Star State with record snowfall and the lowest temperatures in more than 30 years, millions were left without electricity and heat. On February 16th 4.5m Texan households were cut off from power, as providers were overloaded with demand and tried to shuffle access to electricity so the whole grid did not go down.

Whole skylines, including Dallas’s, went dark to conserve power. Some Texans braved the snowy roads to check into the few hotels with remaining rooms, only for the hotels’ power to go off as they arrived. Others donned skiwear and remained inside, hoping the lights and heat would come back on. Across the state, what were supposed to be “rolling” blackouts lasted for days. It is still too soon to quantify the devastation. More than 20 people have died in motor accidents, from fires lit for warmth and from carbon-monoxide poisoning from using cars for heat. The storm has also halted deliveries of covid-19 vaccines and may prevent around 1m vaccinations from happening this week. Several retail electricity providers are likely to go bankrupt, after being hit with surging wholesale power prices.

Other states, including Tennessee, were also covered in snow, but Texas got the lion’s share and ground to a halt. Texans are rightly furious that residents of America’s energy capital cannot count on reliable power. Everyone is asking why.

The short answer is that the Electric Reliability Council of Texas (ERCOT), which operates the grid, did not properly forecast the demand for energy as a result of the storm. Some say that this was nearly impossible to predict, but there were warnings of the severity of the coming weather in the preceding week, and ERCOT’s projections were notably short. Brownouts last summer had already demonstrated the grid’s lack of excess capacity, says George O’Leary of Tudor, Pickering, Holt & CO (TPH), an energy investment bank.

Many Republican politicians were quick to blame renewable energy sources, such as wind power, for the blackouts, but that is not fair. Some wind turbines did indeed freeze, but natural gas, which accounts for around half of the state’s electricity generation, was the primary source of the shortfall. Plants broke down, as did the gas supply chain and pipelines. The cold also caused a reactor at one of the state’s two nuclear plants to go offline. Transmission lines may have also iced up, says Wade Schauer of Wood Mackenzie, an energy-research firm. In short, Texas experienced a perfect storm.

Some of the blame falls on the unique design of the electricity market in Texas. Of America’s 48 contiguous states, it is the only one with its own stand-alone electricity grid—the Texas Interconnection. This means that when power generators fail, the state cannot import electricity from outside its borders.

The state’s deregulated power market is also fiercely competitive. ERCOT oversees the grid, while power generators produce electricity for the wholesale market. Some 300 retail electricity providers buy that fuel and then compete for consumers. Because such cold weather is rare, energy companies do not invest in “winterising” their equipment, as this would raise their prices for consumers. Perhaps most important, the state does not have a “capacity market”, which would ensure that there was extra power available for surging demand. This acts as a sort of insurance policy so the lights will not go out, but it also means customers pay higher bills.

For years the benefits of Texas’s deregulated market structure were clear. At 8.6 cents per kilowatt hour, the state’s average retail price for electricity is around one-fifth lower than the national average and about half the cost of California’s. In 1999 the state set targets for renewables, and today it accounts for around 30% of America’s wind energy.

This disaster is prompting people to question whether Texas’s system is as resilient and well-designed as people previously believed. Greg Abbott, the governor, has called for an investigation into ERCOT. This storm “has exposed some serious weaknesses in our free-market approach in Texas”, says Luke Metzger of Environment Texas, a non-profit, who had been without power for 54 hours when The Economist went to press.

Wholly redesigning the power grid in Texas seems unlikely. After the snow melts, the state will need to tackle two more straightforward questions. The first is whether it needs to increase reserve capacity. “If we impose a capacity market here and a bunch of new cap-ex is required to winterise equipment, who bears that cost? Ultimately it’s the customer,” says Bobby Tudor, chairman of TPH. The second is how Texas can ensure the reliability of equipment in extreme weather conditions. After a polar vortex in 2014 hit the east coast, PJM, a regional transmission organisation, started making higher payments based on reliability of service, says Michael Weinstein of Credit Suisse, a bank. In Texas there is no penalty for systems going down, except for public complaints and politicians’ finger-pointing.

Texas is hardly the only state to struggle with blackouts. California, which has a more tightly regulated power market, is regularly plunged into darkness during periods of high heat, winds and wildfires. Unlike Texas, much of northern California is dependent on a single utility, PG&E. The company has been repeatedly sued for dismal, dangerous management. But, as in Texas, critics have blamed intermittent renewable power for blackouts. In truth, California’s blackouts share many of the same causes as those in Texas: extreme weather, power generators that failed unexpectedly, poor planning by state regulators and an inability (in California, temporary) to import power from elsewhere. In California’s blackouts last year, solar output naturally declined in the evening. But gas plants also went offline and weak rainfall lowered the output of hydroelectric dams.

In California, as in Texas, it would help to have additional power generation, energy storage to meet peak demand and more resilient infrastructure, such as buried power lines and more long-distance, high-voltage transmission. Weather events that once might have been dismissed as unusual are becoming more common. Without more investment in electricity grids, blackouts will be, too.

A Glimpse of America’s Future: Climate Change Means Trouble for Power Grids (New York Times)

nytimes.com

Brad Plumer, Feb. 17, 2021


Systems are designed to handle spikes in demand, but the wild and unpredictable weather linked to global warming will very likely push grids beyond their limits.
A street in Austin, Texas, without power on Monday evening.
Credit: Tamir Kalifa for The New York Times

Published Feb. 16, 2021; updated Feb. 17, 2021, 6:59 a.m. ET

Huge winter storms plunged large parts of the central and southern United States into an energy crisis this week, with frigid blasts of Arctic weather crippling electric grids and leaving millions of Americans without power amid dangerously cold temperatures.

The grid failures were most severe in Texas, where more than four million people woke up Tuesday morning to rolling blackouts. Separate regional grids in the Southwest and Midwest also faced serious strain. As of Tuesday afternoon, at least 23 people nationwide had died in the storm or its aftermath.

Analysts have begun to identify key factors behind the grid failures in Texas. Record-breaking cold weather spurred residents to crank up their electric heaters and pushed power demand beyond the worst-case scenarios that grid operators had planned for. At the same time, a large fraction of the state’s gas-fired power plants were knocked offline amid icy conditions, with some plants suffering fuel shortages as natural gas demand spiked. Many of Texas’ wind turbines also froze and stopped working.

The crisis sounded an alarm for power systems throughout the country. Electric grids can be engineered to handle a wide range of severe conditions — as long as grid operators can reliably predict the dangers ahead. But as climate change accelerates, many electric grids will face extreme weather events that go far beyond the historical conditions those systems were designed for, putting them at risk of catastrophic failure.

While scientists are still analyzing what role human-caused climate change may have played in this week’s winter storms, it is clear that global warming poses a barrage of additional threats to power systems nationwide, including fiercer heat waves and water shortages.

Measures that could help make electric grids more robust — such as fortifying power plants against extreme weather, or installing more backup power sources — could prove expensive. But as Texas shows, blackouts can be extremely costly, too. And, experts said, unless grid planners start planning for increasingly wild and unpredictable climate conditions, grid failures will happen again and again.

“It’s essentially a question of how much insurance you want to buy,” said Jesse Jenkins, an energy systems engineer at Princeton University. “What makes this problem even harder is that we’re now in a world where, especially with climate change, the past is no longer a good guide to the future. We have to get much better at preparing for the unexpected.”

Texas’ main electric grid, which largely operates independently from the rest of the country, has been built with the state’s most common weather extremes in mind: soaring summer temperatures that cause millions of Texans to turn up their air-conditioners all at once.

While freezing weather is rarer, grid operators in Texas have also long known that electricity demand can spike in the winter, particularly after damaging cold snaps in 2011 and 2018. But this week’s winter storms, which buried the state in snow and ice, and led to record-cold temperatures, surpassed all expectations — and pushed the grid to its breaking point.

Residents of East Dallas trying to warm up on Monday after their family home lost power.
Credit: Juan Figueroa/The Dallas Morning News, via Associated Press

Texas’ grid operators had anticipated that, in the worst case, the state would use 67 gigawatts of electricity during the winter peak. But by Sunday evening, power demand had surged past that level. As temperatures dropped, many homes were relying on older, inefficient electric heaters that consume more power.

The problems compounded from there, with frigid weather on Monday disabling power plants with capacity totaling more than 30 gigawatts. The vast majority of those failures occurred at thermal power plants, like natural gas generators, as plummeting temperatures paralyzed plant equipment and soaring demand for natural gas left some plants struggling to obtain sufficient fuel. A number of the state’s power plants were also offline for scheduled maintenance in preparation for the summer peak.

The state’s fleet of wind farms also lost up to 4.5 gigawatts of capacity at times, as many turbines stopped working in cold and icy conditions, though this was a smaller part of the problem.
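To put those figures side by side, here is a rough back-of-the-envelope sketch in Python. It uses only the numbers cited above (the 67-gigawatt planning peak, the more than 30 gigawatts of thermal outages, the up to 4.5 gigawatts of wind losses); hour-by-hour demand and outage data are not reproduced here, so treat it as an illustration, not a reconstruction of ERCOT's accounting.

# Illustrative arithmetic only, using the figures quoted in this article.
planned_winter_peak_gw = 67.0    # worst-case winter demand ERCOT had planned for
thermal_outages_gw = 30.0        # gas, coal and nuclear capacity knocked offline (a lower bound)
wind_outages_gw = 4.5            # peak loss from iced-up turbines

lost_supply_gw = thermal_outages_gw + wind_outages_gw
print(f"At least {lost_supply_gw:.1f} GW of generation was unavailable while demand "
      f"was pushing past the {planned_winter_peak_gw:.0f} GW planning peak; "
      f"the only way left to balance the grid was to cut load, i.e. rolling blackouts.")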

In essence, experts said, an electric grid optimized to deliver huge quantities of power on the hottest days of the year was caught unprepared when temperatures plummeted.

While analysts are still working to untangle all of the reasons behind Texas’ grid failures, some have also wondered whether the unique way the state manages its largely deregulated electricity system may have played a role. In the mid-1990s, for instance, Texas decided against paying energy producers to hold a fixed number of backup power plants in reserve, instead letting market forces dictate what happens on the grid.

On Tuesday, Gov. Greg Abbott called for an emergency reform of the Electric Reliability Council of Texas, the nonprofit corporation that oversees the flow of power in the state, saying its performance had been “anything but reliable” over the previous 48 hours.

In theory, experts said, there are technical solutions that can avert such problems.

Wind turbines can be equipped with heaters and other devices so that they can operate in icy conditions — as is often done in the upper Midwest, where cold weather is more common. Gas plants can be built to store oil on-site and switch over to burning the fuel if needed, as is often done in the Northeast, where natural gas shortages are common. Grid regulators can design markets that pay extra to keep a larger fleet of backup power plants in reserve in case of emergencies, as is done in the Mid-Atlantic.

But these solutions all cost money, and grid operators are often wary of forcing consumers to pay extra for safeguards.

“Building in resilience often comes at a cost, and there’s a risk of both underpaying but also of overpaying,” said Daniel Cohan, an associate professor of civil and environmental engineering at Rice University. “It’s a difficult balancing act.”

In the months ahead, as Texas grid operators and policymakers investigate this week’s blackouts, they will likely explore how the grid might be bolstered to handle extremely cold weather. Some possible ideas include: Building more connections between Texas and other states to balance electricity supplies, a move the state has long resisted; encouraging homeowners to install battery backup systems; or keeping additional power plants in reserve.

The search for answers will be complicated by climate change. Over all, the state is getting warmer as global temperatures rise, and cold-weather extremes are, on average, becoming less common over time.

But some climate scientists have also suggested that global warming could, paradoxically, bring more unusually fierce winter storms. Some research indicates that Arctic warming is weakening the jet stream, the high-level air current that circles the northern latitudes and usually holds back the frigid polar vortex. This can allow cold air to periodically escape to the South, resulting in episodes of bitter cold in places that rarely get nipped by frost.

Credit: Jacob Ford/Odessa American, via Associated Press

But this remains an active area of debate among climate scientists, with some experts less certain that polar vortex disruptions are becoming more frequent, making it even trickier for electricity planners to anticipate the dangers ahead.

All over the country, utilities and grid operators are confronting similar questions, as climate change threatens to intensify heat waves, floods, water shortages and other calamities, all of which could create novel risks for the nation’s electricity systems. Adapting to those risks could carry a hefty price tag: One recent study found that the Southeast alone may need 35 percent more electric capacity by 2050 simply to deal with the known hazards of climate change.

And the task of building resilience is becoming increasingly urgent. Many policymakers are promoting electric cars and electric heating as a way of curbing greenhouse gas emissions. But as more of the nation’s economy depends on reliable flows of electricity, the cost of blackouts will become ever more dire.

“This is going to be a significant challenge,” said Emily Grubert, an infrastructure expert at Georgia Tech. “We need to decarbonize our power systems so that climate change doesn’t keep getting worse, but we also need to adapt to changing conditions at the same time. And the latter alone is going to be very costly. We can already see that the systems we have today aren’t handling this very well.”

John Schwartz, Dave Montgomery and Ivan Penn contributed reporting.

Climate crisis: world is at its hottest for at least 12,000 years – study (The Guardian)

theguardian.com

Damian Carrington, Environment editor @dpcarrington

Wed 27 Jan 2021 16.00 GMT

The world’s continuously warming climate is revealed also in contemporary ice melt at glaciers, such as this one in the Kenai mountains, Alaska, seen in September 2019. Photograph: Joe Raedle/Getty Images

The planet is hotter now than it has been for at least 12,000 years, a period spanning the entire development of human civilisation, according to research.

Analysis of ocean surface temperatures shows human-driven climate change has put the world in “uncharted territory”, the scientists say. The planet may even be at its warmest for 125,000 years, although data on that far back is less certain.

The research, published in the journal Nature, reached these conclusions by solving a longstanding puzzle known as the “Holocene temperature conundrum”. Climate models have indicated continuous warming since the last ice age ended 12,000 years ago and the Holocene period began. But temperature estimates derived from fossil shells showed a peak of warming 6,000 years ago and then a cooling, until the industrial revolution sent carbon emissions soaring.

This conflict undermined confidence in the climate models and the shell data. But it was found that the shell data reflected only hotter summers and missed colder winters, and so was giving misleadingly high annual temperatures.

“We demonstrate that global average annual temperature has been rising over the last 12,000 years, contrary to previous results,” said Samantha Bova, at Rutgers University–New Brunswick in the US, who led the research. “This means that the modern, human-caused global warming period is accelerating a long-term increase in global temperatures, making today completely uncharted territory. It changes the baseline and emphasises just how critical it is to take our situation seriously.”

The world may be hotter now than any time since about 125,000 years ago, which was the last warm period between ice ages. However, scientists cannot be certain as there is less data relating to that time.

One study, published in 2017, suggested that global temperatures were last as high as today 115,000 years ago, but that was based on less data.

The new research examined temperature measurements derived from the chemistry of tiny shells and algal compounds found in cores of ocean sediments, and solved the conundrum by taking account of two factors.

First, the shells and organic materials had been assumed to represent the entire year but in fact were most likely to have formed during summer when the organisms bloomed. Second, there are well-known predictable natural cycles in the heating of the Earth caused by eccentricities in the orbit of the planet. Changes in these cycles can lead to summers becoming hotter and winters colder while average annual temperatures change only a little.

Combining these insights showed that the apparent cooling after the warm peak 6,000 years ago, revealed by shell data, was misleading. The shells were in fact only recording a decline in summer temperatures, but the average annual temperatures were still rising slowly, as indicated by the models.
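A minimal toy model, with invented numbers rather than the study's actual data or method, makes the effect concrete: a proxy that records only summer conditions can show an apparent cooling trend even while the true annual mean keeps creeping upward.

# Toy illustration only (synthetic numbers, not the paper's data or method).
import numpy as np

years = np.arange(12_000)                        # years since the end of the last ice age
annual_mean = 14.0 + 0.00005 * years             # true annual mean warms slowly (deg C)
summer_boost = 5.0 - 0.0002 * years              # orbital cycles shrink the summer-winter contrast

summer_proxy = annual_mean + summer_boost        # what a summer-blooming organism records

print("trend seen by the summer-only proxy:", np.polyfit(years, summer_proxy, 1)[0])  # negative: apparent cooling
print("trend of the true annual mean:      ", np.polyfit(years, annual_mean, 1)[0])   # positive: real warming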

“Now they actually match incredibly well and it gives us a lot of confidence that our climate models are doing a really good job,” said Bova.

The study looked only at ocean temperature records, but Bova said: “The temperature of the sea surface has a really controlling impact on the climate of the Earth. If we know that, it is the best indicator of what global climate is doing.”

She led a research voyage off the coast of Chile in 2020 to take more ocean sediment cores and add to the available data.

Jennifer Hertzberg, of Texas A&M University in the US, said: “By solving a conundrum that has puzzled climate scientists for years, Bova and colleagues’ study is a major step forward. Understanding past climate change is crucial for putting modern global warming in context.”

Lijing Cheng, at the International Centre for Climate and Environment Sciences in Beijing, China, recently led a study that showed that in 2020 the world’s oceans reached their hottest level yet in instrumental records dating back to the 1940s. More than 90% of global heating is taken up by the seas.

Cheng said the new research was useful and intriguing. It provided a method to correct temperature data from shells and could also enable scientists to work out how much heat the ocean absorbed before the industrial revolution, a factor little understood.

The level of carbon dioxide today is at its highest for about 4m years and is rising at the fastest rate for 66m years. Further rises in temperature and sea level are inevitable until greenhouse gas emissions are cut to net zero.

Calculations show it will be impossible to control a superintelligent Artificial Intelligence (Engenharia é:)

engenhariae.com.br

Ademilson Ramos, January 23, 2021


Photo by Alex Knight on Unsplash

The idea of artificial intelligence overthrowing humankind has been discussed for many decades, and scientists have just delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost definitely not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence that we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as “cause no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI is going to come up with, the researchers suggest. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.

“This is because a superintelligence is multifaceted and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some clever math, while we can know that for some specific programs, it is logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not; it is mathematically impossible for us to be absolutely sure either way, which means it cannot be contained.
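For readers who want the logic of Turing's argument spelled out, here is a minimal sketch in Python. The function names are ours and purely illustrative: assume a general halts(program, argument) test existed, and a contradiction follows.

# Sketch only: `halts` is the hypothetical oracle that Turing showed cannot exist.
def halts(program, argument):
    # Pretend this always answers correctly whether program(argument) eventually stops.
    raise NotImplementedError("No general halting test can actually be written.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running `program` on itself.
    if halts(program, program):
        while True:   # the oracle said "it halts", so loop forever
            pass
    return            # the oracle said "it loops forever", so halt immediately

# Whatever halts(paradox, paradox) answered, paradox(paradox) would do the opposite,
# so the assumed oracle contradicts itself. The containment argument described above
# inherits the same undecidability: a universal "harm-checking" program cannot exist.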

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are taking.

“A superintelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research has been published in the Journal of Artificial Intelligence Research.

Developing Algorithms That Might One Day Be Used Against You (Gizmodo)

gizmodo.com

Ryan F. Mandelbaum, Jan 24, 2021


Brian Nord is an astrophysicist and machine learning researcher. Photo: Mark Lopez/Argonne National Laboratory

Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.

Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.

This interview has been edited and condensed for clarity.

Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.

Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.

On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.

The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.
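A small synthetic sketch (toy data invented here, not any system discussed in the interview) illustrates the mechanism Nord describes: fit a standard classifier to historically biased decisions and it learns to penalize group membership even when the genuinely relevant score is identical.

# Synthetic illustration only: the model is shaped entirely by the biased labels it is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
score = rng.normal(size=n)                  # a genuinely relevant feature
group = rng.integers(0, 2, size=n)          # a protected attribute that should be irrelevant

# Historical decisions favored group 0 regardless of score.
accepted = (score + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([score, group]), accepted)

same_score = np.array([[0.5, 0], [0.5, 1]])     # identical score, different group
print(model.predict_proba(same_score)[:, 1])    # acceptance probability drops for group 1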

Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw it quite clearly that these applications that could harm me were already out there, already being used, and were already starting to become embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.

Gizmodo: How did you react? What did you do?

BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, who is focused on building communities of Black researchers in the AI field, but who also has the unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.

It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.

Gizmodo: How is it going so far?

BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.

We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.

Gizmodo: Why haven’t we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try and speed up the selection of candidates for Ph.D programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.

Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.

Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.

Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.

I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.

Gizmodo: Anything else?

BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.

This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?

Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds

Pope Francis asks for prayers for robots and AI (Tecmundo)

November 11, 2020, 6:30 PM · 1 min read


Jorge Marin

Pope Francis has asked the faithful around the world to pray, during the month of November, that progress in robotics and artificial intelligence (AI) may always serve humanity.

The message is part of a series of prayer intentions that the pontiff announces annually and shares each month on YouTube to help Catholics “deepen their daily prayer” by focusing on specific topics. In September, the pope asked for prayers for the “sharing of the planet’s resources”; in August, for the “maritime world”; and now it is the turn of robots and AI.

In his message, Pope Francis called for special attention to AI, which, he said, is “at the center of the historic change we are experiencing.” And it is not just a matter of the benefits robotics can bring to the world.

Technological progress and algorithms

Francis says that technological progress is not always a sign of well-being for humanity, because if that progress contributes to widening inequalities, it cannot be considered true progress. “Future advances should be oriented toward respect for the dignity of the person,” the pope warns.

Concern that technology could deepen existing social divisions led the Vatican to sign, earlier this year, together with Microsoft and IBM, the “Rome Call for AI Ethics,” a document that sets out principles to guide the deployment of AI: transparency, inclusion, impartiality, and reliability.

Even non-religious people can recognize that, when it comes to deploying algorithms, the pope’s concern makes perfect sense.

How will AI shape our lives post-Covid? (BBC)

Original article

BBC, 09 Nov 2020

Audrey Azoulay: Director-General, Unesco

Covid-19 is a test like no other. Never before have the lives of so many people around the world been affected at this scale or speed.

Over the past six months, thousands of AI innovations have sprung up in response to the challenges of life under lockdown. Governments are mobilising machine-learning in many ways, from contact-tracing apps to telemedicine and remote learning.

However, as the digital transformation accelerates exponentially, it is highlighting the challenges of AI. Ethical dilemmas are already a reality – including privacy risks and discriminatory bias.

It is up to us to decide what we want AI to look like: there is a legislative vacuum that needs to be filled now. Principles such as proportionality, inclusivity, human oversight and transparency can create a framework allowing us to anticipate these issues.

This is why Unesco is working to build consensus among 193 countries to lay the ethical foundations of AI. Building on these principles, countries will be able to develop national policies that ensure AI is designed, developed and deployed in compliance with fundamental human values.

As we face new, previously unimaginable challenges – like the pandemic – we must ensure that the tools we are developing work for us, and not against us.

Solar geoengineering should not be ruled out, scientists say (TecMundo)

November 3, 2020, 7:00 PM · 3 min read


Reinaldo Zaruvni

Once viewed with suspicion by the scientific community, methods of artificially intervening in the environment to curb the devastating effects of global warming are now being considered as last-resort measures, since initiatives to cut greenhouse gas emissions depend directly on collective action and take decades to yield any benefit. We may not have that much time, according to some researchers in the field, who have been attracting investment and a great deal of attention.

Part of a field also referred to as solar geoengineering, most of these methods rely on the controlled release of particles into the atmosphere to block some of the energy the planet receives and send it back into space, producing a cooling effect similar to that caused by volcanic eruptions.

Although they do nothing about pollution itself, scientists argue that, in the face of increasingly violent storms, fire tornadoes, floods, and other natural disasters, such measures would be worth pursuing until more effective solutions are developed.

Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia Law School and editor of a book on the technology and its legal implications, summed up the situation in an interview with The New York Times: “We are facing an existential threat. That is why we need to analyze all the options.”

“I like to compare geoengineering to chemotherapy for the planet: if everything else is failing, all that is left is to try it,” he argued.

Natural disasters caused by global warming make such interventions urgent, researchers argue. Source: Unsplash

Double standards

Among the most notable efforts is that of a nongovernmental organization called SilverLining, which has awarded US$3 million to several universities and other institutions to pursue answers to practical questions. One example is finding the ideal altitude at which to release aerosols, and how to inject the right amount, while checking the effects on the global food production chain.

Chris Sacca, co-founder of Lowercarbon Capital, an investment group that is one of SilverLining’s funders, struck an alarmed tone: “Decarbonization is necessary, but it will take 20 years or more to happen. If we don’t explore climate interventions like solar reflection right now, we will be condemning countless lives, species, and ecosystems to the heat.”

Another recipient of substantial funding was the National Oceanic and Atmospheric Administration, which received US$4 million from the US Congress precisely to develop technologies of this kind, as well as to monitor the covert use of such solutions by other countries.

Douglas MacMartin, a researcher in mechanical and aerospace engineering at Cornell University, said that “humanity’s power to cool things down is certain, but what is not clear is what comes next.”

If, on the one hand, the planet can be cooled artificially, on the other, no one knows what comes next. Source: Unsplash

Is there a way?

To clarify the possible consequences of interventions of this magnitude, MacMartin will develop models of the specific climate effects of injecting aerosols into the atmosphere above different parts of the globe and at different altitudes. “Depending on where you put [the substance], you will have different effects on the monsoons in Asia and on sea ice in the Arctic,” he noted.

The National Center for Atmospheric Research in Boulder, Colorado, also funded by SilverLining, believes it has the ideal system for the task, considered the most sophisticated in the world. With it, hundreds of simulations will be run, and experts will search for what they call the sweet spot, in which the amount of artificial cooling needed to reduce extreme climate events does not trigger broader changes in regional precipitation patterns or similar impacts.
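To make the idea of that search concrete, here is a deliberately toy sketch in Python. Nothing in it comes from the NCAR system: the response curves, thresholds, and injection-rate units are invented placeholders, meant only to show what scanning for a “sweet spot” between two competing effects looks like.

# Hypothetical sketch of a "sweet spot" search over aerosol injection rates.
# The response functions below are invented placeholders, not climate physics.

def extreme_event_index(rate):
    # Placeholder: extreme-event risk falls as artificial cooling increases.
    return 1.0 / (1.0 + rate)

def precip_shift(rate):
    # Placeholder: regional precipitation disruption grows with injection rate.
    return 0.15 * rate ** 1.5

rates = [round(i * 0.05, 2) for i in range(101)]   # arbitrary units, 0.00 to 5.00
tolerance = 0.4                                    # max acceptable precipitation shift

sweet_spots = [r for r in rates
               if extreme_event_index(r) < 0.5 and precip_shift(r) < tolerance]

if sweet_spots:
    print(f"Candidate 'sweet spot' rates: {sweet_spots[0]:.2f} to {sweet_spots[-1]:.2f}")
else:
    print("No rate satisfies both constraints under these assumptions.")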

“Is there a way, at least in our model world, to see whether we can achieve one without triggering too much of the other?” asked Jean-François Lamarque, director of the institution’s Climate and Global Dynamics laboratory. There is no answer to that question yet, but sustainable approaches are also being studied by Australian researchers, who would spray seawater to make clouds more reflective, and early tests have shown promising results.

If so, perhaps the loss of reef corals we have been witnessing will have an end date. As for the rest, well, only time will tell.

Rafael Muñoz: Brazil pays a high price for the lack of an integrated disaster risk management policy (Folha de S.Paulo)

www1.folha.uol.com.br – October 20, 2020

It is urgent that these issues be integrated into broader socioeconomic development policies
Flooding in Itaoca, 2014. Source: Agência Brasil

It was in January 2011 that the myth that Brazil has no disasters collapsed. Torrential rains in the mountainous region of Rio de Janeiro state triggered landslides and floods, leaving a trail of more than a thousand dead. The event showed the need to prioritize the disaster risk agenda, which had long been sidelined by a lack of knowledge about the real impacts of extreme natural events on Brazilian society and the economy.

Against this backdrop, the World Bank, in partnership with Sedec (the National Secretariat for Protection and Civil Defense) and UFSC (the Federal University of Santa Catarina), conducted a detailed analysis of past disaster events that revealed the true scale of the problem: between 1995 and 2019, Brazil lost on average about R$1.1 billion per month to disasters, meaning total losses for the period are estimated at around R$330 billion.

Of that total, 20% are direct losses (damage), the vast majority of which (59%) fall on the infrastructure sector, with housing accounting for 37%. Indirect losses account for roughly 80% of the total disaster impact in the country, most notably in agriculture (R$149.8 billion) and livestock (R$55.7 billion) in the private sector, and in water and transport (R$31.9 billion) in the public sector. The human toll is also significant: 4,065 deaths, 7.4 million people temporarily or permanently displaced from their homes due to damage, and more than 276 million people affected.
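The headline numbers are easy to sanity-check. The short Python sketch below is only a back-of-the-envelope verification, assuming the 1995-2019 window is counted as 300 full months and using the percentages quoted in the column.

# Back-of-the-envelope check of the figures quoted above (values in R$ billions).
# Assumes the 1995-2019 window is counted as 25 years = 300 months.
avg_monthly_loss = 1.1          # average R$ billion lost per month
months = 25 * 12                # 1995 through 2019

total = avg_monthly_loss * months
print(f"Estimated total losses: about R$ {total:.0f} billion")   # ~R$ 330 billion

direct = 0.20 * total           # direct losses (damage)
indirect = 0.80 * total         # indirect losses
print(f"Direct: about R$ {direct:.0f} billion; indirect: about R$ {indirect:.0f} billion")

# The three indirect items singled out in the column, compared with the ~80% share:
cited_indirect = 149.8 + 55.7 + 31.9   # agriculture + livestock + water/transport
print(f"Cited indirect items sum to R$ {cited_indirect:.1f} billion")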

Beyond the human and economic losses, public policies aimed at promoting socioeconomic progress can also become less effective, since disaster events demonstrably affect indicators of health, purchasing power, access to employment and income, and education, among others. Vital investments in critical infrastructure, such as transport and housing, are also heavily affected by disasters.

Against this backdrop, the inevitable question arises: why does Brazil still not have an integrated disaster risk management policy and a National Protection and Civil Defense Plan? To secure the much-needed progress, Sedec’s current leadership has made it a priority to regulate Law 12,608/2012, which establishes the National Protection and Civil Defense Policy, and to draw up the National Protection and Civil Defense Plan.

These steps could establish a legal framework and set of guidelines to drive structural improvements in public policy. In the housing sector, for example, protocols could be defined for incorporating risk-mapping products into decisions about new investments or into disaster risk mitigation for projects already delivered. In fiscal planning, budgets more consistent with the economic impacts of disasters could be set each year to better protect the national and subnational economies. Finally, investments in critical infrastructure (for example transport, water and sanitation, power generation and distribution), and its maintenance, viewed through the lens of exposure and vulnerability to natural hazards, can ensure continuity of operations and business in extreme situations, allowing essential services to keep reaching the population and reducing indirect impacts on the economy.

Given the increasing frequency and socioeconomic impact of extreme natural events, experts agree that rapid urbanization has created conditions more conducive to disasters, owing to inappropriate land occupation in areas exposed to natural hazards without the civil works needed to manage natural processes. This process has left vulnerable communities across the country highly exposed, and as we assess the impacts of the Covid-19 pandemic on our economy and communities, we cannot ignore how disasters have long (negatively) shaped public policy in our country.

Fortunately, advances in data and evidence collection now allow disaster events and their impacts to be brought into the light of technical knowledge and placed in the hands of legislators, public administrators, and decision makers, through risk maps, climate and weather forecasts, flood and landslide models, as well as discussion forums and financing projects.

In this context, it is clear that successful disaster risk management models observed around the world need to be adapted to Brazil’s characteristics. Broadly speaking, the size of the national territory, the federalist model of public administration, and a history of smaller-scale but cumulatively frequent disaster events, among other factors, make it necessary to define the roles of the federal, state, and municipal governments in this agenda.

It is therefore urgent to integrate disaster risk management issues into broader socioeconomic development policies, such as housing programs, urban planning and expansion, investments in critical infrastructure, agricultural incentives, and income transfer programs, among others.

In addition, there is a real opportunity to rethink recovery processes through the lens of Build Back Better, ensuring that past mistakes are not repeated and that disaster risk levels are not created anew or simply maintained.

This column was written in collaboration with Frederico Pedroso, Disaster Risk Management specialist at the World Bank; Joaquin Toro, lead Disaster Risk Management specialist at the World Bank; and Rafael Schadeck, civil engineer and Disaster Risk Management consultant at the World Bank.

Science and Policy Collide During the Pandemic (The Scientist)

COVID-19 has laid bare some of the pitfalls of the relationship between scientific experts and policymakers—but some researchers say there are ways to make it better.

Diana Kwon

Sep 1, 2020

Science has taken center stage during the COVID-19 pandemic. Early on, as SARS-CoV-2 started spreading around the globe, many researchers pivoted to focus on studying the virus. At the same time, some scientists and science advisors—experts responsible for providing scientific information to policymakers—gained celebrity status as they calmly and cautiously updated the public on the rapidly evolving situation and lent their expertise to help governments make critical decisions, such as those relating to lockdowns and other transmission-slowing measures.

“Academia, in the case of COVID, has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible,” says Chris Tyler, the director of research and policy in University College London’s Department of Science, Technology, Engineering and Public Policy (STEaPP). 

But the pace at which COVID-related science has been conducted and disseminated during the pandemic has also revealed the challenges associated with translating fast-accumulating evidence for an audience not well versed in the process of science. As research findings are speedily posted to preprint servers, preliminary results have made headlines in major news outlets, sometimes without the appropriate dose of scrutiny.

Some politicians, such as Brazil’s President Jair Bolsonaro, have been quick to jump on premature findings, publicly touting the benefits of treatments such as hydroxychloroquine with minimal or no supporting evidence. Others have pointed to the flip-flopping of the current state of knowledge as a sign of scientists’ untrustworthiness or incompetence—as was seen, for example, in the backlash against Anthony Fauci, one of the US government’s top science advisors. 

Some comments from world leaders have been even more concerning. “For me, the most shocking thing I saw,” Tyler says, “was Donald Trump suggesting the injection of disinfectant as a way of treating COVID—that was an eye-popping, mind-boggling moment.” 

Still, Tyler notes that there are many countries in which the relationship between the scientific community and policymakers during the course of the pandemic has been “pretty impressive.” As an example, he points to Germany, where the government has both enlisted and heeded the advice of scientists across a range of disciplines, including epidemiology, virology, economics, public health, and the humanities.

Researchers will likely be assessing the response to the pandemic for years to come. In the meantime, for scientists interested in getting involved in policymaking, there are lessons to be learned, as well as some preliminary insights from the pandemic that may help to improve interactions between scientists and policymakers and thereby pave the way to better evidence-based policy.

Cultural divisions between scientists and policymakers

Even in the absence of a public-health emergency, there are several obstacles to the smooth implementation of scientific advice into policy. One is simply that scientists and policymakers are generally beholden to different incentive systems. “Classically, a scientist wants to understand something for the sake of understanding, because they have a passion toward that topic—so discovery is driven by the value of discovery,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Whereas the policymaker has a much more utilitarian approach. . . . They have to come up with interventions that produce the best outcomes for the most people.”

Scientists and policymakers are operating on considerably different timescales, too. “Normally, research programs take months and years, whereas policy decisions take weeks and months, sometimes days,” Tyler says. “This discrepancy makes it much more difficult to get scientifically generated knowledge into the policymaking process.” Tyler adds that the two groups deal with uncertainty in very different ways: academics are comfortable with it, as measuring uncertainty is part of the scientific process, whereas policymakers tend to view it as something that can cloud what a “right” answer might be. 

This cultural mismatch has been particularly pronounced during the COVID-19 pandemic. Even as scientists work at breakneck speeds, many crucial questions about COVID-19—such as how long immunity to the virus lasts, and how much of a role children play in the spread of infection—remain unresolved, and policy decisions have had to be addressed with limited evidence, with advice changing as new research emerges. 

“We have seen the messy side of science, [that] not all studies are equally well-done and that they build over time to contribute to the weight of knowledge,” says Karen Akerlof, a professor of environmental science and policy at George Mason University. “The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions.” 

Academia has done an amazing job of trying to get as much information  relevant to COVID gathered and distributed into the policymaking process as possible. —Chris Tyler, University College London

Widespread mask use, for example, was initially discouraged by many politicians and public health officials due to concerns about a shortage of supplies for healthcare workers and limited data on whether mask use by the general public would help reduce the spread of the virus. At the time, there were few mask-wearing laws outside of East Asia, where such practices were commonplace long before the COVID-19 pandemic began.  

Gradually, however, as studies began to provide evidence to support the use of face coverings as a means of stemming transmission, scientists and public health officials started to recommend their use. This shift led local, state, and federal officials around the world to implement mandatory mask-wearing rules in certain public spaces. Some politicians, however, used this about-face in advice as a reason to criticize health experts.  

“We’re dealing with evidence that is changing very rapidly,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “I think there’s a risk of people perceiving that rapid evolution as science [being] a bad process, which is worrisome.” On the other hand, the spotlight the pandemic has put on scientists provides opportunities to educate the general public and policymakers about the scientific process, Azad adds. It’s important to help them understand that “it’s good that things are changing, because it means we’re paying attention to the new evidence as it comes out.”

Bringing science and policy closer together

Despite these challenges, science and policy experts say that there are both short- and long-term ways to improve the relationship between the two communities and to help policymakers arrive at decisions that are more evidence-based.

Better tools, for one, could help close the gap. Earlier this year, Ruggeri brought together a group of people from a range of disciplines, including medicine, engineering, economics, and policy, to develop the Theoretical, Empirical, Applicable, Replicable, Impact (THEARI) rating system, a five-tiered framework for evaluating the robustness of scientific evidence in the context of policy decisions. The ratings range from “theoretical” (the lowest level, where a scientifically viable idea has been proposed but not tested) to “impact” (the highest level, in which a concept has been successfully tested, replicated, applied, and validated in the real world).

The team developed THEARI partly to establish a “common language” across scientific disciplines, which Ruggeri says would be particularly useful to policymakers evaluating evidence from a field they may know little about. Ruggeri hopes to see the THEARI framework—or something like it—adopted by policymakers and policy advisors, and even by journals and preprint servers. “I don’t necessarily think [THEARI] will be used right away,” he says. “It’d be great if it was, but we . . . [developed] it as kind of a starting point.” 
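As a rough illustration of how such a scale could be encoded, the sketch below represents the five tier names as an ordered enum. Only the lowest and highest tiers are described in the article, so the one-line glosses on the middle tiers are assumptions drawn from the tier names, not part of the published framework.

# Illustrative encoding of the five THEARI tiers named in the article.
# Only the lowest and highest tiers are described there; the glosses on the
# middle tiers are assumptions based on the tier names, not the official framework.
from enum import IntEnum

class Theari(IntEnum):
    THEORETICAL = 1   # article: scientifically viable idea, proposed but not tested
    EMPIRICAL = 2     # assumed gloss: tested empirically at least once
    APPLICABLE = 3    # assumed gloss: shown to work in an applied setting
    REPLICABLE = 4    # assumed gloss: independently replicated
    IMPACT = 5        # article: tested, replicated, applied, validated in the real world

def stronger_evidence(a: Theari, b: Theari) -> Theari:
    """Return whichever of two pieces of evidence sits higher on the scale."""
    return max(a, b)

print(stronger_evidence(Theari.EMPIRICAL, Theari.REPLICABLE).name)  # REPLICABLE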

Other approaches to improve the communication between scientists and policymakers may require more resources and time. According to Akerlof, one method could include providing better incentives for both parties to engage with each other—by offering increased funding for academics who take part in this kind of activity, for instance—and boosting opportunities for such interactions to happen. 

Akerlof points to the American Association for the Advancement of Science’s Science & Technology Policy Fellowships, which place scientists and engineers in various branches of the US government for a year, as an example of a way in which important ties between the two communities could be forged. “Many of those scientists either stay in government or continue to work in science policy in other organizations,” Akerlof says. “By understanding the language and culture of both the scientific and policy communities, they are able to bridge between them.”  

In Canada, such a program was established in 2018, when the Canadian Science Policy Center and Mona Nemer, Canada’s Chief Science Advisor, held the country’s first “Science Meets Parliament” event. The 28 scientists in attendance, including Azad, spent two days learning about effective communication and the policymaking process, and interacting with senators and members of parliament. “It was eye opening for me because I didn’t know how parliamentarians really live and work,” Azad says. “We hope it’ll grow and involve more scientists and continue on an annual basis . . . and also happen at the provincial level.”

The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions. —Karen Akerlof, George Mason University

There may also be insights from scientist-policymaker exchanges in other domains that experts can apply to the current pandemic. Maria Carmen Lemos, a social scientist focused on climate policy at the University of Michigan, says that one way to make those interactions more productive is by closing something she calls the “usability gap.”

“The usability gap highlights the fact that one of the reasons that research fails to connect is because [scientists] only pay attention to the [science],” Lemos explains. “We are putting everything out there in papers, in policy briefs, in reports, but rarely do we actually systematically and intentionally try to understand who is on the other side” receiving this information, and what they will do with it.

The way to deal with this usability gap, according to Lemos, is for more scientists to consult the people who actually make, influence, and implement policy changes early on in the scientific process. Lemos and her team, for example, have engaged in this way with city officials, farmers, forest managers, tribal leaders, and others whose decision making would directly benefit from their work. “We help with organization and funding, and we also work with them very closely to produce climate information that is tailored for them, for the problems that they are trying to solve,” she adds. 

Azad applied this kind of approach in a study that involves assessing the effects of the pandemic on a cohort of children that her team has been following from infancy, starting in 2010. When she and her colleagues were putting together the proposal for the COVID-19 project this year, they reached out to public health decision makers across the Canadian provinces to find out what information would be most useful. “We have made sure to embed those decision makers in the project from the very beginning to ensure we’re asking the right questions, getting the most useful information, and getting it back to them in a very quick turnaround manner,” Azad says. 

There will also likely be lessons to take away from the pandemic in the years to come, notes Noam Obermeister, a PhD student studying science policy at the University of Cambridge. These include insights from scientific advisors about how providing guidance to policymakers during COVID-19 compared to pre-pandemic times, and how scientists’ prominent role during the pandemic has affected how they are viewed by the public; efforts to collect this sort of information are already underway. 

“I don’t think scientists anticipated that much power and visibility, or that [they] would be in [public] saying science is complicated and uncertain,” Obermeister says. “I think what that does to the authority of science in the public eye is still to be determined.”

Talking Science to Policymakers
For academics who have never engaged with policymakers, the thought of making contact may be daunting. Researchers with experience of these interactions share their tips for success.
1. Do your homework. Policymakers usually have many different people vying for their time and attention. When you get a meeting, make sure you make the most of it. “Find out which issues related to your research are a priority for the policymaker and which decisions are on the horizon,” says Karen Akerlof, a professor of environmental science and policy at George Mason University.
2. Get to the point, but don’t oversimplify. “I find policymakers tend to know a lot about the topics they work on, and when they don’t, they know what to ask about,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Finding a good balance in the communication goes a long way.”
3. Keep in mind that policymakers’ expertise differs from that of scientists. “Park your ego at the door and treat policymakers and their staff with respect,” Akerlof says. “Recognize that the skills, knowledge, and culture that translate to success in policy may seem very different than those in academia.” 
4. Be persistent. “Don’t be discouraged if you don’t get a response immediately, or if promising communications don’t pan out,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “Policymakers are busy and their attention shifts rapidly. Meetings get cancelled. It’s not personal. Keep trying.”
5. Remember that not all policymakers are politicians, and vice versa. Politicians are usually elected and are affiliated with a political party, and they may not always be directly involved in creating new policies. This is not the case for the vast majority of policymakers—most are career civil servants whose decisions impact the daily living of constituents, Ruggeri explains. 

A Supercomputer Analyzed Covid-19 — and an Interesting New Theory Has Emerged (Medium/Elemental)

A closer look at the Bradykinin hypothesis

Thomas Smith, Sept 1, 2020

Original article


Earlier this summer, the Summit supercomputer at Oak Ridge National Lab in Tennessee set about crunching data on more than 40,000 genes from 17,000 genetic samples in an effort to better understand Covid-19. Summit is the second-fastest computer in the world, but the process — which involved analyzing 2.5 billion genetic combinations — still took more than a week.

When Summit was done, researchers analyzed the results. It was, in the words of Dr. Daniel Jacobson, lead researcher and chief scientist for computational systems biology at Oak Ridge, a “eureka moment.” The computer had revealed a new theory about how Covid-19 impacts the body: the bradykinin hypothesis. The hypothesis provides a model that explains many aspects of Covid-19, including some of its most bizarre symptoms. It also suggests 10-plus potential treatments, many of which are already FDA approved. Jacobson’s group published their results in a paper in the journal eLife in early July.

According to the team’s findings, a Covid-19 infection generally begins when the virus enters the body through ACE2 receptors in the nose. (The receptors, which the virus is known to target, are abundant there.) The virus then proceeds through the body, entering cells in other places where ACE2 is also present: the intestines, kidneys, and heart. This likely accounts for at least some of the disease’s cardiac and GI symptoms.

But once Covid-19 has established itself in the body, things start to get really interesting. According to Jacobson’s group, the data Summit analyzed shows that Covid-19 isn’t content to simply infect cells that already express lots of ACE2 receptors. Instead, it actively hijacks the body’s own systems, tricking it into upregulating ACE2 receptors in places where they’re usually expressed at low or medium levels, including the lungs.

In this sense, Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don’t just take your stuff — they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently.

The renin–angiotensin system (RAS) controls many aspects of the circulatory system, including the body’s levels of a chemical called bradykinin, which normally helps to regulate blood pressure. According to the team’s analysis, when the virus tweaks the RAS, it causes the body’s mechanisms for regulating bradykinin to go haywire. Bradykinin receptors are resensitized, and the body also stops effectively breaking down bradykinin. (ACE normally degrades bradykinin, but when the virus downregulates it, it can’t do this as effectively.)

The end result, the researchers say, is to release a bradykinin storm — a massive, runaway buildup of bradykinin in the body. According to the bradykinin hypothesis, it’s this storm that is ultimately responsible for many of Covid-19’s deadly effects. Jacobson’s team says in their paper that “the pathology of Covid-19 is likely the result of Bradykinin Storms rather than cytokine storms,” which had been previously identified in Covid-19 patients, but that “the two may be intricately linked.” Other papers had previously identified bradykinin storms as a possible cause of Covid-19’s pathologies.

Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house.

As bradykinin builds up in the body, it dramatically increases vascular permeability. In short, it makes your blood vessels leaky. This aligns with recent clinical data, which increasingly views Covid-19 primarily as a vascular disease, rather than a respiratory one. But Covid-19 still has a massive effect on the lungs. As blood vessels start to leak due to a bradykinin storm, the researchers say, the lungs can fill with fluid. Immune cells also leak out into the lungs, Jacobson’s team found, causing inflammation.

And Covid-19 has another especially insidious trick. Through another pathway, the team’s data shows, it increases production of hyaluronic acid (HA) in the lungs. HA is often used in soaps and lotions for its ability to absorb more than 1,000 times its weight in fluid. When it combines with fluid leaking into the lungs, the results are disastrous: It forms a hydrogel, which can fill the lungs in some patients. According to Jacobson, once this happens, “it’s like trying to breathe through Jell-O.”

This may explain why ventilators have proven less effective in treating advanced Covid-19 than doctors originally expected, based on experiences with other viruses. “It reaches a point where regardless of how much oxygen you pump in, it doesn’t matter, because the alveoli in the lungs are filled with this hydrogel,” Jacobson says. “The lungs become like a water balloon.” Patients can suffocate even while receiving full breathing support.

The bradykinin hypothesis also extends to many of Covid-19’s effects on the heart. About one in five hospitalized Covid-19 patients have damage to their hearts, even if they never had cardiac issues before. Some of this is likely due to the virus infecting the heart directly through its ACE2 receptors. But the RAS also controls aspects of cardiac contractions and blood pressure. According to the researchers, bradykinin storms could create arrhythmias and low blood pressure, which are often seen in Covid-19 patients.

The bradykinin hypothesis also accounts for Covid-19’s neurological effects, which are some of the most surprising and concerning elements of the disease. These symptoms (which include dizziness, seizures, delirium, and stroke) are present in as many as half of hospitalized Covid-19 patients. According to Jacobson and his team, MRI studies in France revealed that many Covid-19 patients have evidence of leaky blood vessels in their brains.

Bradykinin — especially at high doses — can also lead to a breakdown of the blood-brain barrier. Under normal circumstances, this barrier acts as a filter between your brain and the rest of your circulatory system. It lets in the nutrients and small molecules that the brain needs to function, while keeping out toxins and pathogens and keeping the brain’s internal environment tightly regulated.

If bradykinin storms cause the blood-brain barrier to break down, this could allow harmful cells and compounds into the brain, leading to inflammation, potential brain damage, and many of the neurological symptoms Covid-19 patients experience. Jacobson told me, “It is a reasonable hypothesis that many of the neurological symptoms in Covid-19 could be due to an excess of bradykinin. It has been reported that bradykinin would indeed be likely to increase the permeability of the blood-brain barrier. In addition, similar neurological symptoms have been observed in other diseases that result from an excess of bradykinin.”

Increased bradykinin levels could also account for other common Covid-19 symptoms. ACE inhibitors — a class of drugs used to treat high blood pressure — have a similar effect on the RAS system as Covid-19, increasing bradykinin levels. In fact, Jacobson and his team note in their paper that “the virus… acts pharmacologically as an ACE inhibitor” — almost directly mirroring the actions of these drugs.

By acting like a natural ACE inhibitor, Covid-19 may be causing the same effects that hypertensive patients sometimes get when they take blood pressure–lowering drugs. ACE inhibitors are known to cause a dry cough and fatigue, two textbook symptoms of Covid-19. And they can potentially increase blood potassium levels, which has also been observed in Covid-19 patients. The similarities between ACE inhibitor side effects and Covid-19 symptoms strengthen the bradykinin hypothesis, the researchers say.

ACE inhibitors are also known to cause a loss of taste and smell. Jacobson stresses, though, that this symptom is more likely due to the virus “affecting the cells surrounding olfactory nerve cells” than the direct effects of bradykinin.

Though still an emerging theory, the bradykinin hypothesis explains several other of Covid-19’s seemingly bizarre symptoms. Jacobson and his team speculate that leaky vasculature caused by bradykinin storms could be responsible for “Covid toes,” a condition involving swollen, bruised toes that some Covid-19 patients experience. Bradykinin can also mess with the thyroid gland, which could produce the thyroid symptoms recently observed in some patients.

The bradykinin hypothesis could also explain some of the broader demographic patterns of the disease’s spread. The researchers note that some aspects of the RAS system are sex-linked, with the genes for several relevant proteins (such as one called TMSB4X) located on the X chromosome. This means that “women… would have twice the levels of this protein than men,” a result borne out by the researchers’ data. In their paper, Jacobson’s team concludes that this “could explain the lower incidence of Covid-19 induced mortality in women.” A genetic quirk of the RAS could be giving women extra protection against the disease.

The bradykinin hypothesis provides a model that “contributes to a better understanding of Covid-19” and “adds novelty to the existing literature,” according to scientists Frank van de Veerdonk, Jos WM van der Meer, and Roger Little, who peer-reviewed the team’s paper. It predicts nearly all the disease’s symptoms, even ones (like bruises on the toes) that at first appear random, and further suggests new treatments for the disease.

As Jacobson and team point out, several drugs target aspects of the RAS and are already FDA approved to treat other conditions. They could arguably be applied to treating Covid-19 as well. Several, like danazol, stanozolol, and ecallantide, reduce bradykinin production and could potentially stop a deadly bradykinin storm. Others, like icatibant, reduce bradykinin signaling and could blunt its effects once it’s already in the body.

Interestingly, Jacobson’s team also suggests vitamin D as a potentially useful Covid-19 drug. The vitamin is involved in the RAS system and could prove helpful by reducing levels of another compound, known as REN. Again, this could stop potentially deadly bradykinin storms from forming. The researchers note that vitamin D has already been shown to help those with Covid-19. The vitamin is readily available over the counter, and around 20% of the population is deficient. If indeed the vitamin proves effective at reducing the severity of bradykinin storms, it could be an easy, relatively safe way to reduce the severity of the virus.

Other compounds could treat symptoms associated with bradykinin storms. Hymecromone, for example, could reduce hyaluronic acid levels, potentially stopping deadly hydrogels from forming in the lungs. And timbetasin could mimic the mechanism that the researchers believe protects women from more severe Covid-19 infections. All of these potential treatments are speculative, of course, and would need to be studied in a rigorous, controlled environment before their effectiveness could be determined and they could be used more broadly.

Covid-19 stands out for both the scale of its global impact and the apparent randomness of its many symptoms. Physicians have struggled to understand the disease and come up with a unified theory for how it works. Though as of yet unproven, the bradykinin hypothesis provides such a theory. And like all good hypotheses, it also provides specific, testable predictions — in this case, actual drugs that could provide relief to real patients.

The researchers are quick to point out that “the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.” As to the next step in the process, Jacobson is clear: “We have to get this message out.” His team’s finding won’t cure Covid-19. But if the treatments it points to pan out in the clinic, interventions guided by the bradykinin hypothesis could greatly reduce patients’ suffering — and potentially save lives.

The Biblical Flood That Will Drown California (Wired)

Tom Philpott, 08.29.20 8:00 AM

The Great Flood of 1861–1862 was a preview of what scientists expect to see again, and soon.

This story originally appeared on Mother Jones and is part of the Climate Desk collaboration.

In November 1860, a young scientist from upstate New York named William Brewer disembarked in San Francisco after a long journey that took him from New York City through Panama and then north along the Pacific coast. “The weather is perfectly heavenly,” he enthused in a letter to his brother back east. The fast-growing metropolis was already revealing the charms we know today: “large streets, magnificent buildings” adorned by “many flowers we [northeasterners] see only in house cultivations: various kinds of geraniums growing of immense size, dew plant growing like a weed, acacia, fuchsia, etc. growing in the open air.”

Flowery prose aside, Brewer was on a serious mission. Barely a decade after being claimed as a US state, California was plunged in an economic crisis. The gold rush had gone bust, and thousands of restive settlers were left scurrying about, hot after the next ever-elusive mineral bonanza. The fledgling legislature had seen fit to hire a state geographer to gauge the mineral wealth underneath its vast and varied terrain, hoping to organize and rationalize the mad lunge for buried treasure. The potential for boosting agriculture as a hedge against mining wasn’t lost on the state’s leaders. They called on the state geographer to deliver a “full and scientific description of the state’s rocks, fossils, soils, and minerals, and its botanical and zoological productions, together with specimens of same.”

The task of completing the fieldwork fell to the 32-year-old Brewer, a Yale-trained botanist who had studied cutting-edge agricultural science in Europe. His letters home, chronicling his four-year journey up and down California, form one of the most vivid contemporary accounts of its early statehood.

They also provide a stark look at the greatest natural disaster known to have befallen the western United States since European contact in the 16th century: the Great Flood of 1861–1862. The cataclysm cut off telegraph communication with the East Coast, swamped the state’s new capital, and submerged the entire Central Valley under as much as 15 feet of water. Yet in modern-day California—a region that author Mike Davis once likened to a “Book of the Apocalypse theme park,” where this year’s wildfires have already burned 1.4 million acres, and dozens of fires are still raging—the nearly forgotten biblical-scale flood documented by Brewer’s letters has largely vanished from the public imagination, replaced by traumatic memories of more recent earthquakes.

When it was thought of at all, the flood was once considered a thousand-year anomaly, a freak occurrence. But emerging science demonstrates that floods of even greater magnitude occurred every 100 to 200 years in California’s precolonial history. Climate change will make them more frequent still. In other words, the Great Flood was a preview of what scientists expect to see again, and soon. And this time, given California’s emergence as agricultural and economic powerhouse, the effects will be all the more devastating.

Barely a year after Brewer’s sunny initial descent from a ship in San Francisco Bay, he was back in the city, on a break. In a November 1861 letter home, he complained of a “week of rain.” In his next letter, two months later, Brewer reported jaw-dropping news: Rain had fallen almost continuously since he had last written—and now the entire Central Valley was underwater. “Thousands of farms are entirely underwater—cattle starving and drowning.”

Picking up the letter nine days later, he wrote that a bad situation had deteriorated. All the roads in the middle of the state are “impassable, so all mails are cut off.” Telegraph service, which had only recently been connected to the East Coast through the Central Valley, stalled. “The tops of the poles are under water!” The young state’s capital city, Sacramento, about 100 miles northeast of San Francisco at the western edge of the valley and the intersection of two rivers, was submerged, forcing the legislature to evacuate—and delaying a payment Brewer needed to forge ahead with his expedition.

The surveyor gaped at the sheer volume of rain. In a normal year, Brewer reported, San Francisco received about 20 inches. In the 10 weeks leading up to January 18, 1862, the city got “thirty-two and three-quarters inches and it is still raining!”

Brewer went on to recount scenes from the Central Valley that would fit in a Hollywood disaster epic. “An old acquaintance, a buccaro [cowboy], came down from a ranch that was overflowed,” he wrote. “The floor of their one-story house was six weeks under water before the house went to pieces.” Steamboats “ran back over the ranches fourteen miles from the [Sacramento] river, carrying stock [cattle], etc., to the hills,” he reported. He marveled at the massive impromptu lake made up of “water ice cold and muddy,” in which “winds made high waves which beat the farm homes in pieces.” As a result, “every house and farm over this immense region is gone.”

Eventually, in March, Brewer made it to Sacramento, hoping (without success) to lay hands on the state funds he needed to continue his survey. He found a city still in ruins, weeks after the worst of the rains. “Such a desolate scene I hope never to see again,” he wrote: “Most of the city is still under water, and has been for three months … Every low place is full—cellars and yards are full, houses and walls wet, everything uncomfortable.” The “better class of houses” were in rough shape, Brewer observed, but “it is with the poorer classes that this is the worst.” He went on: “Many of the one-story houses are entirely uninhabitable; others, where the floors are above the water are, at best, most wretched places in which to live.” He summarized the scene:

Many houses have partially toppled over; some have been carried from their foundations, several streets (now avenues of water) are blocked up with houses that have floated in them, dead animals lie about here and there—a dreadful picture. I don’t think the city will ever rise from the shock, I don’t see how it can.

Brewer’s account is important for more than just historical interest. In the 160 years since the botanist set foot on the West Coast, California has transformed from an agricultural backwater to one of the jewels of the US food system. The state produces nearly all of the almonds, walnuts, and pistachios consumed domestically; 90 percent or more of the broccoli, carrots, garlic, celery, grapes, tangerines, plums, and artichokes; at least 75 percent of the cauliflower, apricots, lemons, strawberries, and raspberries; and more than 40 percent of the lettuce, cabbage, oranges, peaches, and peppers.

And as if that weren’t enough, California is also a national hub for milk production. Tucked in amid the almond groves and vegetable fields are vast dairy operations that confine cows together by the thousands and produce more than a fifth of the nation’s milk supply, more than any other state. It all amounts to a food-production juggernaut: California generates $46 billion worth of food per year, nearly double the haul of its closest competitor among US states, the corn-and-soybean behemoth Iowa.

You’ve probably heard that ever-more frequent and severe droughts threaten the bounty we’ve come to rely on from California. Water scarcity, it turns out, isn’t the only menace that stalks the California valleys that stock our supermarkets. The opposite—catastrophic flooding—also occupies a niche in what Mike Davis, the great chronicler of Southern California’s sociopolitical geography, has called the state’s “ecology of fear.” Indeed, his classic book of that title opens with an account of a 1995 deluge that saw “million-dollar homes tobogganed off their hill-slope perches” and small children and pets “sucked into the deadly vortices of the flood channels.”

Yet floods tend to be less feared than rival horsemen of the apocalypse in the state’s oft-stimulated imagination of disaster. The epochal 2011–2017 drought, with its missing-in-action snowpacks and draconian water restrictions, burned itself into the state’s consciousness. Californians are rightly terrified of fires like the ones that roared through the northern Sierra Nevada foothills and coastal canyons near Los Angeles in the fall of 2018, killing nearly 100 people and fouling air for miles around, or the current LNU Lightning Complex fire that has destroyed nearly 1,000 structures and killed five people in the region between Sacramento and San Francisco. Many people are frightfully aware that a warming climate will make such conflagrations increasingly frequent. And “earthquake kits” are common gear in closets and garages all along the San Andreas Fault, where the next Big One lurks. Floods, though they occur as often in Southern and Central California as they do anywhere in the United States, don’t generate quite the same buzz.

But a growing body of research shows there’s a flip side to the megadroughts Central Valley farmers face: megafloods. The region most vulnerable to such a water-drenched cataclysm in the near future is, ironically enough, California’s great arid, sinking food production basin, the beleaguered behemoth of the US food system: the Central Valley. Bordered on all sides by mountains, the Central Valley stretches 450 miles long, is on average 50 miles wide, and occupies a land mass of 18,000 square miles, or 11.5 million acres—roughly equivalent in size to Massachusetts and Vermont combined. Wedged between the Sierra Nevada to the east and the Coast Ranges to the west, it’s one of the globe’s greatest expanses of fertile soil and temperate weather. For most Americans, it’s easy to ignore the Central Valley, even though it’s as important to eaters as Hollywood is to moviegoers or Silicon Valley is to smartphone users. Occupying less than 1 percent of US farmland, the Central Valley churns out a quarter of the nation’s food supply.

At the time of the Great Flood, the Central Valley was still mainly cattle ranches, the farming boom a ways off. Late in 1861, the state suddenly emerged from a two-decade dry spell when monster storms began lashing the West Coast from Baja California to present-day Washington state. In central California, the deluge initially took the form of 10 to 15 feet of snow dumped onto the Sierra Nevada, according to research by the UC Berkeley paleoclimatologist B. Lynn Ingram and laid out in her 2015 book, The West Without Water, cowritten with Frances Malamud-Roam. Ingram has emerged as a kind of Cassandra of drought and flood risks in the western United States. Soon after the blizzards came days of warm, heavy rain, which in turn melted the enormous snowpack. The resulting slurry cascaded through the Central Valley’s network of untamed rivers.

As floodwater gathered in the valley, it formed a vast, muddy, wind-roiled lake, its size “rivaling that of Lake Superior,” covering the entire Central Valley floor, from the southern slopes of the Cascade Mountains near the Oregon border to the Tehachapis, south of Bakersfield, with depths in some places exceeding 15 feet.

At least some of the region’s remnant indigenous population saw the epic flood coming and took precautions to escape devastation, Ingram reports, quoting an item in the Nevada City Democrat on January 11, 1862:

We are informed that the Indians living in the vicinity of Marysville left their abodes a week or more ago for the foothills predicting an unprecedented overflow. They told the whites that the water would be higher than it has been for thirty years, and pointed high up on the trees and houses where it would come. The valley Indians have traditions that the water occasionally rises 15 or 20 feet higher than it has been at any time since the country was settled by whites, and as they live in the open air and watch closely all the weather indications, it is not improbable that they may have better means than the whites of anticipating a great storm.

All in all, thousands of people died, “one-third of the state’s property was destroyed, and one home in eight was destroyed completely or carried away by the floodwaters.” As for farming, the 1862 megaflood transformed valley agriculture, playing a decisive role in creating today’s Anglo-dominated, crop-oriented agricultural powerhouse: a 19th-century example of the “disaster capitalism” that Naomi Klein describes in her 2007 book, The Shock Doctrine.

Prior to the event, valley land was still largely owned by Mexican rancheros who held titles dating to Spanish rule. The 1848 Treaty of Guadalupe Hidalgo, which formalized California’s transfer from Mexican to US control, gave rancheros US citizenship and obligated the new government to honor their land titles. The treaty terms met with vigorous resentment from white settlers eager to shift from gold mining to growing food for the new state’s burgeoning cities. The rancheros thrived during the gold rush, finding a booming market for beef in mining towns. By 1856, their fortunes had shifted. A severe drought that year cut production, competition from emerging US settler ranchers meant lower prices, and punishing property taxes—imposed by land-poor settler politicians—caused a further squeeze. “As a result, rancheros began to lose their herds, their land, and their homes,” writes the historian Lawrence James Jelinek.

The devastation of the 1862 flood, its effects magnified by a brutal drought that started immediately afterward and lasted through 1864, “delivered the final blow,” Jelinek writes. Between 1860 and 1870, California’s cattle herd, concentrated in the valley, plunged from 3 million to 630,000. The rancheros were forced to sell their land to white settlers at pennies per acre, and by 1870 “many rancheros had become day laborers in the towns,” Jelinek reports. The valley’s emerging class of settler farmers quickly turned to wheat and horticultural production and set about harnessing and exploiting the region’s water resources, both those gushing forth from the Sierra Nevada and those beneath their feet.

Despite all the trauma it generated and the agricultural transformation it cemented in the Central Valley, the flood quickly faded from memory in California and the broader United States. To his shocked assessment of a still-flooded and supine Sacramento months after the storm, Brewer added a prophetic coda:

No people can so stand calamity as this people. They are used to it. Everyone is familiar with the history of fortunes quickly made and as quickly lost. It seems here more than elsewhere the natural order of things. I might say, indeed, that the recklessness of the state blunts the keener feelings and takes the edge from this calamity.

Indeed, the new state’s residents ended up shaking off the cataclysm. What lesson does the Great Flood of 1862 hold for today? The question is important. Back then, just around 500,000 people lived in the entire state, and the Central Valley was a sparsely populated badland. Today, the valley has a population of 6.5 million people and boasts the state’s three fastest-growing counties. Sacramento (population 501,344), Fresno (538,330), and Bakersfield (386,839) are all budding metropolises. The state’s long-awaited high-speed train, if it’s ever completed, will place Fresno residents within an hour of Silicon Valley, driving up its appeal as a bedroom community.

In addition to the potentially vast human toll, there’s also the fact that the Central Valley has emerged as a major linchpin of the US and global food system. Could it really be submerged under fifteen feet of water again—and what would that mean?

In less than two centuries as a US state, California has maintained its reputation as a sunny paradise while also enduring the nation’s most erratic climate: the occasional massive winter storm roaring in from the Pacific; years-long droughts. But recent investigations into the geological record show that, by the region’s own standards, the years since statehood have been relatively stable.

One avenue of this research is the study of the regular megadroughts, the most recent of which occurred just a century before Europeans made landfall on the North American west coast. As we are now learning, those decades-long arid stretches were just as regularly interrupted by enormous storms—many even grander than the one that began in December 1861. (Indeed, that event itself was directly preceded and followed by serious droughts.) In other words, the same patterns that make California vulnerable to droughts also make it ripe for floods.

Beginning in the 1980s, scientists including B. Lynn Ingram began examining the streams and banks of the enormous delta network that serves as the bathtub drain through which most Central Valley runoff has flowed for millennia, reaching the ocean at San Francisco Bay. (Now-vanished Tulare Lake gathered runoff in the southern part of the valley.) They took deep-core samples from river bottoms, because big storms that overflow the delta’s banks carry loads of soil and silt from the Sierra Nevada and deposit a portion of it in the delta. They also looked at fluctuations in old plant material buried in the sediment layers. Plant species that thrive in freshwater suggest wet periods, as heavy runoff from the mountains crowds out seawater. Salt-tolerant species denote dry spells, as sparse mountain runoff allows seawater to work its way into the delta.

What they found was stunning. The Great Flood of 1862 was no one-off black-swan event. Summarizing the science, Ingram and USGS researcher Michael Dettinger deliver the dire news: floods comparable to—and sometimes much more intense than—the 1861–1862 catastrophe occurred during the periods 1235–1360, 1395–1410, 1555–1615, 1750–1770, and 1810–1820; “that is, one megaflood every 100 to 200 years.” They also discovered that the 1862 flood didn’t appear in the sediment record at some sites that showed evidence of multiple massive events—suggesting that it was actually smaller than many of the floods that have inundated California over the centuries.

During its time as a US food-production powerhouse, California has been known for its periodic droughts and storms. But Ingram and Dettinger’s work pulls the lens back to view the broader timescale, revealing the region’s swings between megadroughts and megastorms—ones more than severe enough to challenge concentrated food production, much less dense population centers.

The dynamics of these storms themselves explain why the state is also prone to such swings. Meteorologists have known for decades that those tempests that descend upon California over the winter—and from which the state receives the great bulk of its annual precipitation—carry moisture from the tropical Pacific. In the late 1990s, scientists discovered that these “pineapple expresses,” as TV weather presenters call them, are a subset of a global weather phenomenon: long, wind-driven plumes of vapor about a mile above the sea that carry moisture from warm areas near the equator on a northeasterly path to colder, drier regions toward the poles. They carry so much moisture—often more than 25 times the flow of the Mississippi River, over thousands of miles—that they’ve been dubbed “atmospheric rivers.”

In a pioneering 1998 paper, researchers Yong Zhu and Reginald E. Newell found that nearly all the vapor transport from the subtropics (the regions just poleward of the tropics in each hemisphere) toward the poles occurred in just five or six narrow bands. And California, it turns out, is the prime spot on the western side of the northern hemisphere for catching them at full force during the winter months.

As Ingram and Dettinger note, atmospheric rivers are the primary vector for California’s floods, from pre-Columbian cataclysms and the Great Flood of 1862 to the smaller floods that regularly course through the state. Between 1950 and 2010, Ingram and Dettinger write, atmospheric rivers “caused more than 80 percent of flooding in California rivers and 81 percent of the 128 most well-documented levee breaks in California’s Central Valley.”

Paradoxically, they are at least as much a lifeblood as a curse. Between eight and 11 atmospheric rivers hit California every year, the great majority of them doing no major damage, and they deliver between 30 and 50 percent of the state’s rain and snow. But the big ones are damaging indeed. Other researchers are reaching similar conclusions. In a study released in December 2019, a team from the US Army Corps of Engineers and the Scripps Institution of Oceanography found that atmospheric-river storms accounted for 84 percent of insured flood damages in the western United States between 1978 and 2017; the 13 biggest storms wrought more than half the damage.

So the state—and a substantial portion of our food system—exists on a razor’s edge between droughts and floods, its annual water resources decided by massive, increasingly fickle transfers of moisture from the tropical Pacific. As Dettinger puts it, the “largest storms in California’s precipitation regime not only typically end the state’s frequent droughts, but their fluctuations also cause those droughts in the first place.”

We know that before human civilization began spewing millions of tons of greenhouse gases into the atmosphere annually, California was due “one megaflood every 100 to 200 years”—and the last one hit more than a century and a half ago. What happens to this outlook when you heat up the atmosphere by 1 degree Celsius—and are on track to hit at least another half-degree Celsius increase by midcentury?

That was the question posed by Daniel Swain and a team of researchers at UCLA’s Department of Atmospheric and Oceanic Sciences in a series of studies, the first of which was published in 2018. They took California’s long pattern of droughts and floods and mapped it onto climate models fed with data specific to the region, looking out to century’s end.

What they found isn’t comforting. As the tropical Pacific Ocean and the atmosphere just above it warm, more seawater evaporates, feeding ever bigger atmospheric rivers gushing toward the California coast. As a result, the potential for storms on the scale of the ones that triggered the Great Flood has increased “more than threefold,” they found. So an event expected to happen on average every 200 years will now happen every 65 or so. It is “more likely than not we will see one by 2060,” and it could plausibly happen again before century’s end, they concluded.
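
The arithmetic behind those numbers is simple enough to show. Below is a minimal back-of-the-envelope sketch (my own illustration, not the study’s code); the 200-year baseline, the threefold multiplier, and the roughly 40-year horizon to 2060 are taken from the figures quoted above.

```python
# Back-of-the-envelope sketch (illustrative, not from Swain et al.): how a
# threefold rise in the yearly odds of an 1862-scale storm shortens its
# average return period.

baseline_return_period = 200              # years; "one megaflood every 100 to 200 years"
baseline_annual_prob = 1 / baseline_return_period

multiplier = 3                            # the study found "more than threefold"
new_annual_prob = baseline_annual_prob * multiplier
new_return_period = 1 / new_annual_prob   # 200 / 3 is roughly 67 years, i.e. "every 65 or so"

# Chance of at least one such event in the ~40 years to 2060, treating years
# as independent. At exactly 3x this is about 45 percent; a somewhat larger
# multiplier (or a longer horizon) is what makes it "more likely than not."
prob_by_2060 = 1 - (1 - new_annual_prob) ** 40

print(f"new return period: ~{new_return_period:.0f} years")
print(f"chance of at least one event by 2060: ~{prob_by_2060:.0%}")
```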

As the risk of a catastrophic event increases, so will the frequency of what they call “precipitation whiplash”: extremely wet seasons interrupted by extremely dry ones, and vice versa. The winter of 2016–2017 provides a template. That year, a series of atmospheric-river storms filled reservoirs and at one point threatened a major flood in the northern Central Valley, abruptly ending the worst multiyear drought in the state’s recorded history.

Swings on that magnitude normally occur a handful of times each century, but in the model by Swain’s team, “it goes from something that happens maybe once in a generation to something that happens two or three times,” he told me in an interview. “Setting aside a repeat of 1862, these less intense events could still seriously test the limits of our water infrastructure.” Like other efforts to map climate change onto California’s weather, this one found that drought years characterized by low winter precipitation would likely increase—in this case, by a factor of as much as two, compared with mid-20th-century patterns. But extreme-wet winter seasons, accumulating at least as much precipitation as 2016–2017, will grow even more: they could be three times as common as they were before the atmosphere began its current warming trend.

While lots of very wet years—at least the ones that don’t reach 1861–1862 levels—might sound encouraging for food production in the Central Valley, there’s a catch, Swain said. His study looked purely at precipitation, independent of whether it fell as rain or snow. A growing body of research suggests that as the climate warms, California’s precipitation mix will shift significantly in favor of rain over snow. That’s dire news for our food system, because the Central Valley’s vast irrigation networks are geared to channeling the slow, predictable melt of the snowpack into usable water for farms. Water that falls as rain is much harder to capture and bend to the slow-release needs of agriculture.

In short, California’s climate, chaotic under normal conditions, is about to get weirder and wilder. Indeed, it’s already happening.

What if an 1862-level flood, which is overdue and “more likely than not” to occur within the next few decades, were to hit present-day California?

Starting in 2008, the USGS set out to answer just that question, launching a project called the ARkStorm (for “atmospheric river 1,000 storm”) Scenario. The effort was modeled on a previous USGS push to get a grip on another looming California cataclysm: a massive earthquake along the San Andreas Fault. In 2008, USGS produced the ShakeOut Earthquake Scenario, a “detailed depiction of a hypothetical magnitude 7.8 earthquake.” The study “served as the centerpiece of the largest earthquake drill in US history, involving over five thousand emergency responders and the participation of over 5.5 million citizens,” the USGS later reported.

That same year, the agency assembled a team of 117 scientists, engineers, and public-policy and insurance experts to model what kind of impact a monster storm would have on modern California.

At the time, Lucy Jones served as the chief scientist for the USGS’s Multi Hazards Demonstration Project, which oversaw both projects. A seismologist by training, Jones spent her time studying the devastations of earthquakes and convincing policy makers to invest resources into preparing for them. The ARkStorm project took her aback, she told me. The first thing she and her team did was ask, What’s the biggest flood in California we know about? “I’m a fourth-generation Californian who studies disaster risk, and I had never heard of the Great Flood of 1862,” she said. “None of us had heard of it,” she added—not even the meteorologists knew about what’s “by far the biggest disaster ever in California and the whole Southwest” over the past two centuries.

At first, the meteorologists were constrained in modeling a realistic megastorm by a lack of data; solid rainfall-gauge measures go back only a century. But after hearing about the 1862 flood, the ARkStorm team dug into research from Ingram and others for information about megastorms before US statehood and European contact. They were shocked to learn that the previous 1,800 years had about six events that were more severe than 1862, along with several more that were roughly of the same magnitude. What they found was that a massive flood is every bit as likely to strike California, and as imminent, as a massive quake.

Even with this information, modeling a massive flood proved more challenging than simulating a massive earthquake. “We seismologists do this all the time—we create synthetic seismographs,” she said. Want to see what a magnitude 7.8 quake along the San Andreas Fault would look like? Easy, she said. Meteorologists, by contrast, are fixated on accurately predicting near-future events; “creating a synthetic event wasn’t something they had ever done.” They couldn’t just re-create the 1862 event, because most of the information we have about it is piecemeal, drawn from eyewitness accounts and sediment samples.

To get their heads around how to construct a reasonable approximation of a megastorm, the team’s meteorologists went looking for well-documented 20th-century events that could serve as a model. They settled on two: a series of big storms in 1969 that hit Southern California hardest and a 1986 cluster that did the same to the northern part of the state. To create the ARkStorm scenario, they stitched the two together. Doing so gave the researchers a rich and regionally precise trove of data to sketch out a massive Big One storm scenario.

There was one problem: While the fictional ARkStorm is indeed a massive event, it’s still significantly smaller than the one that caused the Great Flood of 1862. “Our [hypothetical storm] only had total rain for 25 days, while there were 45 days in 1861 to ’62,” Jones said. They plunged ahead anyway, for two reasons. One was that they had robust data on the two 20th-century storm events, giving disaster modelers plenty to work with. The second was that they figured a smaller-than-1862 catastrophe would help build public buy-in, by making the project hard to dismiss as an unrealistic figment of scaremongering bureaucrats.

What they found stunned them—and should stun anyone who relies on California to produce food (not to mention anyone who lives in the state). The headline number: $725 billion in damage, nearly four times what the USGS’s seismology team arrived at for its massive-quake scenario ($200 billion). For comparison, the two most costly natural disasters in modern US history—Hurricane Katrina in 2005 and Harvey in 2017—racked up $166 billion and $130 billion, respectively. The ARkStorm would “flood thousands of square miles of urban and agricultural land, result in thousands of landslides, [and] disrupt lifelines throughout the state for days or weeks,” the study reckoned. Altogether, 25 percent of the state’s buildings would be damaged.

In their model, 25 days of relentless rains overwhelm the Central Valley’s flood-control infrastructure. Then large swaths of the northern part of the Central Valley go under as much as 20 feet of water. The southern part, the San Joaquin Valley, gets off lighter; but a miles-wide band of floodwater collects in the lowest-elevation regions, ballooning out to encompass the expanse that was once the Tulare Lake bottom and stretching to the valley’s southern extreme. Most metropolitan parts of the Bay Area escape severe damage, but swaths of Los Angeles and Orange Counties experience “extensive flooding.”

As Jones stressed to me in our conversation, the ARkStorm scenario is a cautious approximation; a megastorm that matches 1862 or its relatively recent antecedents could plausibly bury the entire Central Valley underwater, northern tip to southern. As the report puts it: “Six megastorms that were more severe than 1861–1862 have occurred in California during the last 1800 years, and there is no reason to believe similar storms won’t occur again.”

A 21st-century megastorm would fall on a region quite different from gold rush–era California. For one thing, it’s much more populous. While the ARkStorm reckoning did not estimate a death toll, it warned of a “substantial loss of life” because “flood depths in some areas could realistically be on the order of 10–20 feet.”

Then there’s the transformation of farming since then. The 1862 storm drowned an estimated 200,000 head of cattle, about a quarter of the state’s entire herd. Today, the Central Valley houses nearly 4 million beef and dairy cows. While cattle continue to be an important part of the region’s farming mix, they no longer dominate it. Today the valley is increasingly given over to intensive almond, pistachio, and grape plantations, representing billions of dollars of investments in crops that take years to establish, are expected to flourish for decades, and could be wiped out by a flood.

Apart from economic losses, “the evolution of a modern society creates new risks from natural disasters,” Jones told me. She cited electric power grids, which didn’t exist in mid-19th-century California. A hundred years ago, when electrification was taking off, extended power outages caused inconveniences. Now, loss of electricity can mean death for vulnerable populations (think hospitals, nursing homes, and prisons). Another example is the intensification of farming. When a few hundred thousand cattle roamed the sparsely populated Central Valley in 1861, their drowning posed relatively limited biohazard risks, although, according to one contemporary account, in post-flood Sacramento, there were a “good many drowned hogs and cattle lying around loose in the streets.”

Today, however, several million cows are packed into massive feedlots in the southern Central Valley, their waste often concentrated in open-air liquid manure lagoons, ready to be swept away and blended into a fecal slurry. Low-lying Tulare County houses nearly 500,000 dairy cows, with 258 operations holding on average 1,800 cattle each. Mature modern dairy cows are massive creatures, weighing around 1,500 pounds each and standing nearly 5 feet tall at the front shoulder. Imagine trying to quickly move such beasts by the thousands out of the path of a flood—and the consequences of failing to do so.

A massive flood could severely pollute soil and groundwater in the Central Valley, and not just from rotting livestock carcasses and millions of tons of concentrated manure. In a 2015 paper, a team of USGS researchers tried to sum up the myriad toxic substances that would be stirred up and spread around by massive storms and floods. The cities of 160 years ago could not boast municipal wastewater facilities, which filter pathogens and pollutants in human sewage, nor municipal dumps, which concentrate often-toxic garbage. In the region’s teeming 21st-century urban areas, those vital sanitation services would become major threats. The report projects that a toxic soup of “petroleum, mercury, asbestos, persistent organic pollutants, molds, and soil-borne or sewage-borne pathogens” would spread across much of the valley, as would concentrated animal manure, fertilizer, pesticides, and other industrial chemicals.

The valley’s southernmost county, Kern, is a case study in the region’s vulnerabilities. Kern’s farmers lead the entire nation in agricultural output by dollar value, annually producing $7 billion worth of foodstuffs like almonds, grapes, citrus, pistachios, and milk. The county houses more than 156,000 dairy cows in facilities averaging 3,200 head each. That frenzy of agricultural production means loads of chemicals on hand; every year, Kern farmers use around 30 million pounds of pesticides, second only to Fresno among California counties. (Altogether, five San Joaquin Valley counties use about half of the more than 200 million pounds of pesticides applied in California.)

Kern is also one of the nation’s most prodigious oil-producing counties. Its vast array of pump jacks, many of them located in farm fields, produce 70 percent of California’s entire oil output. It’s also home to two large oil refineries. If Kern County were a state, it would be the nation’s seventh-leading oil-producing one, churning out twice as much crude as Louisiana. In a massive storm, floodwaters could pick up a substantial amount of highly toxic petroleum and byproducts. Again, in the ARkStorm scenario, Kern County gets hit hard by rain but mostly escapes the worst flooding. The real “Other Big One” might not be so kind, Jones said.

In the end, the USGS team could not estimate the level of damage that would be visited upon the Central Valley’s soil and groundwater by a megaflood: too many variables, too many toxins and biohazards that could be sucked into the vortex. They concluded that “flood-related environmental contamination impacts are expected to be the most widespread and substantial in lowland areas of the Central Valley, the Sacramento–San Joaquin River Delta, the San Francisco Bay area, and portions of the greater Los Angeles metroplex.”

Jones said the initial reaction to the 2011 release of the ARkStorm report among California’s policymakers and emergency managers was skepticism: “Oh, no, that’s too big—it’s impossible,” they would say. “We got lots of traction with the earthquake scenario, and when we did the big flood, nobody wanted to listen to us,” she said.

But after years of patiently informing the state’s decisionmakers that such a disaster is just as likely as a megaquake—and likely much more devastating—the word is getting out. She said the ARkStorm message probably helped prepare emergency managers for the severe storms of February 2017. That month, the massive Oroville Dam in the Sierra Nevada foothills very nearly failed, threatening to send a 30-foot-tall wall of water gushing into the northern Central Valley. As the spillway teetered on the edge of collapse, officials ordered the evacuation of 188,000 people in the communities below. The entire California National Guard was put on notice to mobilize if needed—the first such order since the 1992 Rodney King riots in Los Angeles. Although the dam ultimately held up, the Oroville incident illustrates the challenges of moving hundreds of thousands of people out of harm’s way on short notice.

The evacuation order “unleashed a flood of its own, sending tens of thousands of cars simultaneously onto undersize roads, creating hours-long backups that left residents wondering if they would get to high ground before floodwaters overtook them,” the Sacramento Bee reported. Eight hours after the evacuation, highways were still jammed with slow-moving traffic. A California Highway Patrol spokesman summed up the scene for the Bee:

Unprepared citizens who were running out of gas and their vehicles were becoming disabled in the roadway. People were utilizing the shoulder, driving the wrong way. Traffic collisions were occurring. People fearing for their lives, not abiding by the traffic laws. All combined, it created big problems. It ended up pure, mass chaos.

Even so, Jones said the evacuation went as smoothly as could be expected and likely would have saved thousands of lives if the dam had burst. “But there are some things you can’t prepare for.” Obviously, getting area residents to safety was the first priority, but animal inhabitants were vulnerable, too. If the dam had burst, she said, “I doubt they would have been able to save cattle.”

As the state’s ever-strained emergency-service agencies prepare for the Other Big One, there’s evidence other agencies are struggling to grapple with the likelihood of a megaflood. In the wake of the 2017 near-disaster at Oroville, state agencies spent more than $1 billion repairing the damaged dam and bolstering it against future storms. Just as work was being completed in fall 2018, the Federal Energy Regulatory Commission assessed the situation and found that a “probable maximum flood”—on the scale of the ARkStorm—would likely overwhelm the dam. FERC called on the state to invest in a “more robust and resilient design” to prevent a future cataclysm. The state’s Department of Water Resources responded by launching a “needs assessment” of the dam’s safety that’s due to wrap up in 2020.

Of course, in a state beset by the increasing threat of wildfires in populated areas as well as by earthquakes, funds for disaster preparation are stretched thin. All in all, Jones said, “we’re still much more prepared for a quake than a flood.” Then again, it’s hard to conceive of how we could effectively prevent a 21st-century repeat of the Great Flood, or how we could fully prepare for the low-lying valley that runs along the center of California like a bathtub—now packed with people, livestock, manure, crops, petrochemicals, and pesticides—to be suddenly transformed into a storm-roiled inland sea.

The aliens among us. How viruses shape the world (The Economist)

They don’t just cause pandemics

Leaders – Aug 22nd 2020 edition

HUMANS THINK of themselves as the world’s apex predators. Hence the silence of sabre-tooth tigers, the absence of moas from New Zealand and the long list of endangered megafauna. But SARS-CoV-2 shows how people can also end up as prey. Viruses have caused a litany of modern pandemics, from covid-19 to HIV/AIDS to the influenza outbreak of 1918-20, which killed many more people than the first world war. Before that, the colonisation of the Americas by Europeans was abetted—and perhaps made possible—by epidemics of smallpox, measles and influenza brought unwittingly by the invaders, which annihilated many of the original inhabitants.

The influence of viruses on life on Earth, though, goes far beyond the past and present tragedies of a single species, however pressing they seem. Though the study of viruses began as an investigation into what appeared to be a strange subset of pathogens, recent research puts them at the heart of an explanation of the strategies of genes, both selfish and otherwise.

Viruses are unimaginably varied and ubiquitous. And it is becoming clear just how much they have shaped the evolution of all organisms since the very beginnings of life. In this, they demonstrate the blind, pitiless power of natural selection at its most dramatic. And—for one group of brainy bipedal mammals that viruses helped create—they also present a heady mix of threat and opportunity.

As our essay in this week’s issue explains, viruses are best thought of as packages of genetic material that exploit another organism’s metabolism in order to reproduce. They are parasites of the purest kind: they borrow everything from the host except the genetic code that makes them what they are. They strip down life itself to the bare essentials of information and its replication. If the abundance of viruses is anything to go by, that is a very successful strategy indeed.

The world is teeming with them. One analysis of seawater found 200,000 different viral species, and it was not setting out to be comprehensive. Other research suggests that a single litre of seawater may contain more than 100bn virus particles, and a kilo of dried soil ten times that number. Altogether, according to calculations on the back of a very big envelope, the world might contain 10³¹ of the things—that is, one followed by 31 zeros, far outnumbering all other forms of life on the planet.
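
To give a sense of how such an envelope gets filled in, here is an illustrative order-of-magnitude check of my own (not The Economist’s calculation). It uses a standard estimate of global ocean volume together with the surface-water concentration quoted above; because viral concentrations fall off sharply in the deep ocean, the naive product overshoots the widely cited 10³¹ global total by roughly a factor of ten.

```python
import math

# Illustrative order-of-magnitude check (my own assumed figures, not The
# Economist's calculation).

OCEAN_VOLUME_LITRES = 1.3e21          # roughly 1.3 billion cubic kilometres of seawater
SURFACE_VIRUSES_PER_LITRE = 1e11      # "more than 100bn virus particles" per litre of surface water

naive_total = OCEAN_VOLUME_LITRES * SURFACE_VIRUSES_PER_LITRE

# Prints ~10^32: an overestimate, because the surface concentration is far
# higher than a depth-averaged one, but within sight of the quoted 10^31.
print(f"naive ocean total: ~10^{math.log10(naive_total):.0f} virus particles")
```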

As far as anyone can tell, viruses—often of many different sorts—have adapted to attack every organism that exists. One reason they are powerhouses of evolution is that they oversee a relentless and prodigious slaughter, mutating as they do so. This is particularly clear in the oceans, where a fifth of single-celled plankton are killed by viruses every day. Ecologically, this promotes diversity by scything down abundant species, thus making room for rarer ones. The more common an organism, the more likely it is that a local plague of viruses specialised to attack it will develop, and so keep it in check.

This propensity to cause plagues is also a powerful evolutionary stimulus for prey to develop defences, and these defences sometimes have wider consequences. For example, one explanation for why a cell may deliberately destroy itself is if its sacrifice lowers the viral load on closely related cells nearby. That way, its genes, copied in neighbouring cells, are more likely to survive. It so happens that such altruistic suicide is a prerequisite for cells to come together and form complex organisms, such as pea plants, mushrooms and human beings.

The other reason viruses are engines of evolution is that they are transport mechanisms for genetic information. Some viral genomes end up integrated into the cells of their hosts, where they can be passed down to those organisms’ descendants. Between 8% and 25% of the human genome seems to have such viral origins. But the viruses themselves can in turn be hijacked, and their genes turned to new uses. For example, the ability of mammals to bear live young is a consequence of a viral gene being modified to permit the formation of placentas. And even human brains may owe their development in part to the movement within them of virus-like elements that create genetic differences between neurons within a single organism.

Evolution’s most enthralling insight is that breathtaking complexity can emerge from the sustained, implacable and nihilistic competition within and between organisms. The fact that the blind watchmaker has equipped you with the capacity to read and understand these words is in part a response to the actions of swarms of tiny, attacking replicators that have been going on, probably, since life first emerged on Earth around 4bn years ago. It is a startling example of that principle in action—and viruses have not finished yet.

Humanity’s unique, virus-chiselled consciousness opens up new avenues to deal with the viral threat and to exploit it. This starts with the miracle of vaccination, which defends against a pathogenic attack before it is launched. Thanks to vaccines, smallpox is no more, having taken some 300m lives in the 20th century. Polio will one day surely follow. New research prompted by the covid-19 pandemic will enhance the power to examine the viral realm and the best responses to it that bodies can muster—taking the defence against viruses to a new level.

Another avenue for progress lies in the tools for manipulating organisms that will come from an understanding of viruses and the defences against them. Early versions of genetic engineering relied on restriction enzymes—molecular scissors with which bacteria cut up viral genes and which biotechnologists employ to move genes around. The latest iteration of biotechnology, gene editing letter by letter, which is known as CRISPR, makes use of a more precise antiviral mechanism.

From the smallest beginnings

The natural world is not kind. A virus-free existence is an impossibility so deeply unachievable that its desirability is meaningless. In any case, the marvellous diversity of life rests on viruses which, as much as they are a source of death, are also a source of richness and of change. Marvellous, too, is the prospect of a world where viruses become a source of new understanding for humans—and kill fewer of them than ever before. ■

Correction: An earlier version of this article got its maths wrong. 10³¹ is one followed by 31 zeroes, not ten followed by 31 zeroes as we first wrote. Sorry.