Stories may be the most overlooked climate solution of all.
Devi Lockwood
December 23, 2021
There is a lot of shouting about climate change, especially in North America and Europe. This makes it easy for the rest of the world to fall into a kind of silence—for Westerners to assume that they have nothing to add and should let the so-called “experts” speak. But we all need to be talking about climate change and amplifying the voices of those suffering the most.
Climate science is crucial, but by contextualizing that science with the stories of people actively experiencing climate change, we can begin to think more creatively about technological solutions.
This needs to happen not only at major international gatherings like COP26, but also in an everyday way. In any powerful rooms where decisions are made, there should be people who can speak firsthand about the climate crisis. Storytelling is an intervention into climate silence, an invitation to use the ancient human technology of connecting through language and narrative to counteract inaction. It is a way to get often powerless voices into powerful rooms.
That’s what I attempted to do by documenting stories of people already experiencing the effects of a climate in crisis.
In 2013, I was living in Boston during the marathon bombing. The city was put on lockdown, and when it lifted, all I wanted was to go outside: to walk and breathe and hear the sounds of other people. I needed to connect, to remind myself that not everyone is murderous. In a fit of inspiration, I cut open a broccoli box and wrote “Open call for stories” in Sharpie.
I wore the cardboard sign around my neck. People mostly stared. But some approached me. Once I started listening to strangers, I didn’t want to stop.
That summer, I rode my bicycle down the Mississippi River on a mission to listen to any stories that people had to share. I brought the sign with me. One story was so sticky that I couldn’t stop thinking about it for months, and it ultimately set me off on a trip around the world.
I met 57-year-old Franny Connetti 80 miles south of New Orleans, when I stopped in front of her office to check the air in my tires; she invited me in to get out of the afternoon sun. Franny shared her lunch of fried shrimp with me. Between bites she told me how Hurricane Isaac had washed away her home and her neighborhood in 2012.
Despite that tragedy, she and her husband moved back to their plot of land, in a mobile home, just a few months after the storm.
“We fight for the protection of our levees. We fight for our marsh every time we have a hurricane,” she told me. “I couldn’t imagine living anywhere else.”
Twenty miles ahead, I could see where the ocean lapped over the road at high tide. “Water on Road,” an orange sign read. Locals jokingly refer to the endpoint of Louisiana State Highway 23 as “The End of the World.” Imagining the road I had been biking underwater was chilling.
The author at Monasavu Dam in Fiji in 2014.
Here was one front line of climate change, one story. What would it mean, I wondered, to put this in dialogue with stories from other parts of the world—from other front lines with localized impacts that were experienced through water? My goal became to listen to and amplify those stories.
Water is how most of the world will experience climate change. It’s not a human construct, like a degree Celsius. It’s something we acutely see and feel. When there’s not enough water, crops die, fires rage, and people thirst. When there’s too much, water becomes a destructive force, washing away homes and businesses and lives. It’s almost always easier to talk about water than to talk about climate change. But the two are deeply intertwined.
I also set out to address another problem: the language we use to discuss climate change is often abstract and inaccessible. We hear about feet of sea-level rise or parts per million of carbon dioxide in the atmosphere, but what does this really mean for people’s everyday lives? I thought storytelling might bridge this divide.
One of the first stops on my journey was Tuvalu, a low-lying coral atoll nation in the South Pacific, 585 miles south of the equator. Home to around 10,000 people, Tuvalu is on track to become uninhabitable in my lifetime.
In 2014 Tauala Katea, a meteorologist, opened his computer to show me an image of a recent flood on one island. Seawater had bubbled up under the ground near where we were sitting. “This is what climate change looks like,” he said.
“In 2000, Tuvaluans living in the outer islands noticed that their taro and pulaka crops were suffering,” he said. “The root crops seemed rotten, and the size was getting smaller and smaller.” Taro and pulaka, two starchy staples of Tuvaluan cuisine, are grown in pits dug underground.
Tauala and his team traveled to the outer islands to take soil samples. The culprit was saltwater intrusion linked to sea-level rise. The seas around Tuvalu have been rising four millimeters per year since measurements began in the early 1990s. That might sound like a small amount, but on an atoll whose highest point is only 13 feet above sea level, the change has a dramatic impact on Tuvaluans’ access to drinking water.
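To make that concrete, here is a minimal back-of-envelope sketch in Python; the 4 mm/yr rate comes from the passage above, while the start year and checkpoints are assumptions for illustration.

```python
# Cumulative sea-level rise at the ~4 mm/yr rate cited above, measured
# from the early 1990s. Start year and checkpoints are assumptions.
RATE_MM_PER_YEAR = 4.0
START_YEAR = 1993  # assumed start of the measurement record

for year in (2004, 2014, 2024):
    rise_cm = RATE_MM_PER_YEAR * (year - START_YEAR) / 10.0  # mm to cm
    print(f"by {year}: roughly {rise_cm:.0f} cm above early-1990s levels")
```

A dozen or so centimeters looks small against a 13-foot high point, but the freshwater lens described below floats near the surface, so even centimeters of rise can push salt into it.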
A lot has changed in Tuvalu as a result. The freshwater lens, a layer of groundwater that floats above denser seawater, has become salty and contaminated. Thatched roofs and freshwater wells are now a thing of the past. Each home has a water tank attached to a corrugated-iron roof by a gutter, and all the water for drinking, cooking, washing, and bathing comes from the rain; drinking water is boiled first. The wells have been repurposed as trash heaps.
At times, families have to make tough decisions about how to allocate water. Angelina, a mother of three, told me that during a drought a few years ago, her middle daughter, Siulai, was only a few months old. She, her husband, and their oldest daughter could swim in the sea to wash themselves and their clothes. “We only saved water to drink and cook,” she said. But her newborn’s skin was too delicate for the ocean; the salt water would give her a horrible rash. That meant Angelina had to choose between water for drinking and water for bathing her child.
The stories I heard about water and climate change in Tuvalu reflected a sharp division along generational lines. Tuvaluans my age—like Angelina—don’t see their future on the islands and are applying for visas to live in New Zealand. Older Tuvaluans see climate change as an act of God and told me they couldn’t imagine living anywhere else; they didn’t want to leave the bones of their ancestors, which were buried in their front yards. Some things just cannot be moved.
Organizations like the United Nations Development Programme are working to address climate change in Tuvalu by building seawalls and community water tanks. Ultimately, though, these adaptations seem only to be delaying the inevitable. It is likely that within my lifetime, many Tuvaluans will be forced to call somewhere else home.
Tuvalu shows how climate change exacerbates both food and water insecurity—and how that insecurity drives migration. I saw this in many other places. Mess with the amount of water available in one location, and people will move.
In Thailand I met a modern dancer named Sun who moved to Bangkok from the rural north. He relocated to the city in part to practice his art, but also to take refuge from unpredictable rain patterns. Farming in Thailand is governed by the seasonal monsoons, which dump rain, fill river basins, and irrigate crops from roughly May to September. Or at least they used to. When we spoke in late May 2016, it was dry in Thailand. The rains were delayed. Water levels in the country’s biggest dams had plummeted to less than 10% of their capacity—the worst drought in two decades.
“Right now it’s supposed to be the beginning of the rainy season, but there is no rain,” Sun told me. “How can I say it? I think the balance of the weather is changing. Some parts have a lot of rain, but some parts have none.” He leaned back in his chair, moving his hands like a fulcrum scale to express the imbalance. “That is the problem. The people who used to be farmers have to come to Bangkok because they want money and they want work,” he said. “There is no more work because of the weather.”
A family celebrates Nunavut Day near the waterfront in Igloolik, Nunavut, in 2018.
Migration to the city, in other words, is hastened by the failing rains. Any tech-driven climate solutions that fail to address climate migration—so central to the personal experience of Sun and many others in his generation around the world—will be at best incomplete, and at worst potentially dangerous. Solutions that address only one region, for example, could exacerbate migration pressures in another.
I heard stories about climate-driven food and water insecurity in the Arctic, too. Igloolik, Nunavut, 1,400 miles south of the North Pole, is a community of 1,700 people. Marie Airut, a 71-year-old elder, lives by the water. We spoke in her living room over cups of black tea.
“My husband died recently,” she told me. But when he was alive, they went hunting together in every season; it was their main source of food. “I’m not going to tell you what I don’t know. I’m going to tell you only the things that I have seen,” she said. In the 1970s and ’80s, the seal holes would open in late June, an ideal time for hunting baby seals. “But now if I try to go out hunting at the end of June, the holes are very big and the ice is really thin,” Marie told me. “The ice is melting too fast. It doesn’t melt from the top; it melts from the bottom.”
When the water is warmer, animals change their movement. Igloolik has always been known for its walrus hunting. But in recent years, hunters have had trouble reaching the animals. “I don’t think I can reach them anymore, unless you have 70 gallons of gas. They are that far now, because the ice is melting so fast,” Marie said. “It used to take us half a day to find walrus in the summer, but now if I go out with my boys, it would probably take us two days to get some walrus meat for the winter.”
Marie and her family used to make fermented walrus every year, “but this year I told my sons we’re not going walrus hunting,” she said. “They are too far.”
Forecasts include growing uncertainty and instability, and more polarization and populism
The US Intelligence Community (IC), a federation of 17 independent government agencies that carry out intelligence activities, has released a study on the state of the world in 2040.
And the future looks bleak: the study warns of political volatility and growing international competition, or even conflict.
The report, titled “Global Trends 2040 – A More Contested World,” attempts to analyze the key trends ahead and describes a range of possible scenarios.
It is the seventh report of its kind, published every four years by the National Intelligence Council since 1997.
It is not relaxing reading for anyone who is a political leader or international diplomat – or hopes to become one in the coming years.
First, the report focuses on the key factors that will drive change.
One of them is political volatility.
“In many countries, people are pessimistic about the future and increasingly distrustful of leaders and institutions that they see as unable or unwilling to deal with disruptive economic, technological, and demographic trends,” the report warns.
Tension between the US and China could divide the world, the report says
Vulnerable democracies
The study argues that people are gravitating toward like-minded groups and making greater and more varied demands of governments, at a time when those same governments are increasingly constrained in what they can do.
“This mismatch between governments’ abilities and the public’s expectations is likely to expand and lead to more political volatility, including growing polarization and populism within political systems, waves of activism and protest movements, and, in the most extreme cases, violence, internal conflict, or even state collapse,” the report says.
Unmet expectations, fueled by social media and technology, could put democracy at risk.
“Looking ahead, many democracies are likely to be vulnerable to erosion and even collapse,” the text warns, adding that these pressures will also affect authoritarian regimes.
The pandemic, a ‘major global disruption’
The report calls the current pandemic “the most significant, singular global disruption since World War II,” one that has fed divisions, accelerated existing changes, and challenged assumptions, including about how well governments can cope.
Analysts predicted a ‘great pandemic of 2023’ but did not link it to covid
The previous report, from 2017, raised the possibility of a “global pandemic in 2023” drastically reducing global travel in order to contain its spread.
The authors acknowledge, however, that they did not anticipate the emergence of covid-19, which they say has “shaken long-held assumptions about resilience and adaptation and created new uncertainties about the economy, governance, geopolitics, and technology.”
Climate and demographic change will also have a major impact on the world’s future, as will technology, which can be harmful but can also bring opportunities to those who use it effectively, and first.
Geopolitical competition
Internationally, the analysts expect the intensity of competition for global influence to reach its highest level since the Cold War over the next two decades, amid the continued weakening of the old order and the struggles of institutions such as the United Nations.
Non-governmental organizations, including religious groups and the so-called “superstar tech companies,” may also be able to build networks that compete with – or even sidestep – states.
The risk of conflict may rise, and the use of new weapons may become harder to prevent.
Jihadist terrorism is likely to continue, but the report warns that far-right and far-left terrorists promoting causes such as racism, environmentalism, and anti-government extremism could resurge in Europe, Latin America, and North America.
Such groups could use artificial intelligence to become more dangerous, or use augmented reality to create “virtual terrorist training camps.”
Competition between the US and China is at the heart of many of the differences among the scenarios – whether one of them pulls ahead, or the two compete on equal terms, or they divide the world into separate spheres of influence.
A 2004 report likewise predicted a caliphate emerging from the Middle East, much like the one the self-proclaimed Islamic State tried to create over the past decade – though that same study, looking ahead to 2020, failed to anticipate the competition with China that now dominates US security concerns.
The overall aim is to analyze possible futures rather than to get predictions right.
Stronger democracies or a ‘world adrift’?
There are some optimistic scenarios for 2040 – one of them is called “the renaissance of democracies.”
It involves the US and its allies harnessing technology and economic growth to tackle domestic and international challenges, while crackdowns by China and Russia (including in Hong Kong) stifle innovation and strengthen the appeal of democracy.
But others are more dispiriting.
The “world adrift” scenario imagines market economies never recovering from the covid pandemic, becoming deeply divided internally, and existing in an international system that is “directionless, chaotic, and volatile,” as international rules and institutions are ignored by countries, companies, and other groups.
One scenario, though, manages to combine pessimism with optimism.
“Tragedy and mobilization” envisions a world in the midst of a global catastrophe in the early 2030s, brought on by climate change, famine, and unrest – but one in which catastrophe leads, in turn, to a new global coalition, driven partly by social movements, to tackle those problems.
Of course, none of these scenarios may come to pass – or, more likely, some combination of them, or something entirely new, may emerge. The aim, the authors say, is to prepare for a range of possible futures – even if many of them seem far from optimistic.
Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.
Emily Mullin
August 25, 2021
In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together.
At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness?
Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.”
New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts.
But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.
In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun.
An impossible dream?
A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever?
From the beginning, both projects had critics.
EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with.
In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful.
Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.
Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)
While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.
In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project.
In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain.
“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.”
The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”
But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.
The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain.
New day
Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged.
And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.
The US effort has also shown some progress in its attempt to build new tools to study the brain. It has sped up the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.
While these are all important steps forward, though, they’re far from the initial grand ambitions.
Lasting legacy
We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.
When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system.
“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.”
Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity.
Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes.
For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.
“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.”
Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is.
“I have to be honest,” says Yuste. “We had higher hopes.”
Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.
The latest landmark climate science report goes much further than previous ones in providing estimates of how bad things might get as the planet heats up, even if a lack of data may mean it underestimates the perils.
Scientists have used the seven years since the previous assessment report of the Intergovernmental Panel on Climate Change (IPCC) to narrow the uncertainties around major issues, such as how much the planet will warm if we double atmospheric levels of carbon dioxide and other greenhouse gases.
While temperatures have risen largely in lockstep with rising CO2, this IPCC report examines in much more detail the risks of so-called abrupt changes, in which relatively stable systems shift suddenly, and probably irreversibly, to a new state.
Michael Mann, director of Pennsylvania State University’s Earth System Science Center and one of the world’s most prominent climate researchers, says the models are not capturing all the risks as the climate heats up.
Running AMOC
Perhaps the most prominent of these threats is a possible stalling of the Atlantic Meridional Overturning Circulation (AMOC). Often conflated with the Gulf Stream, which forms part of it, the AMOC carries tropical water north from the Caribbean, keeping northern Europe much warmer than its latitude might otherwise suggest; a slowdown or shutdown would threaten massive disruptions.
“Where the models have underestimated the impact is with projections of ice melt, the AMOC, and – I argue in my own work – the uptick on extreme weather events,” Professor Mann tells the Herald and The Age.
Stefan Rahmstorf, head of research at the Potsdam Institute for Climate Impact Research, agrees that climate models have not done a good job of reproducing the so-called cold blob that is forming where melting Greenland ice is cooling the subpolar Atlantic.
Breaking up: The US Coast Guard Icebreaker Healy on a research cruise in the Chukchi Sea of the Arctic Ocean. Credit:AP
If they are not picking that blob up, “should we trust those models on AMOC stability?” Professor Rahmstorf asks.
The IPCC’s language, too, doesn’t necessarily convey the nature of the threat, much of which will be detailed in the second AR6 report on the impacts of climate change, scheduled for release next February.
“Like just stating the AMOC collapse by 2100 is ‘very unlikely’ – that was in a previous report – it sounds reassuring,” Professor Rahmstorf said. “Now the IPCC says they have ‘medium confidence’ that it won’t happen by 2100, whatever that means.”
West Antarctica has enough ice to raise global sea levels by more than 3 metres if it melts.Credit:Ian Joughin
West Antarctic melt
Another potential tipping point is the possible disintegration of the West Antarctic ice sheet. Much of the sheet lies below sea level, and as the Southern Ocean warms it will melt from below, causing the ice to “flow” towards the sea in a process that is expected to be self-sustaining.
This so-called marine ice sheet instability is identified in the IPCC report as likely resulting in ice mass loss under all emissions scenarios. There is also “deep uncertainty in projections for above 3 degrees of warming”, the report states.
Because the ice sheet contains enough water to lift sea levels by 3.3 metres, its fate matters. As Andrew Mackintosh, an ice expert at Monash University, says, understanding of it is limited: “We know more about the surface of Mars than the ice sheet bed under the ice.”
Permafrost not so permanent
Much has been made of the so-called “methane bomb” sitting under the permafrost in the northern hemisphere. With the Arctic warming at more than twice the pace of the globe overall, and experiencing heatwaves of increasing intensity and duration, it is not surprising that the IPCC has listed the release of so-called biogenic emissions from permafrost thaw among the potential tipping points.
These emissions could total up to 240 gigatonnes of CO2-equivalent which, if released, would add an unwanted warming boost.
The IPCC lists as “high” the probability of such releases during this century, adding there is “high confidence” that the process is irreversible at century scales.
“In some cases abrupt changes can occur in Earth System Models but don’t on the timescales of the projections (for example, an AMOC collapse),” said Peter Cox, a Professor of Climate System Dynamics at the UK’s University of Exeter. “In other cases the processes involved are not yet routinely included in ESMs [such as] CO2 and methane release from deep permafrost.”
“In the latter cases IPCC statements are made on the basis of the few studies available, and are necessarily less definitive,” he said.
Other risks
From the Amazon rainforest to the boreal forests of Russia and Canada, there is a risk of fire and pests that could trigger dieback and transform those regions.
Australia’s bush faces an increased risk of bad fire weather days right across the continent, the IPCC notes. How droughts, heatwaves and heavy rain and other extreme events will play out at a local level is also not well understood.
Ocean acidification and marine heatwaves also mean the world’s coral reefs will be much diminished at more than 1.5 degrees of warming. “You can kiss it goodbye as we know it,” says Sarah Perkins-Kirkpatrick, a climate researcher at the University of NSW.
Global monsoons, which affect billions of people including those on the Indian subcontinent, are likely to increase their rainfall in most parts of the world, the IPCC said.
Andy Pitman, director of the ARC Centre of Excellence for Climate Extremes, said policymakers need to understand that much is riding on these tipping points not being triggered as even one or two of them would have long-lasting and significant effects. “How lucky do you feel?” Professor Pitman says.
The biggest uncertainty
Christian Jakob, a Monash University climate researcher, said that while important uncertainties remain, the science is steadily narrowing most of those risks down.
Much harder to gauge, though, is which emissions path humans are going to take. The uncertainty over which of the five scenarios, ranging from low to high emissions, we will end up following is “much larger than the uncertainty we have in the science,” Professor Jakob said.
The word has been out for decades: We were born on a damaged planet careening toward environmental collapse. Yet our intellects are poorly equipped to grasp the scale of the Earth’s ecological death spiral. We strain to picture how, in just a few decades, climate change may displace entire populations. We struggle to envision the fate of plastic waste that will outlast us by centuries. We fail to imagine our descendants inhabiting an exhausted Earth worn out from resource extraction and devoid of biodiversity. We lack frames of reference in our everyday lives for thinking about nuclear waste’s multimillennial timescales of radioactive hazard.
I am an anthropologist who studies how societies hash out relationships between living communities of the present and unborn communities imagined to inhabit the future. Studying how a community relates to the passage of time, I’ve learned, can offer a window into its values, worldviews, and lifeways.
From 2012 to 2014, I conducted 32 months of anthropological fieldwork exploring how Finland’s nuclear energy waste experts grappled with Earth’s radically long-term future. These experts routinely dealt with long-lived radionuclides such as uranium-235, which has a half-life of over 700 million years. They worked with the nuclear waste management company Posiva to help build a final disposal facility approximately 450 meters below the islet of Olkiluoto in the Gulf of Bothnia in the Baltic Sea. If all goes according to plan, this facility will, in the mid-2020s, become the world’s first operating deep geologic repository for spent nuclear fuel.
To assess the Olkiluoto repository’s long-term durability, these experts developed a “safety case” forecasting geological, hydrological, and ecological events that could potentially occur in Western Finland over the coming tens of thousands — or even hundreds of thousands — of years. From their efforts emerged visions of distant future glaciations, climate changes, earthquakes, floods, human and animal population changes, and more. These forecasts became the starting point for a series of “mental time travel” exercises that I incorporated into my book, “Deep Time Reckoning.”
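The half-life arithmetic underscores why the safety case has to reckon across such spans. Below is a minimal sketch in Python; the checkpoint years are arbitrary, and the half-life is the commonly cited value of roughly 704 million years, not a figure from this essay.

```python
# Fraction of uranium-235 surviving after t years, using the standard
# radioactive decay relation N(t)/N0 = 0.5 ** (t / half_life).
HALF_LIFE_U235_YEARS = 7.04e8  # commonly cited approximate value

for years in (1e4, 1e5, 1e6):
    fraction = 0.5 ** (years / HALF_LIFE_U235_YEARS)
    print(f"after {years:.0e} years, {fraction:.4%} of the U-235 remains")
```

Even at the farthest horizons the safety case models, essentially all of the uranium-235 is still radioactive; containment, not decay, has to do the work.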
Stretching the mind across time — even in the most speculative ways — can help us become more responsible planetary stewards: It can help endow us with the time literacy necessary for tackling long-term challenges such as biodiversity loss, microplastics accumulation, climate change, antibiotic resistance, asteroid impacts, sustainable urban planning, and more. This can not only make us feel more at home in pondering our planet’s pasts and futures. It can also draw us to imagine the world from the perspective of future human and non-human communities — fostering empathy across generations.
5710 CE. A tired man lounges on a sofa. He lives in a small wooden house in a region once called Eurajoki, Finland. He works at a local medical center. Today is his day off. He’s had a long day in the forest. He hunted moose and deer and picked lingonberries, mushrooms, and bilberries. He now sips water, drawn from a village well, from a wooden cup. His husband brings him a dinner plate. On it are fried potatoes, cereal, boiled peas, and beef. All the food came from local farms. The cattle were watered at a nearby river. The crops were watered by irrigation channels flowing from three local lakes.
The man has no idea that, more than 3,700 years ago, safety case biosphere modelers used 21st-century computer technologies to reckon everyday situations like his. He does not know that they once named the lakes around him — which formed long after their own deaths — “Liiklanjärvi,” “Tankarienjärvi,” and “Mäntykarinjärvi.” He is unaware of Posiva’s ancient determination that technological innovation and cultural habits are nearly impossible to predict even decades in advance. He is unaware that Posiva, in response, instructed its modelers to pragmatically assume that Western Finland’s populations’ lifestyles, demographic patterns, and nutritional needs will not change much over the next 10,000 years. He does not know the safety case experts inserted, into their models’ own parameters, the assumption that he and his neighbors would eat only local food.
Yet the hunter’s life is still entangled with the safety case experts’ work. If they were successful, then the vegetables, meat, fruit, and water before him should have only a tiny chance of containing even tiny traces of radionuclides from 20th-century nuclear power plants.
12020 CE. A solitary farmer looks out over her pasture, surrounded by a green forest of heath trees. She lives in a sparse land once called Finland, on a fertile island plot once called Olkiluoto. The area is an island no longer. What was once a coastal bay is now dotted with small lakes, peat bogs, and mires with white sphagnum mosses and grassy sedge plants. The Eurajoki and Lapijoki Rivers drain out into the sea. When the farmer goes fishing at the lake nearby, she catches pike. She watches a beaver swim about. Sometimes she feels somber. She recalls the freshwater ringed seals that once shared her country before their extinction.
The woman has no idea that, deep beneath her feet, lies an ancestral deposit of copper, iron, clay, and radioactive debris. This is a highly classified secret — leaked to the public several times over the millennia, but now forgotten. Yet even the government’s knowledge of the burial site is poor. Most records were destroyed in a global war in the year 3112. It was then that ancient forecasts of the site, found in the 2012 safety case report “Complementary Considerations,” were lost to history.
But the farmer does know the mythical stories of Lohikäärme: a dangerous, flying, salmon-colored venomous snake that kills anyone who dares dig too close to his underground cave. She and the other farmers in the area grow crops of peas, sugar beet, and wheat. They balk at the superstitious fools who tell them the monster living beneath their feet is real.
35,012 CE. A tiny microbe floats in a large, northern lake. It does not know that the clay, silt, and mud floor below it is gaining elevation, little by little, year after year. It is unaware that, 30 millennia ago, the lake was a vast sea. Dotted with sailboats, cruise and cargo ships, it was known by humans as the Baltic. Watery straits, which connected the Baltic Sea to the North Sea, had risen above the water thousands of years ago. Denmark and Sweden fused into a single landmass. The seafloor was decompressing from the Weichselian glaciation — an enormous sheet of ice that pressed down on the land during a previous ice age.
After the last human died, the landmass kept on rising. Its uplift was indifferent to human extinction. It was indifferent to how, in 2013 CE, an anthropologist and a safety case expert sat chatting in white chairs in Ravintola Rytmi: a café in Helsinki. There, the safety case expert relayed his projection that, by 52,000 CE, there would no longer be water separating Turku, Finland, and Stockholm, Sweden. At that point, one could walk from one city to the other on foot. The expert reckoned that, to the north — between Vaasa, Finland, and Umeå, Sweden — one would someday find a waterfall with the planet’s largest deluge of flowing water. The waterfall could be found at the site of a once-submerged sea shelf.
The microbe, though, does not know or care about Vaasa, Umeå, Denmark, long-lost boats, safety case reports, or Helsinki’s past dining options. It has no concept of them. Their significances died with the humans. Nor does the microbe grasp the suffering they faced when succumbing to Anthropocene collapse. Humans’ past technological feats, grand civilizations, passion projects, intellectual triumphs, wartime sacrifices, and personal struggles are now moot. And yet, the radiological safety of the microbe’s lake’s waters still hinges on the work of a handful of human safety case experts who lived millennia ago. Thinking so far ahead, these experts never lived to see whether their deep time forecasts were accurate.
We do not, of course, live in these imagined worlds. In this sense, they are unreal — merely fictions. However, our capacities to envision potential futures, and to feel empathy for those who may inhabit them, are very real. Depictions of tomorrow can have powerful, concrete effects on the world today. This is why deep time thought experiments are not playful games, but serious acts of intellectual problem-solving. It is why the safety case experts’ models of far future nuclear waste risks are uniquely valuable, even if they are, at the end of the day, mere approximations.
Yet pondering distant future Earths can also help us take a step back from our everyday lives — enriching our imaginations by transporting our minds to different places and times. Corporate coaches have recommended taking breaks from our familiar thinking patterns to experience the world in new ways and overcome mental blocks. Cognitive scientists have shown how creativity can be sparked by perceiving “something one has not seen before (but that was probably always there).”
Putting aside a few minutes each day for long-termist, planetary imagination can enrich us with greater mental dexterity in navigating between multiple, interacting timescales. This can cultivate more longsighted empathy for landscapes, people, and other organisms across decades, centuries, and millennia. As the global ecological crisis takes hold, embracing planetary empathy will prove essential to our collective survival.
—
Vincent Ialenti is a Research Fellow at The University of Southern California and The Berggruen Institute. His recent book, “Deep Time Reckoning,” is an anthropological study of how Finland’s nuclear waste repository experts grappled with distant future ecosystems and the limits of human knowledge.
As the Intergovernmental Panel on Climate Change (IPCC) released its Sixth Assessment Report, summarized nicely on these pages by Bob Henson, much of the associated media coverage carried a tone of inevitable doom.
These proclamations of unavoidable adverse outcomes center around the fact that in every scenario considered by IPCC, within the next decade average global temperatures will likely breach the aspirational goal set in the Paris climate agreement of limiting global warming to 1.5 degrees Celsius (2.7 degrees Fahrenheit) above pre-industrial temperatures. The report also details a litany of extreme weather events like heatwaves, droughts, wildfires, floods, and hurricanes that will all worsen as long as global temperatures continue to rise.
While United Nations Secretary-General António Guterres rightly called the report a “code red for humanity,” tucked into it are details illustrating that if – BIG IF – top-emitting countries respond to the IPCC’s alarm bells with aggressive efforts to curb carbon pollution, the worst climate outcomes remain avoidable.
The IPCC’s future climate scenarios
In the Marvel film Avengers: Infinity War, the Dr. Strange character goes forward in time to view 14,000,605 alternate futures to see all the possible outcomes of the Avengers’ coming conflict. Lacking the fictional Time Stone used in this gambit, climate scientists instead ran hundreds of simulations of several different future carbon emissions scenarios using a variety of climate models. Like Dr. Strange, climate scientists’ goal is to determine the range of possible outcomes given different actions taken by the protagonists: in this case, various measures to decarbonize the global economy.
The scenarios considered by IPCC are called Shared Socioeconomic Pathways (SSPs). The best-case climate scenario, called SSP1, involves a global shift toward sustainable management of global resources and reduced inequity. The next scenario, SSP2, is more of a business-as-usual path with slow and uneven progress toward sustainable development goals and persisting income inequality and environmental degradation. SSP3 envisions insurgent nationalism around the world with countries focusing on their short-term domestic best interests, resulting in persistent and worsening inequality and environmental degradation. Two more scenarios, SSP4 and SSP5, consider even greater inequalities and fossil fuel extraction, but seem at odds with an international community that has agreed overwhelmingly to aim for the Paris climate targets.
The latest IPCC report’s model runs simulated two SSP1 scenarios that would achieve the Paris targets of limiting global warming to 1.5 and 2°C (2.7 and 3.6°F); one SSP2 scenario in which temperatures approach 3°C (5.4°F) in the year 2100; an SSP3 scenario with about 4°C (7.2°F) global warming by the end of the century; and one SSP5 ‘burn all the fossil fuels possible’ scenario resulting in close to 5°C (9°F), again by 2100.
Projected global average surface temperature change in each of the five SSP scenarios. (Source: IPCC Sixth Assessment Report)
The report’s SSP3-7.0 pathway (the latter number represents the eventual global energy imbalance caused by the increased greenhouse effect, in watts per square meter), is considered by many experts to be a realistic worst-case scenario, with global carbon emissions continuing to rise every year throughout the 21st century. Such an outcome would represent a complete failure of international climate negotiations and policies and would likely result in catastrophic consequences, including widespread species extinctions, food and water shortages, and disastrous extreme weather events.
Scenario SSP2-4.5 is more consistent with government climate policies that are currently in place. It envisions global carbon emissions increasing another 10% over the next decade before reaching a plateau that’s maintained until carbon pollution slowly begins to decline starting in the 2050s. Global carbon emissions approach but do not reach zero by the end of the century. Even in this unambitious scenario, the very worst climate change impacts might be averted, although the resulting climate impacts would be severe.
Most encouragingly, the report’s two SSP1 scenarios illustrate that the Paris targets remain within reach. To stay below the main Paris target of 2°C (3.6°F) warming, global carbon emissions in SSP1-2.6 plateau essentially immediately and begin to decline after 2025 at a modest rate of about 2% per year for the first decade, then accelerating to around 3% per year the next decade, and continuing along a path of consistent year-to-year carbon pollution cuts before reaching zero around 2075. The IPCC concluded that once global carbon emissions reach zero, temperatures will stop rising. Toward the end of the century, emissions in SSP1-2.6 move into negative territory as the IPCC envisions that efforts to remove carbon from the atmosphere via natural and technological methods (like sequestering carbon in agricultural soils and scrubbing it from the atmosphere through direct air capture) outpace overall fossil fuel emissions.
Meeting the aspirational Paris goal of limiting global warming to 1.5°C (2.7°F) in SSP1-1.9 would be extremely challenging, given that global temperatures are expected to breach this level within about a decade. This scenario similarly envisions that global carbon emissions peak immediately and that they decline much faster than in SSP1-2.6, at a rate of about 6% per year from 2025 to 2035 and 9% per year over the following decade, reaching net zero by around the year 2055 and becoming net negative afterwards.
Global carbon dioxide emissions (in billions of tons per year) from 2015 to 2100 in each of the five SSP scenarios. (Source: IPCC Sixth Assessment Report)
For perspective, global carbon emissions fell by about 6-7% in 2020 as a result of restrictions associated with the COVID-19 pandemic and are expected to rebound by a similar amount in 2021. As IPCC report contributor Zeke Hausfather noted, this scenario also relies on large-scale carbon sequestration technologies that currently do not exist, without which global emissions would have to reach zero a decade sooner.
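To see what those percent-per-year cuts imply in tonnage, here is a minimal sketch that simply compounds the decline rates described above for the first two decades of each SSP1 pathway; the starting level of about 40 GtCO2 per year and the 2025 start are assumptions for illustration, not IPCC figures.

```python
# Compound the annual emission-cut rates described above for the two
# SSP1 pathways. Starting level and start year are assumptions.
START_YEAR = 2025
START_EMISSIONS = 40.0  # GtCO2 per year, assumed

def project(emissions, phases):
    """Apply (n_years, annual_cut_fraction) phases in sequence."""
    year = START_YEAR
    for n_years, cut in phases:
        for _ in range(n_years):
            emissions *= 1.0 - cut
            year += 1
    return year, emissions

# SSP1-2.6: ~2%/yr cuts for a decade, then ~3%/yr for the next decade.
# SSP1-1.9: ~6%/yr cuts for a decade, then ~9%/yr for the next decade.
for name, phases in [("SSP1-2.6", [(10, 0.02), (10, 0.03)]),
                     ("SSP1-1.9", [(10, 0.06), (10, 0.09)])]:
    year, level = project(START_EMISSIONS, phases)
    print(f"{name}: about {level:.0f} GtCO2/yr left in {year} "
          f"({100 * level / START_EMISSIONS:.0f}% of the assumed start)")
```

Percentage cuts alone leave SSP1-2.6 at roughly 60% and SSP1-1.9 at roughly 20% of the starting level by 2045, which is why both pathways also rely on continued cuts and, later in the century, on carbon removal to reach and then cross zero.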
More warming means more risk
The new IPCC report details that, depending on the region, climate change has already worsened extreme heat, drought, fires, floods, and hurricanes, and those will only become more damaging and destructive as temperatures continue to rise. The IPCC’s 2018 “1.5°C Report” had detailed the differences in climate consequences in a 2°C vs. 1.5°C world, as summarized at this site by Bruce Lieberman.
Consider that in the current climate of just over 1°C (2°F) warmer than pre-industrial temperatures, 40 countries this summer alone have experienced extreme flooding, including more than a year’s worth of rain falling within 24 hours in Zhengzhou, China. Many regions have also experienced extreme heat, including the deadly Pacific Northwest heatwave and dangerously hot conditions during the Olympics in Tokyo. Siberia, Greece, Italy, and the US west coast are experiencing explosive wildfires, including the “truly frightening fire behavior” of the Dixie fire, which became the largest single wildfire in California’s recorded history. The IPCC report warned of “compound events” like heat exacerbating drought, which in turn fuels more dangerous wildfires, as is happening in California.
Western North America (WNA) and the Mediterranean (MED) regions are those for which climate scientists have the greatest confidence that human-caused global warming is exacerbating drought by drying out the soil. (Source: IPCC Sixth Assessment Report)
The southwestern United States and Mediterranean are also among the regions for which climate scientists have the greatest confidence that climate change will continue to increase drought risk and severity. (Source: IPCC Sixth Assessment Report)
The IPCC report notes that the low-emissions SSP1 scenarios “would lead to substantially smaller changes” in these sorts of climate impact drivers than the higher-emissions scenarios. It also points out that, with the world currently at around 1°C of warming, the added intensity of extreme weather relative to today would be twice as large at 2°C (1°C hotter than today) as at 1.5°C (0.5°C hotter than today), and four times as large at 3°C (2°C hotter than today). For example, what was an extreme once-in-50-years heat wave in the late 1800s now occurs once per decade; that would rise to almost twice per decade at 1.5°C, and nearly three times per decade at 2°C of global warming.
The increasing frequency and intensity of what used to be 1-in-50-year extreme heat as global temperatures rise. (Source: IPCC Sixth Assessment Report)
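The “twice as bad ... quadruple as bad” framing in the preceding paragraph is, at bottom, linear scaling of warming above today’s baseline; the sketch below makes the arithmetic explicit, with today’s roughly 1°C of warming taken from the article as the reference point.

```python
# Extra warming above today's ~1°C baseline, and how it compares with
# the 0.5°C of extra warming in a 1.5°C world. Baseline is approximate.
BASELINE_C = 1.0  # roughly today's warming above pre-industrial levels

for target in (1.5, 2.0, 3.0):
    extra = target - BASELINE_C
    print(f"a {target}°C world is {extra:.1f}°C hotter than today: "
          f"{extra / 0.5:.0f}x the extra warming of a 1.5°C world")
```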
Climate’s fate has yet to be written
At the same time, there is no tipping point temperature at which it becomes “too late” to curb climate change and its damaging consequences. Every additional bit of global warming above current temperatures will result in increased risks of worsening extreme weather of the sorts currently being experienced around the world. Achieving the aspirational 1.5°C Paris target may be politically infeasible, but most countries (137 total) have either committed to or are in the process of setting a target for net zero emissions by 2050 (including the United States) or 2060 (including China).
That makes the SSP1 scenarios and limiting global warming to less than 2°C a distinct possibility, depending on how successful countries are at following through with decarbonization plans over the coming three decades. And with its proposed bipartisan infrastructure and budget reconciliation legislative plans – for which final enactment of each remains another big IF – the United States could soon implement some of the bold investments and policies necessary to set the world’s second-largest carbon polluter on a track consistent with the Paris targets.
Again and again, assessment after assessment, the IPCC has already made it clear. Climate change puts at risk every aspect of human life as we know it … We are already starting to experience those risks today; but we know what we need to do to avoid the worst future impacts. The difference between a fossil fuel versus a clean energy future is nothing less than the future of civilization as we know it.
Back to the Avengers: They had only one chance in 14 million to save the day, and they succeeded. Time is running short, but policymakers’ odds of meeting the Paris targets remain much better than that. There are no physical constraints playing the role of Thanos in our story; only political barriers stand between humanity and a prosperous clean energy future, although those can sometimes be the most difficult types of barriers to overcome.
The new IPCC report is “a code red for humanity”, says UN Secretary-General António Guterres.
Established in 1988 by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO), the Intergovernmental Panel on Climate Change (IPCC) assesses climate change science. Its new report is a warning sign for policymakers all over the world.
In this picture taken on 26 October 2014, Peia Kararaua, 16, swims in the flooded area of Aberao village in Kiribati. Kiribati is one of the countries worst hit by sea-level rise: high tides inundate many villages, making them uninhabitable. Image credit: UNICEF/Sokhin
This was the first time the approval meeting for the report was conducted online. The report’s 234 authors, from all over the world, logged 186 hours working together to get it released.
For the first time, the report offers an interactive atlas for people to see what has already happened and what may happen in the future to where they live.
“This report tells us that recent changes in the climate are widespread, rapid and intensifying, unprecedented in thousands of years,” said IPCC Vice-Chair Ko Barrett.
UNEP Executive Director Inger Andersen noted that scientists have been issuing these messages for more than three decades, but the world hasn’t listened.
Here are the most important takeaways from the report:
Humans are to blame
Human activity is the cause of climate change, and the report states this as an unequivocal fact. All the warming since pre-industrial times has been generated by the burning of fossil fuels such as coal, oil, wood, and natural gas.
Global temperatures have already risen by 1.1 degrees Celsius since the 19th century. They are at their highest in over 100,000 years, and only a fraction of that increase has come from natural forces.
Michael Mann told the Independent that the effects of climate change will be felt in all corners of the world and will worsen, especially since “the IPCC has connected the dots on climate change and the increase in severe extreme weather events… considerably more directly than previous assessments.”
We will overshoot the 1.5°C mark
According to the report’s scenarios, which range from the highly optimistic to the reckless, even if we do everything right and start reducing emissions now, we will still overshoot the 1.5°C mark in the early 2030s. Under the most optimistic scenario, temperatures would then drop back to around 1.4°C by the end of the century.
Control emissions, Earth will do the rest
According to the report, if we start working to bring our emissions under control, we will be able to limit warming, even if we temporarily overshoot the 1.5°C mark.
The changes we are living through are unprecedented; however, they are reversible to a certain extent, and nature will take a long time to heal. We can help by reducing our greenhouse gas (GHG) emissions. While we might see some benefits quickly, “it could take 20-30 years to see global temperatures stabilise,” says the IPCC.
Sea level rise
Global oceans have risen about 20 centimetres (eight inches) since 1900, and the rate of increase has nearly tripled in the last decade. Crumbling and melting ice sheets atop Antarctica and, especially, Greenland have replaced glacier melt as the main driver.
If global warming is capped at 2°C, the ocean watermark will go up about half a metre over the 21st century. It will continue rising to nearly two metres by 2300 — twice the amount predicted by the IPCC in 2019.
Because of uncertainty over ice sheets, scientists cannot rule out a total rise of two metres by 2100 in a worst-case emissions scenario.
CO2 is at an all-time high
CO2 levels were greater in 2019 than they had been in “at least two million years.” Levels of methane and nitrous oxide, the second- and third-biggest contributors to warming respectively, were higher in 2019 than at any point in “at least 800,000 years,” reported the Independent.
Control methane
The report includes more data than ever before on methane (CH4), the second most important greenhouse gas after CO2, and warns that failure to curb emissions could undermine Paris Agreement goals.
Human-induced sources are roughly divided between leaks from natural gas production, coal mining and landfills on one side, and livestock and manure handling on the other.
CH4 lingers in the atmosphere for only a fraction as long as CO2, but is far more efficient at trapping heat. CH4 levels are at their highest in at least 800,000 years.
Natural allies are weakened
Since about 1960, forests, soil and oceans have absorbed 56 percent of all the CO2 humanity has released into the atmosphere — even as those emissions have increased by half. Without nature’s help, Earth would already be a much hotter and less hospitable place.
But these allies in our fight against global heating — known in this role as carbon sinks — are showing signs of saturation, and the percentage of human-induced carbon they soak up is likely to decline as the century unfolds.
Suck it out
The report suggests that warming could be brought back down via “negative emissions”: cooling the planet by sucking carbon out of the atmosphere and sequestering it. Small-scale studies have attempted this, but the technology is not yet mature. The panel says such removals could begin around the middle of this century but does not explain how, and many scientists are skeptical about their feasibility.
Cities will bear the brunt
Experts warn that the impact of some elements of climate change, like heat, floods and sea-level rise in coastal areas, may be exacerbated in cities. Furthermore, IPCC experts warn that low-probability outcomes, like an ice sheet collapse or rapid changes in ocean circulation, cannot be ruled out.
In your opinion, what has happened over the past hundred years to the total number of deaths caused by hurricanes, floods, droughts, heat waves, and other climate disasters? Pick one of the following alternatives:
a) It increased by more than 800%
b) It increased by about 50%
c) It stayed constant
d) It decreased by about 50%
e) It decreased by more than 80%
Since the world’s population grew from 1.8 billion in 1921 to 8 billion in 2021, it would be reasonable to bet on answers B or C, since more people should mean more victims. Many readers probably chose the first option, given the frightening news about this week’s IPCC report.
The correct alternative, however, is the last one. Deaths from natural disasters have fallen 87% from the 1920s to the 2010s, according to data compiled by Our World in Data.
They dropped from 540,000 per year to 68,000. Relative to population, the death rate peaked at 63 deaths per 100,000 inhabitants in 1921 and at 176 in 1931. Today it stands at 0.15.
These numbers point to two interesting paradoxes in the relationship between humanity and the climate. The first recalls Spencer’s Paradox, named for Herbert Spencer, who observed that “the degree of public concern about a problem or social phenomenon varies inversely with its incidence.”
Just as the English took notice of poverty when it was beginning to decline, during the Industrial Revolution, humanity has become terrified of the climate’s misfortunes precisely after learning to survive them.
The second paradox: at the same time that we pump a great deal (a very great deal) of carbon into the atmosphere and cause a serious greenhouse problem, we have also become less vulnerable to nature. In fact, protecting ourselves from the climate was one of the main reasons we polluted so much.
Consider construction. Producing cement consists, roughly speaking, of burning limestone and releasing carbon dioxide.
If the cement industry were a country, it would be the third-largest emitter of greenhouse gases. But this polluting industry allowed people to move out of wattle-and-daub or wooden houses and sleep sheltered in safer structures.
Famine caused by drought, the leading cause of death from natural disasters in the 1920s, was solved by the invention of chemical fertilizers and irrigation systems and by the construction of dams and sanitation networks.
All these activities caused global warming, but they are nonetheless great human achievements, ones that deserve to be celebrated and extended to the poor who still risk dying in hurricanes, droughts, or floods.
Will the historic decline in deaths from natural disasters reverse in the coming years, fulfilling the apocalyptic prophecies of Greta Thunberg, for whom “billions of people will die if we do not take urgent action”?
The climate activist Michael Shellenberger, author of the brilliant “Apocalypse Never,” which will be published in Brazil this month by LVM, thinks not.
I intend to say more about Shellenberger’s book in future columns, but here is one of its arguments in advance: environmental alarmism underestimates the human capacity to adapt and to solve problems.
“The Netherlands, for example, became a wealthy nation even though a third of its land sits below sea level, including areas that are no less than seven metres below the sea,” he says.
The fight against global warming does not need activists obsessed with the apocalypse (who usually scorn obvious solutions, such as nuclear power). It needs technology, innovators, people who can give humanity more comfort and safety while interfering with nature less and less.
Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?
Bobbie Johnson
February 26, 2020
Inez Fung
Professor of atmospheric science, University of California, Berkeley
Prediction for 2030: We’ll light up the world… safely
I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”
I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.
Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: the cloud data are still not fully utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have got over half a million years of precipitation records all over Asia.
I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.
Anne Lise Kjaer
Futurist, Kjaer Global, London
Prediction for 2030: Adults will learn to grasp new ideas
As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.
When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.
A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.
So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?
What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?
Philip Tetlock
Coauthor of Superforecasting and professor, University of Pennsylvania
Prediction for 2030: We’ll get better at being uncertain
At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction—but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.
Forecasts have the potential to be either self-fulfilling or self-negating—Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X conditional on our doing this or doing that?
What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.
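The scorekeeping Tetlock alludes to is typically done with the Brier score, the mean squared difference between stated probabilities and realized outcomes. Here is a minimal sketch in Python; the three forecasts are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts (0 to 1)
    and binary outcomes (0 or 1). Lower is better; 0 is a perfect score."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# A hypothetical forecaster said 80%, 60%, and 10% on three questions;
# the first two events happened, the third did not.
print(f"{brier_score([0.8, 0.6, 0.1], [1, 1, 0]):.3f}")  # 0.070
```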
Keith Chen
Associate professor of economics, UCLA
Prediction for 2030: We’ll be more—and less—private
When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.
These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?
I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.
Annalee Newitz
Science fiction and nonfiction author, San Francisco
Prediction for 2030: We’re going to see a lot more humble technology
Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.
Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.
There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.
It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.
I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.
We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.
Finale Doshi-Velez
Associate professor of computer science, Harvard
Prediction for 2030: Humans and machines will make decisions together
In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.
Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.
There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.
We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.
I’m excited about combining humans and AI to make predictions. Let’s say your AI is right only 70% of the time, and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
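A back-of-envelope calculation shows both the appeal and the difficulty. The sketch below assumes the human’s and the AI’s errors are statistically independent, which real human-AI pairs rarely satisfy; it is an illustration of the idea, not the lab’s method.

```python
# Best-case accuracy from fusing two independent predictors that are each
# right 70% of the time (independence is the illustrative assumption here).
p_human_correct = 0.70
p_ai_correct = 0.70

p_both_wrong = (1 - p_human_correct) * (1 - p_ai_correct)  # 0.09
oracle_fused_accuracy = 1 - p_both_wrong                   # 0.91

print(f"both wrong: {p_both_wrong:.0%}; "
      f"best-case fused accuracy: {oracle_fused_accuracy:.0%}")
# The 91% ceiling is reachable only if, whenever the two disagree, the fusion
# rule can tell which one to trust. That is exactly the hard, open question.
```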
All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.
Abdoulaye Banire Diallo
Professor, director of the bioinformatics lab, University of Quebec at Montreal
Prediction for 2030: Machine-based forecasting will be regulated
When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecasting—to help decision makers like the farmer to have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.
With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.
Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.
Death Table from Tuberculosis in the United States, prepared for the International Congress on Tuberculosis, September 21 to October 12, 1908. Image: U.S. National Library of Medicine
Contrary to hopes for a tidy conclusion to the COVID-19 pandemic, history shows that outbreaks of infectious disease often have much murkier outcomes—including simply being forgotten about, or dismissed as someone else’s problem.
Recent history tells us a lot about how epidemics unfold, how outbreaks spread, and how they are controlled. We also know a good deal about beginnings—those first cases of pneumonia in Guangdong marking the SARS outbreak of 2002–3, the earliest instances of influenza in Veracruz leading to the H1N1 influenza pandemic of 2009–10, the outbreak of hemorrhagic fever in Guinea sparking the Ebola pandemic of 2014–16. But these stories of rising action and a dramatic denouement only get us so far in coming to terms with the global crisis of COVID-19. The coronavirus pandemic has blown past many efforts at containment, snapped the reins of case detection and surveillance across the world, and saturated all inhabited continents. To understand possible endings for this epidemic, we must look elsewhere than the neat pattern of beginning and end—and reconsider what we mean by the talk of “ending” epidemics to begin with.
The social lives of epidemics show them to be not just natural phenomena but also narrative ones: deeply shaped by the stories we tell about their beginnings, their middles, their ends.
Historians have long been fascinated by epidemics in part because, even where they differ in details, they exhibit a typical pattern of social choreography recognizable across vast reaches of time and space. Even though the biological agents of the sixth-century Plague of Justinian, the fourteenth-century Black Death, and the early twentieth-century Manchurian Plague were almost certainly not identical, the epidemics themselves share common features that link historical actors to present experience. “As a social phenomenon,” the historian Charles Rosenberg has argued, “an epidemic has a dramaturgic form. Epidemics start at a moment in time, proceed on a stage limited in space and duration, following a plot line of increasing and revelatory tension, move to a crisis of individual and collective character, then drift towards closure.” And yet not all diseases fit so neatly into this typological structure. Rosenberg wrote these words in 1992, nearly a decade into the North American HIV/AIDS epidemic. His words rang true about the origins of that disease—thanks in part to the relentless, overzealous pursuit of its “Patient Zero”—but not so much about its end, which was, as for COVID-19, nowhere in sight.
In the case of the new coronavirus, we have now seen an initial fixation on origins give way to the question of endings. In March The Atlantic offered four possible “timelines for life returning to normal,” all of which depended on the biological basis of a sufficient amount of the population developing immunity (perhaps 60 to 80 percent) to curb further spread. This confident assertion derived from models of infectious outbreaks formalized by epidemiologists such as W. H. Frost a century earlier. If the world can be divided into those susceptible (S), infected (I), and resistant (R) to a disease, and a pathogen has a reproductive number R0 (pronounced R-naught) describing how many susceptible people can be infected by a single infected person, the end of the epidemic begins when the proportion of susceptible people drops below the reciprocal, 1/R0. When that happens, one person will infect, on average, fewer than one other person with the disease.
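In code, that threshold logic amounts to a one-liner. Below is a minimal sketch of the textbook formula with illustrative R0 values; note that R0 of 2.5 and 5 bracket the 60 to 80 percent immunity range cited above.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Share of the population that must be immune before each infected
    person infects, on average, fewer than one other person: 1 - 1/R0."""
    return 1 - 1 / r0

for r0 in (1.5, 2.5, 5.0):
    print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} of the population")

# R0 = 2.5 implies ~60% and R0 = 5.0 implies ~80%, spanning the
# "perhaps 60 to 80 percent" range invoked for COVID-19.
```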
These formulas reassure us, perhaps deceptively. They conjure up a set of natural laws that give order to the cadence of calamities. The curves produced by models, which in better times belonged to the arcana of epidemiologists, are now common figures in the lives of billions of people learning to live with contractions of civil society promoted in the name of “bending,” “flattening,” or “squashing” them. At the same time, as David Jones and Stefan Helmreich recently wrote in these pages, the smooth lines of these curves are far removed from jagged realities of the day-to-day experience of an epidemic—including the sharp spikes in those “reopening” states where modelers had predicted continued decline.
In other words, epidemics are not merely biological phenomena. They are inevitably framed and shaped by our social responses to them, from beginning to end (whatever that may mean in any particular case). The question now being asked of scientists, clinicians, mayors, governors, prime ministers, and presidents around the world is not merely “When will the biological phenomenon of this epidemic resolve?” but rather “When, if ever, will the disruption to our social life caused in the name of coronavirus come to an end?” As peak incidence nears, and in many places appears to have passed, elected officials and think tanks from opposite ends of the political spectrum provide “roadmaps” and “frameworks” for how an epidemic that has shut down economic, civic, and social life in a manner not seen globally in at least a century might eventually recede and allow resumption of a “new normal.”
To understand possible endings for this epidemic, we must look elsewhere than the neat pattern of beginning and end—and reconsider what we mean by the talk of “ending” epidemics to begin with.
These two faces of an epidemic, the biological and the social, are closely intertwined, but they are not the same. The biological epidemic can shut down daily life by sickening and killing people, but the social epidemic also shuts down daily life by overturning basic premises of sociality, economics, governance, discourse, interaction—and killing people in the process as well. There is a risk, as we know from both the Spanish influenza of 1918–19 and the more recent swine flu of 2009–10, of relaxing social responses before the biological threat has passed. But there is also a risk in misjudging a biological threat based on faulty models or bad data and in disrupting social life in such a way that the restrictions can never properly be taken back. We have seen in the case of coronavirus the two faces of the epidemic escalating on local, national, and global levels in tandem, but the biological epidemic and the social epidemic don’t necessarily recede on the same timeline.
For these sorts of reasons we must step back and reflect in detail on what we mean by ending in the first place. The history of epidemic endings has taken many forms, and only a handful of them have resulted in the elimination of a disease.
History reminds us that the interconnections between the timing of the biological and social epidemics are far from obvious. In some cases, like the yellow fever epidemics of the eighteenth century and the cholera epidemics of the nineteenth century, the dramatic symptomatology of the disease itself can make its timing easy to track. Like a bag of popcorn popping in the microwave, the tempo of visible case-events begins slowly, escalates to a frenetic peak, and then recedes, leaving a diminishing frequency of new cases that eventually are spaced far enough apart to be contained and then eliminated. In other examples, however, like the polio epidemics of the twentieth century, the disease process itself is hidden, often mild in presentation, threatens to come back, and ends not on a single day but over different timescales and in different ways for different people.
Campaigns against infectious diseases are often discussed in military terms, and one result of that metaphor is to suggest that epidemics too must have a singular endpoint. We approach the infection peak as if it were a decisive battle like Waterloo, or a diplomatic arrangement like the Armistice at Compiègne in November 1918. Yet the chronology of a single, decisive ending is not always true even for military history, of course. Just as the clear ending of a military war does not necessarily bring a close to the experience of war in everyday life, so too the resolution of the biological epidemic does not immediately undo the effects of the social epidemic. The social and economic effects of the 1918–19 pandemic, for example, were felt long after the end of the third and putatively final wave of the virus. While the immediate economic effect on many local businesses caused by shutdowns appears to have resolved in a matter of months, the broader economic effects of the epidemic on labor-wage relations were still visible in economic surveys in 1920, again in 1921, and in several areas as late as 1930.
The history of epidemic endings has taken many forms, and only a handful of them have resulted in the elimination of a disease.
And yet, like World War One with which its history was so closely intertwined, the influenza pandemic of 1918–19 appeared at first to have a singular ending. In individual cities the epidemic often produced dramatic spikes and falls in equally rapid tempo. In Philadelphia, as John Barry notes in The Great Influenza (2004), after an explosive and deadly rise in October 1918 that peaked at 4,597 deaths in a single week, cases suddenly dropped so precipitously that the public gathering ban could be lifted before the month was over, with almost no new cases in following weeks. A phenomenon whose destructive potential was limited by material laws, “the virus burned through available fuel, then it quickly faded away.”
As Barry reminds us, however, scholars have since learned to differentiate at least three different sequences of epidemics within the broader pandemic. The first wave blazed through military installations in the spring of 1918, the second wave caused the devastating mortality spikes in the summer and fall of 1918, and the third wave began in December 1918 and lingered long through the summer of 1919. Some cities, like San Francisco, passed through the first and second waves relatively unscathed only to be devastated by the third wave. Nor was it clear to those still alive in 1919 that the pandemic was over after the third wave receded. Even as late as 1922, a bad flu season in Washington State merited a response from public health officials to enforce absolute quarantine as they had during 1918–19. It is difficult, looking back, to say exactly when this prototypical pandemic of the twentieth century was really over.
Who can tell when a pandemic has ended? Today, strictly speaking, only the World Health Organization (WHO). The Emergency Committee of the WHO is responsible for the global governance of health and international coordination of epidemic response. After the SARS coronavirus pandemic of 2002–3, this body was granted sole power to declare the beginnings and endings of Public Health Emergencies of International Concern (PHEIC). While SARS morbidity and mortality—roughly 8,000 cases and 800 deaths in 26 countries—have been dwarfed by the sheer scale of COVID-19, the pandemic’s effect on national and global economies prompted revisions to the International Health Regulations in 2005, a body of international law that had remained unchanged since 1969. This revision broadened the scope of coordinated global response from a handful of diseases to any public health event that the WHO deemed to be of international concern and shifted from a reactive response framework to a proactive one based on real-time surveillance and detection and containment at the source rather than merely action at international borders.
This social infrastructure has important consequences, not all of them necessarily positive. Any time the WHO declares a public health event of international concern—and frequently when it chooses not to declare one—the event becomes a matter of front-page news. Since the 2005 revision, the group has been criticized both for declaring a PHEIC too hastily (as in the case of H1N1) or too late (in the case of Ebola). The WHO’s decision to declare the end of a PHEIC, by contrast, is rarely subject to the same public scrutiny. When an outbreak is no longer classified as an “extraordinary event” and is no longer seen to pose a risk of international spread, the PHEIC is considered not to be justified, leading to a withdrawal of international coordination. Once countries can grapple with the disease within their own borders, under their own national frameworks, the PHEIC is quietly de-escalated.
At their worst, epidemic endings are a form of collective amnesia, transmuting the disease that remains into merely someone else’s problem.
As the response to the 2014–16 Ebola outbreak in West Africa demonstrates, however, the act of declaring the end of a pandemic can be just as powerful as the act of declaring its beginning—in part because emergency situations can continue even after a return to “normal” has been declared. When WHO Director General Margaret Chan announced in March 2016 that the Ebola outbreak was no longer a public health event of international concern, international donors withdrew funds and care to the West African countries devastated by the outbreak, even as these struggling health systems continued to be stretched beyond their means by the needs of Ebola survivors. NGOs and virologists expressed concern that efforts to fund Ebola vaccine development would likewise fade without a sense of global urgency pushing research forward.
Part of the reason that the role of the WHO in proclaiming and terminating the state of pandemic is subject to so much scrutiny is that it can be. The WHO is the only global health body that is accountable to all governments of the world; its parliamentary World Health Assembly contains health ministers from every nation. Its authority rests not so much on its battered budget as its access to epidemic intelligence and pool of select individuals, technical experts with vast experience in epidemic response. But even though internationally sourced scientific and public health authority is key to its role in pandemic crises, WHO guidance is ultimately carried out in very different ways and on very different time scales in different countries, provinces, states, counties, and cities. One state might begin to ease up restrictions to movement and industry just as another implements more and more stringent measures. If each country’s experience of “lockdown” has already been heterogeneous, the reconnection between them after the PHEIC is ended will likely show even more variance.
So many of our hopes for the termination of the present PHEIC now lie in the promise of a COVID-19 vaccine. Yet a closer look at one of the central vaccine success stories of the twentieth century shows that technological solutions rarely offer resolution to pandemics on their own. Contrary to our expectations, vaccines are not universal technologies. They are always deployed locally, with variable resources and commitments to scientific expertise. International variations in research, development, and dissemination of effective vaccines are especially relevant in the global fight against epidemic polio.
The development of the polio vaccine is relatively well known, usually told as a story of an American tragedy and triumph. Yet while the polio epidemics that swept the globe in the postwar decades did not respect national borders or the Iron Curtain, the Cold War provided context for both collaboration and antagonism. Only a few years after the licensing of Jonas Salk’s inactivated vaccine in the United States, his technique became widely used across the world, although its efficacy outside of the United States was questioned. The second, live oral vaccine developed by Albert Sabin, however, involved extensive collaboration with Eastern European and Soviet colleagues. The success of the Soviet polio vaccine trials marked a rare landmark of Cold War cooperation; Basil O’Connor, president of the March of Dimes movement, speaking at the Fifth International Poliomyelitis Conference in 1960, proclaimed that “in search for the truth that frees man from disease, there is no cold war.”
Two faces of an epidemic, the biological and the social, are closely intertwined, but they are not the same.
Yet the differential uptake of this vaccine retraced the divisions of Cold War geography. The Soviet Union, Hungary, and Czechoslovakia were the first countries in the world to begin nationwide immunization with the Sabin vaccine, soon followed by Cuba, the first country in the Western Hemisphere to eliminate the disease. By the time the Sabin vaccine was licensed in the United States in 1963, much of Eastern Europe had done away with epidemics and was largely polio-free. The successful ending of this epidemic within the communist world was immediately held up as proof of the superiority of their political system.
Western experts who trusted the Soviet vaccine trials, including the Yale virologist and WHO envoy Dorothy Horstmann, nonetheless emphasized that their results were possible because of the military-like organization of the Soviet health care system. Yet these enduring concerns that authoritarianism itself was the key tool for ending epidemics—a concern reflected in current debates over China’s heavy-handed interventions in Wuhan this year—can also be overstated. The Cold War East was united not only by authoritarianism and heavy hierarchies in state organization and society, but also by a powerful shared belief in the integration of paternal state, biomedical research, and socialized medicine. Epidemic management in these countries combined an emphasis on prevention, easily mobilized health workers, top-down organization of vaccinations, and a rhetoric of solidarity, all resting on a health care system that aimed at access to all citizens.
Still, authoritarianism as a catalyst for controlling epidemics can be singled out and pursued with long-lasting consequences. Epidemics can be harbingers of significant political changes that go well beyond their ending, significantly reshaping a new “normal” after the threat passes. Many Hungarians, for example, have watched with alarm the complete sidelining of parliament and the introduction of government by decree at the end of March this year. The end of any epidemic crisis, and thus the end of the need for the significantly increased power of Viktor Orbán, would be determined by Orbán himself. Likewise, many other states, urging the mobilization of new technologies as a solution to end epidemics, are opening the door to heightened state surveillance of their citizens. The apps and trackers now being designed to follow the movement and exposure of people in order to enable the end of epidemic lockdowns can collect data and establish mechanisms that reach well beyond the original intent. The digital afterlives of these practices raise new and unprecedented questions about when and how epidemics end.
Like infectious agents on an agar plate, epidemics colonize our social lives and force us to learn to live with them, in some way or another, for the foreseeable future.
Although we want to believe that a single technological breakthrough will end the present crisis, the application of any global health technology is always locally determined. After its dramatic successes in managing polio epidemics in the late 1950s and early 1960s, the oral poliovirus vaccine became the tool of choice for the Global Polio Eradication Initiative in the late 1980s, as it promised an end to “summer fears” globally. But since vaccines are in part technologies of trust, ending polio outbreaks depends on maintaining confidence in national and international structures through which vaccines are delivered. Wherever that often fragile trust is fractured or undermined, vaccination rates can drop to a critical level, giving way to vaccine-derived polio, which thrives in partially vaccinated populations.
In Kano, Nigeria, for example, a ban on polio vaccination between 2000 and 2004 resulted in a new national polio epidemic that soon spread to neighboring countries. As late as December 2019 polio outbreaks were still reported in fifteen African countries, including Angola and the Democratic Republic of the Congo. Nor is it clear that polio can fully be regarded as an epidemic at this point: while polio epidemics are now a thing of the past for Hungary—and the rest of Europe, the Americas, Australia, and East Asia as well—the disease is still endemic to parts of Africa and South Asia. A disease once universally epidemic is now locally endemic: this, too, is another way that epidemics end.
Indeed, many epidemics have only “ended” through widespread acceptance of a newly endemic state. Consider the global threat of HIV/AIDS. From a strictly biological perspective, the AIDS epidemic has never ended; the virus continues to spread devastation through the world, infecting 1.7 million people and claiming an estimated 770,000 lives in the year 2018 alone. But HIV is not generally described these days with the same urgency and fear that accompanied the newly defined AIDS epidemic in the early 1980s. Like coronavirus today, AIDS at that time was a rapidly spreading and unknown emerging threat, splayed across newspaper headlines and magazine covers, claiming the lives of celebrities and ordinary citizens alike. Nearly forty years later it has largely become a chronic, endemic disease, at least in the Global North. Like diabetes, which claimed an estimated 4.9 million lives in 2019, HIV/AIDS became a manageable condition—if one had access to the right medications.
Those who are no longer directly threatened by the impact of the disease have a hard time continuing to attend to the urgency of an epidemic that has been rolling on for nearly four decades. Even in the first decade of the AIDS epidemic, activists in the United States fought tooth and nail to make their suffering visible in the face of both the Reagan administration’s dogged refusal to talk publicly about the AIDS crisis and the indifference of the press after the initial sensation of the newly discovered virus had become common knowledge. In this respect, the social epidemic does not necessarily end when biological transmission has ended, or even peaked, but rather when, in the attention of the general public and in the judgment of certain media and political elites who shape that attention, the disease ceases to be newsworthy.
Though we like to think of science as universal and objective, crossing borders and transcending differences, it is in fact deeply contingent upon local practices.
Polio, for its part, has not been newsworthy for a while, even as thousands around the world still live with polio with ever-decreasing access to care and support. Soon after the immediate threat of outbreaks passed, so did support for those whose lives were still bound up with the disease. For others, it became simply a background fact of life—something that happens elsewhere. The polio problem was “solved,” specialized hospitals were closed, fundraising organizations found new causes, and poster children found themselves in an increasingly challenging world. Few medical professionals are trained today in the treatment of the disease. As intimate knowledge of polio and its treatment withered away with time, people living with polio became embodied repositories of lost knowledge.
History tells us public attention is much more easily drawn to new diseases as they emerge rather than sustained over the long haul. Well before AIDS shocked the world into recognizing the devastating potential of novel epidemic diseases, a series of earlier outbreaks had already signaled the presence of emerging infectious agents. When hundreds of members of the American Legion fell ill after their annual meeting in Philadelphia in 1976, the efforts of epidemiologists from the Centers for Disease Control to explain the spread of this mysterious disease and its newly discovered bacterial agent, Legionella, occupied front-page headlines. In the years since, however, as the 1976 incident faded from memory, Legionella infections have become everyday objects of medical care, even though incidence in the U.S. has grown ninefold since 2000, tracing a line of exponential growth that looks a lot like COVID-19’s on a longer time scale. Yet few among us pause in our daily lives to consider whether we are living through the slowly ascending limb of a Legionella epidemic.
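As a rough sense of scale for that comparison: a ninefold rise translates into a surprisingly quiet annual growth rate. The 20-year span in this sketch is an assumption for illustration.

```python
# Implied compound annual growth rate behind "grown ninefold since 2000",
# assuming the rise unfolded over roughly two decades.
growth_factor = 9.0
years = 20

annual_rate = growth_factor ** (1 / years) - 1
print(f"~{annual_rate:.1%} per year")  # ~11.6% per year
```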
Nor do most people living in the United States stop to consider the ravages of tuberculosis as a pandemic, even though an estimated 10 million new cases of tuberculosis were reported around the globe in 2018, and an estimated 1.5 million people died from the disease. The disease seems to receive attention only in relation to newer scourges: in the late twentieth century TB coinfection became a leading cause of death in the emerging HIV/AIDS pandemic, while in the past few months TB coinfection has been invoked as a rising cause of mortality in the COVID-19 pandemic. Amidst these stories it is easy to miss that on its own, tuberculosis has been and continues to be the leading cause of death worldwide from a single infectious agent. And even though tuberculosis is not an active concern of middle-class Americans, it is still not a thing of the past even in this country. More than 9,000 cases of tuberculosis were reported in the United States in 2018—overwhelmingly affecting racial and ethnic minority populations—but they rarely made the news.
There will be no simple return to the way things were: whatever normal we build will be a new one—whether many of us realize it or not.
While tuberculosis is the target of concerted international disease control efforts, and occasionally eradication efforts, the time course of this affliction has been spread out so long—and so clearly demarcated in space as a problem of “other places”—that it is no longer part of the epidemic imagination of the Global North. And yet history tells a very different story. DNA lineage studies of tuberculosis now show that the spread of tuberculosis in sub-Saharan Africa and Latin America was initiated by European contact and conquest from the fifteenth century through the nineteenth. In the early decades of the twentieth century, tuberculosis epidemics accelerated throughout sub-Saharan Africa, South Asia, and Southeast Asia due to the rapid urbanization and industrialization of European colonies. Although the wave of decolonizations that swept these regions between the 1940s and the 1980s established autonomy and sovereignty for newly post-colonial nations, this movement did not send tuberculosis back to Europe.
These features of the social lives of epidemics—how they live on even when they seem, to some, to have disappeared—show them to be not just natural phenomena but also narrative ones: deeply shaped by the stories we tell about their beginnings, their middles, their ends. At their best, epidemic endings are a form of relief for the mainstream “we” that can pick up the pieces and reconstitute a normal life. At their worst, epidemic endings are a form of collective amnesia, transmuting the disease that remains into merely someone else’s problem.
What are we to conclude from these complex interactions between the social and the biological faces of epidemics, past and present? Like infectious agents on an agar plate, epidemics colonize our social lives and force us to learn to live with them, in some way or another, for the foreseeable future. Just as the postcolonial period continued to be shaped by structures established under colonial rule, so too are our post-pandemic futures indelibly shaped by what we do now. There will be no simple return to the way things were: whatever normal we build will be a new one—whether many of us realize it or not. Like the world of scientific facts after the end of a critical experiment, the world that we find after the end of an epidemic crisis—whatever we take that to be—looks in many ways like the world that came before, but with new social truths established. How exactly these norms come into being depends a great deal on particular circumstances: current interactions among people, the instruments of social policy as well as medical and public health intervention with which we apply our efforts, and the underlying response of the material against which we apply that apparatus (in this case, the coronavirus strain SARS-CoV-2). While we cannot know now how the present epidemic will end, we can be confident that in its wake it will leave different conceptions of normal in realms biological and social, national and international, economic and political.
Though we like to think of science as universal and objective, crossing borders and transcending differences, it is in fact deeply contingent upon local practices—including norms that are easily thrown over in an emergency, and established conventions that do not always hold up in situations of urgency. Today we see civic leaders jumping the gun in speaking of access to treatments, antibody screens, and vaccines well in advance of any scientific evidence, while relatively straightforward attempts to estimate the true number of people affected by the disease spark firestorms over the credibility of medical knowledge. Arduous work is often required to achieve scientific consensus, and when the stakes are high—especially when huge numbers of lives are at risk—heterogeneous data give way to highly variable interpretations. As data moves too quickly in some domains and too slowly in others, and sped-up time pressures are placed on all investigations, the projected curve of the epidemic is transformed into an elaborate guessing game, in which different states rely on different kinds of scientific claims to sketch out wildly different timetables for ending social restrictions.
The falling action of an epidemic is perhaps best thought of as asymptotic: never disappearing, but rather fading to the point where signal is lost in the noise of the new normal—and even allowed to be forgotten.
These varied endings of the epidemic across local and national settings will only be valid insofar as they are acknowledged as such by others—especially if any reopening of trade and travel is to be achieved. In this sense, the process of establishing a new normal in global commerce will continue to be bound up in practices of international consensus. What the new normal in global health governance will look like, however, is more uncertain than ever. Long accustomed to the role of international scapegoat, the WHO Secretariat seems doomed to be accused either of working beyond its mandate or of not acting fast enough, as the secessionist posturing of Donald Trump demonstrates. Yet the U.S. president’s recent withdrawal from this international body is neither unprecedented nor insurmountable. Although Trump’s voting base might not wish to be grouped together with the only other global power to secede from the WHO, the Soviet Union departed from the group in 1949, and the organization ultimately brought the entire Eastern Bloc back to the task of international health leadership in 1956. Much as the return of the Soviets to the WHO resulted in the global eradication of smallpox—the only human disease so far to have been intentionally eradicated—it is possible that some future return of the United States to the project of global health governance might also result in a more hopeful post-pandemic future.
As the historians at the University of Oslo have recently noted, in epidemic periods “the present moves faster, the past seems further removed, and the future seems completely unpredictable.” How, then, are we to know when epidemics end? How does the act of looking back aid us in determining a way forward? Historians make poor futurologists, but we spend a lot of time thinking about time. And epidemics produce their own kinds of time, in both biological and social domains, disrupting our individual senses of passing days as well as conventions for collective behavior. They carry within them their own tempos and rhythms: the slow initial growth, the explosive upward limb of the outbreak, the slowing of transmission that marks the peak, plateau, and the downward limb. This falling action is perhaps best thought of as asymptotic: rarely disappearing, but rather fading to the point where signal is lost in the noise of the new normal—and even allowed to be forgotten.
This storm will pass. But the choices we make now could change our lives for years to come.
Yuval Noah Harari – March 20, 2020
Humankind is now facing a global crisis. Perhaps the biggest crisis of our generation. The decisions people and governments take in the next few weeks will probably shape the world for years to come. They will shape not just our healthcare systems but also our economy, politics and culture. We must act quickly and decisively. We should also take into account the long-term consequences of our actions. When choosing between alternatives, we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes. Yes, the storm will pass, humankind will survive, most of us will still be alive — but we will inhabit a different world.
Many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes. Decisions that in normal times could take years of deliberation are passed in a matter of hours. Immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger. Entire countries serve as guinea-pigs in large-scale social experiments. What happens when everybody works from home and communicates only at a distance? What happens when entire schools and universities go online? In normal times, governments, businesses and educational boards would never agree to conduct such experiments. But these aren’t normal times.
In this time of crisis, we face two particularly important choices. The first is between totalitarian surveillance and citizen empowerment. The second is between nationalist isolation and global solidarity.
Under-the-skin surveillance
In order to stop the epidemic, entire populations need to comply with certain guidelines. There are two main ways of achieving this. One method is for the government to monitor people, and punish those who break the rules. Today, for the first time in human history, technology makes it possible to monitor everyone all the time. Fifty years ago, the KGB couldn’t follow 240m Soviet citizens 24 hours a day, nor could the KGB hope to effectively process all the information gathered. The KGB relied on human agents and analysts, and it just couldn’t place a human agent to follow every citizen. But now governments can rely on ubiquitous sensors and powerful algorithms instead of flesh-and-blood spooks.
In their battle against the coronavirus epidemic several governments have already deployed the new surveillance tools. The most notable case is China. By closely monitoring people’s smartphones, making use of hundreds of millions of face-recognising cameras, and obliging people to check and report their body temperature and medical condition, the Chinese authorities can not only quickly identify suspected coronavirus carriers, but also track their movements and identify anyone they came into contact with. A range of mobile apps warn citizens about their proximity to infected patients.
The images accompanying this article are taken from webcams overlooking the deserted streets of Italy, found and manipulated by Graziano Panfili, a photographer living under lockdown.
This kind of technology is not limited to east Asia. Prime Minister Benjamin Netanyahu of Israel recently authorised the Israel Security Agency to deploy surveillance technology normally reserved for battling terrorists to track coronavirus patients. When the relevant parliamentary subcommittee refused to authorise the measure, Netanyahu rammed it through with an “emergency decree”.
You might argue that there is nothing new about all this. In recent years both governments and corporations have been using ever more sophisticated technologies to track, monitor and manipulate people. Yet if we are not careful, the epidemic might nevertheless mark an important watershed in the history of surveillance. Not only because it might normalise the deployment of mass surveillance tools in countries that have so far rejected them, but even more so because it signifies a dramatic transition from “over the skin” to “under the skin” surveillance.
Hitherto, when your finger touched the screen of your smartphone and clicked on a link, the government wanted to know what exactly your finger was clicking on. But with coronavirus, the focus of interest shifts. Now the government wants to know the temperature of your finger and the blood pressure under its skin.
The emergency pudding
One of the problems we face in working out where we stand on surveillance is that none of us know exactly how we are being surveilled, and what the coming years might bring. Surveillance technology is developing at breakneck speed, and what seemed like science fiction 10 years ago is old news today. As a thought experiment, consider a hypothetical government that demands that every citizen wear a biometric bracelet that monitors body temperature and heart-rate 24 hours a day. The resulting data is hoarded and analysed by government algorithms. The algorithms will know that you are sick even before you know it, and they will also know where you have been, and who you have met. The chains of infection could be drastically shortened, and even cut altogether. Such a system could arguably stop the epidemic in its tracks within days. Sounds wonderful, right?
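To make the mechanics of that thought experiment concrete, here is a deliberately toy sketch in Python. Every name, threshold, and data shape in it is hypothetical, invented for illustration rather than drawn from any real system; it simply shows the two steps the scenario imagines, an algorithm flagging wearers whose vitals look feverish, followed by a one-hop trace of their recent contacts.

```python
# Toy sketch of the pipeline in the thought experiment: biometric
# readings are screened by an algorithm, and the recent contacts of
# flagged wearers are traced. All names, thresholds, and data shapes
# here are hypothetical and purely illustrative.

from dataclasses import dataclass
from typing import Dict, Iterable, Set

FEVER_THRESHOLD_C = 38.0  # hypothetical cutoff; a real system would be far subtler


@dataclass
class Reading:
    citizen_id: str
    temperature_c: float
    heart_rate_bpm: int


def flag_suspected(readings: Iterable[Reading]) -> Set[str]:
    """Return the IDs whose vitals cross the (toy) fever threshold."""
    return {r.citizen_id for r in readings if r.temperature_c >= FEVER_THRESHOLD_C}


def trace_contacts(flagged: Set[str], contact_log: Dict[str, Set[str]]) -> Set[str]:
    """One hop of contact tracing: everyone who recently met a flagged wearer."""
    exposed: Set[str] = set()
    for citizen in flagged:
        exposed |= contact_log.get(citizen, set())
    return exposed - flagged


readings = [Reading("alice", 38.4, 96), Reading("bob", 36.7, 64)]
contact_log = {"alice": {"bob", "carol"}}

flagged = flag_suspected(readings)
print(flagged)                               # {'alice'}
print(trace_contacts(flagged, contact_log))  # {'bob', 'carol'}
```

Notice that nothing in these few lines dictates where the data flows: the same logic could run on a government server or entirely on the wearer's own device, which is exactly the choice the rest of the essay turns on.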
The downside is, of course, that this would give legitimacy to a terrifying new surveillance system. If you know, for example, that I clicked on a Fox News link rather than a CNN link, that can teach you something about my political views and perhaps even my personality. But if you can monitor what happens to my body temperature, blood pressure and heart-rate as I watch the video clip, you can learn what makes me laugh, what makes me cry, and what makes me really, really angry.
It is crucial to remember that anger, joy, boredom and love are biological phenomena just like fever and a cough. The same technology that identifies coughs could also identify laughs. If corporations and governments start harvesting our biometric data en masse, they can get to know us far better than we know ourselves, and they can then not just predict our feelings but also manipulate our feelings and sell us anything they want — be it a product or a politician. Biometric monitoring would make Cambridge Analytica’s data hacking tactics look like something from the Stone Age. Imagine North Korea in 2030, when every citizen has to wear a biometric bracelet 24 hours a day. If you listen to a speech by the Great Leader and the bracelet picks up the tell-tale signs of anger, you are done for.
You could, of course, make the case for biometric surveillance as a temporary measure taken during a state of emergency. It would go away once the emergency is over. But temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon. My home country of Israel, for example, declared a state of emergency during its 1948 War of Independence, which justified a range of temporary measures from press censorship and land confiscation to special regulations for making pudding (I kid you not). The War of Independence has long been won, but Israel never declared the emergency over, and has failed to abolish many of the “temporary” measures of 1948 (the emergency pudding decree was mercifully abolished in 2011).
Even when infections from coronavirus are down to zero, some data-hungry governments could argue that they need to keep the biometric surveillance systems in place because they fear a second wave of coronavirus, or because there is a new Ebola strain evolving in central Africa, or because . . . you get the idea. A big battle has been raging in recent years over our privacy. The coronavirus crisis could be the battle’s tipping point. For when people are given a choice between privacy and health, they will usually choose health.
The soap police
Asking people to choose between privacy and health is, in fact, the very root of the problem. Because this is a false choice. We can and should enjoy both privacy and health. We can choose to protect our health and stop the coronavirus epidemic not by instituting totalitarian surveillance regimes, but rather by empowering citizens. In recent weeks, some of the most successful efforts to contain the coronavirus epidemic were orchestrated by South Korea, Taiwan and Singapore. While these countries have made some use of tracking applications, they have relied far more on extensive testing, on honest reporting, and on the willing co-operation of a well-informed public.
Centralised monitoring and harsh punishments aren’t the only way to make people comply with beneficial guidelines. When people are told the scientific facts, and when people trust public authorities to tell them these facts, citizens can do the right thing even without a Big Brother watching over their shoulders. A self-motivated and well-informed population is usually far more powerful and effective than a policed, ignorant population.
Consider, for example, washing your hands with soap. This has been one of the greatest advances ever in human hygiene. This simple action saves millions of lives every year. While we take it for granted, it was only in the 19th century that scientists discovered the importance of washing hands with soap. Previously, even doctors and nurses proceeded from one surgical operation to the next without washing their hands. Today billions of people wash their hands daily, not because they are afraid of the soap police, but rather because they understand the facts. I wash my hands with soap because I have heard of viruses and bacteria, I understand that these tiny organisms cause diseases, and I know that soap can remove them.
But to achieve such a level of compliance and co-operation, you need trust. People need to trust science, to trust public authorities, and to trust the media. Over the past few years, irresponsible politicians have deliberately undermined trust in science, in public authorities and in the media. Now these same irresponsible politicians might be tempted to take the high road to authoritarianism, arguing that you just cannot trust the public to do the right thing.
Normally, trust that has been eroded for years cannot be rebuilt overnight. But these are not normal times. In a moment of crisis, minds too can change quickly. You can have bitter arguments with your siblings for years, but when some emergency occurs, you suddenly discover a hidden reservoir of trust and amity, and you rush to help one another. Instead of building a surveillance regime, it is not too late to rebuild people’s trust in science, in public authorities and in the media. We should definitely make use of new technologies too, but these technologies should empower citizens. I am all in favour of monitoring my body temperature and blood pressure, but that data should not be used to create an all-powerful government. Rather, that data should enable me to make more informed personal choices, and also to hold government accountable for its decisions.
If I could track my own medical condition 24 hours a day, I would learn not only whether I have become a health hazard to other people, but also which habits contribute to my health. And if I could access and analyse reliable statistics on the spread of coronavirus, I would be able to judge whether the government is telling me the truth and whether it is adopting the right policies to combat the epidemic. Whenever people talk about surveillance, remember that the same surveillance technology can usually be used not only by governments to monitor individuals, but also by individuals to monitor governments.
The coronavirus epidemic is thus a major test of citizenship. In the days ahead, each one of us should choose to trust scientific data and healthcare experts over unfounded conspiracy theories and self-serving politicians. If we fail to make the right choice, we might find ourselves signing away our most precious freedoms, thinking that this is the only way to safeguard our health.
We need a global plan
The second important choice we confront is between nationalist isolation and global solidarity. Both the epidemic itself and the resulting economic crisis are global problems. They can be solved effectively only by global co-operation.
First and foremost, in order to defeat the virus we need to share information globally. That’s the big advantage of humans over viruses. A coronavirus in China and a coronavirus in the US cannot swap tips about how to infect humans. But China can teach the US many valuable lessons about coronavirus and how to deal with it. What an Italian doctor discovers in Milan in the early morning might well save lives in Tehran by evening. When the UK government hesitates between several policies, it can get advice from the Koreans, who faced a similar dilemma a month earlier. But for this to happen, we need a spirit of global co-operation and trust.
Countries should be willing to share information openly and humbly seek advice, and should be able to trust the data and the insights they receive. We also need a global effort to produce and distribute medical equipment, most notably testing kits and respiratory machines. Instead of every country trying to do it locally and hoarding whatever equipment it can get, a co-ordinated global effort could greatly accelerate production and make sure life-saving equipment is distributed more fairly. Just as countries nationalise key industries during a war, the human war against coronavirus may require us to “humanise” the crucial production lines. A rich country with few coronavirus cases should be willing to send precious equipment to a poorer country with many cases, trusting that if and when it subsequently needs help, other countries will come to its assistance.
We might consider a similar global effort to pool medical personnel. Countries currently less affected could send medical staff to the worst-hit regions of the world, both in order to help them in their hour of need, and in order to gain valuable experience. If later on the focus of the epidemic shifts, help could start flowing in the opposite direction.
Global co-operation is vitally needed on the economic front too. Given the global nature of the economy and of supply chains, if each government does its own thing in complete disregard of the others, the result will be chaos and a deepening crisis. We need a global plan of action, and we need it fast.
Another requirement is reaching a global agreement on travel. Suspending all international travel for months will cause tremendous hardships, and hamper the war against coronavirus. Countries need to co-operate in order to allow at least a trickle of essential travellers to continue crossing borders: scientists, doctors, journalists, politicians, businesspeople. This can be done by reaching a global agreement on the pre-screening of travellers by their home country. If you know that only carefully screened travellers were allowed on a plane, you would be more willing to accept them into your country.
Unfortunately, at present countries hardly do any of these things. A collective paralysis has gripped the international community. There seem to be no adults in the room. One would have expected an emergency meeting of global leaders weeks ago to come up with a common plan of action. The G7 leaders managed to organise a videoconference only this week, and it did not result in any such plan.
In previous global crises — such as the 2008 financial crisis and the 2014 Ebola epidemic — the US assumed the role of global leader. But the current US administration has abdicated the job of leader. It has made it very clear that it cares about the greatness of America far more than about the future of humanity.
This administration has abandoned even its closest allies. When it banned all travel from the EU, it didn’t bother to give the EU so much as advance notice — let alone consult with the EU about that drastic measure. It has scandalised Germany by allegedly offering $1bn to a German pharmaceutical company to buy monopoly rights to a new Covid-19 vaccine. Even if the current administration eventually changes tack and comes up with a global plan of action, few would follow a leader who never takes responsibility, who never admits mistakes, and who routinely takes all the credit for himself while leaving all the blame to others.
If the void left by the US isn’t filled by other countries, not only will it be much harder to stop the current epidemic, but its legacy will continue to poison international relations for years to come. Yet every crisis is also an opportunity. We must hope that the current epidemic will help humankind realise the acute danger posed by global disunity.
Humanity needs to make a choice. Will we travel down the route of disunity, or will we adopt the path of global solidarity? If we choose disunity, this will not only prolong the crisis, but will probably result in even worse catastrophes in the future. If we choose global solidarity, it will be a victory not only against the coronavirus, but against all future epidemics and crises that might assail humankind in the 21st century.
Yuval Noah Harari is author of ‘Sapiens’, ‘Homo Deus’ and ‘21 Lessons for the 21st Century’
It should be no surprise that I’m obsessed with science fiction. Considering that I’m a graphic designer who works in cryptocurrency, it’s practically required that I pay homage to the neon-soaked aesthetics of Blade Runner 2049, have a secret crush on Ava from Ex Machina, and geek out over pretty much anything Neal Stephenson puts out.
However, with a once theoretical dystopia now apparently on our doorstep, we should be considering the trajectory of our civilization more than ever. Suddenly, the megacorps, oppressive regimes, and looming global crises don’t seem so distant anymore.
What were once just tropes in our favorite works of science fiction are now becoming realities that are impacting our daily lives.
And here we are, wrestling with the implications of our new reality while trapped in our living rooms staring into glowing rectangles straight out of Ready Player One.
Still from “The Music Scene” by Blockhead
Recent events surrounding COVID-19 have put us at a bit of a crossroads. We have an opportunity in front of us now to continue down this path, or to use this crisis as a wake-up call and pivot our future toward a world that is more equitable, safe, and empowering for all. We are the heroes of our own journey right now.
Our worldview and our idea of what is possible are largely shaped by the media we consume. You are what you eat, after all. And while the news might inform us, it’s our fiction that inspires us to imagine what is possible.
Science fiction has always asked the big questions, while simultaneously preparing us for what may be around the corner.
Where are we heading?
What problems might we create for ourselves?
And wait…weren’t we promised flying cars?
Through captivating characters, suspenseful plots, and philosophical musings woven throughout, fiction above all tells great stories and entertains. But it has another purpose: to inspire the next generation about what the human mind is capable of, and to shape our future for generations to come.
How many engineers got their start after seeing Star Wars? How many interface designers were inspired by Minority Report? Famously, Steve Jobs was inspired to create the iPad after first seeing a concept in 2001: A Space Odyssey.
The world needs this vision more than ever. And while I love the dystopian vibes of cyberpunk aesthetics as much as anyone, is there another world we can create that inspires us (and the next generation) to manifest a more sustainable, equitable, and free future for all?
I’ve recently come across a lesser-known genre of science fiction called “solarpunk.” Like cyberpunk, it is a genre of speculative fiction wrapped in a signature aesthetic that paints a vision of the future we could create. The following definition from this reference guide summarizes it well:
Solarpunk is a movement in speculative fiction, art, fashion and activism that seeks to answer and embody the question “what does a sustainable civilization look like, and how can we get there?” The aesthetics of solarpunk merge the practical with the beautiful, the well-designed with the green and wild, the bright and colorful with the earthy and solid. Solarpunk can be utopian, just optimistic, or concerned with the struggles en route to a better world — but never dystopian. As our world roils with calamity, we need solutions, not warnings. Solutions to live comfortably without fossil fuels, to equitably manage scarcity and share abundance, to be kinder to each other and to the planet we share. At once a vision of the future, a thoughtful provocation, and an achievable lifestyle.
Apart from the obvious aesthetic contrast, the key difference between solarpunk and cyberpunk is that emphasis on solutions, not warnings.
Solarpunk is not interested in exploring the ways things might go wrong. Rather, it assumes that the problems are already here and focuses most of its energy on solutions and a path forward. The warnings of cyberpunk tap into the fear of what might happen and use that fear as a premise for creating plot tension. Solarpunk encourages us to accept the reality of the present and move forward by focusing on solutions to the problems at hand.
There are also some clear differentiators on how society is structured and depicted in the two genres.
Cyberpunk:
Economy dominated by large corporations
Environment is usually wrecked, oppressive
Powerful technology has created wealth gap
Drugs used as escape from reality
Man merging with machine
Always raining
Solarpunk:
Decentralized symbiotic economic structures
Living in balance with environment
Technology empowers the individual
Drugs used to expand consciousness and augment reality
Man working alongside machine
Sunny with a chance of showers
A big difference here is how humanity chooses to harness the technology we create. Do we use it to evolve ourselves past our current biological form and catapult ourselves toward merging with machines, or do we show thoughtful restraint and use technology to bring us more in balance with our own biology and ecosystem?
This is the question for the ages, and yet I don’t think the answer has to be so black and white. In many ways, creating and using technology is the most natural thing we can do as a species. A beaver gathering sticks to build a dam is no different from a person using an ax to build a roof over their head. The clean lines of an iPhone seem to contrast with the squiggly lines of the raw materials it’s made of, but at the end of the day it’s all a byproduct of an exploding supernova.
“We are made of star stuff” — Carl Sagan
Technology does not need to be viewed as an alien phenomenon separating us from nature, but rather as an emergent phenomenon and inevitable byproduct of all natural systems.
Solarpunk ideas remind us that there is a path forward in which we can have our cake and eat it too. We can embrace the exponential rise of our understanding and control over the universe while using that knowledge to ensure that we do not destroy our environment, society and ourselves in the process.
Now I know what you might be thinking, because I am right there with you.
Is this too good to be true? Maybe.
Is reality likely to play out this peacefully? Unlikely.
Should that stop us from trying? No.
It’s called speculative fiction for a reason. It’s not productive to pretend that things will magically fall into place if we put out the right vibes into the universe. We need calculated progress, backing from the hard sciences, and an understanding that compromises and tradeoffs will always have to be made.
The goal of solarpunk is not to wish for a better future, but rather to propagate a series of values, approaches, and awarenesses into our collective psychology that allow us to continue pushing forward with our progress, without sacrificing our own humanity and connection to the natural world in that pursuit.
It is a well-known idea that our expectations for the future are guided largely by how we depict it. You don’t have to be stoned in a dorm room to think, “Dude… the future only looks like the future because that’s what we say the future looks like.”
And yet our visions aren’t always correct. We constantly overestimate what can be done in one year and underestimate what can be done in ten. Drawings from the Victorian era make it clear that our predictions of the future are often distorted by the preoccupations of our present moment.
Will our vision of tomorrow look this outdated in 10 years?
When we say something looks futuristic, we are largely comparing it with other artifacts of our present: concept art and this year’s latest blockbuster. That puts a lot of pressure on the creators shaping our fictional worlds, for they are the first to the front lines in a war of ideas competing to define what the future of our world could and should look like.
Most of our stories about the future are dystopian. I understand how important the backdrop of an oppressive regime can be in creating an antagonist you love to hate, or how an experiment gone wrong can set up a hero’s redemption and a captivating plot arc, but I still find myself yearning for a different take on what our future could look like. Are we so sure that our path leads to dystopia that we can’t even explore alternative options, even in our imaginations?
I’m not trying to tell people what they should or should not create. In fact, I believe that our freedom to do so is a liberty that should be fought for at all costs. What I am asking, however, is why we humans have a tendency to explore only the darkest visions of our future in the stories we tell ourselves. As fun as it is to dream up a techno-dystopian future, I’d bet that most of us would prefer not to live in a world that is oppressed, dangerous, and for some reason always raining.
I believe that, if we can manifest more visions of the future based not in what we are afraid of, but in what we are hopeful for, we’ll be surprised with what we accomplish and who we can inspire.