Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?
February 26, 2020
Professor of atmospheric science, University of California, Berkeley
Prediction for 2030: We’ll light up the world… safely
I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”
I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. The way we thought about it then still holds: what we’ve done since is add richness and higher resolution, but the projections are grounded in the same kind of data, physics, and observations.
Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: cloud data are still not fully utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we now have over half a million years of precipitation records from all over Asia.
I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.
Anne Lise Kjaer
Futurist, Kjaer Global, London
Prediction for 2030: Adults will learn to grasp new ideas
As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.
When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.
A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.
So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?
What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?
Coauthor of Superforecasting and professor, University of Pennsylvania
Prediction for 2030: We’ll get better at being uncertain
At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction—but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated one of the world’s best Go players. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.
Forecasts have the potential to be either self-fulfilling or self-negating—Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X conditional on our doing this or doing that?
What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.
Associate professor of economics, UCLA
Prediction for 2030: We’ll be more—and less—private
When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.
These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?
I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.
Science fiction and nonfiction author, San Francisco
Prediction for 2030: We’re going to see a lot more humble technology
Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.
Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.
There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.
It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.
I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.
We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.
Associate professor of computer science, Harvard
Prediction for 2030: Humans and machines will make decisions together
In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.
Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.
There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.
We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.
I’m excited about combining humans and AI to make predictions. Let’s say your AI is right only 70% of the time and your human is also right only 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
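A toy calculation makes the intuition concrete. Suppose both predictors make binary calls, each right 70% of the time, with errors that are independent of each other (both strong simplifying assumptions, used here purely for illustration): then on the cases where the two agree, they are right far more often than 70%.

```python
# Toy model: two independent binary predictors, each correct
# with probability 0.7. On a binary task, "both wrong" means
# they made the same wrong call, so they still agree.
p_ai, p_human = 0.7, 0.7

p_agree_right = p_ai * p_human              # both correct: 0.49
p_agree_wrong = (1 - p_ai) * (1 - p_human)  # both wrong:   0.09
p_agree = p_agree_right + p_agree_wrong     # they agree:   0.58

accuracy_when_agreed = p_agree_right / p_agree
print(f"Accuracy when both agree: {accuracy_when_agreed:.1%}")  # ~84.5%
```

The hard part, of course, is that real human and machine errors tend to be correlated, and the gain shrinks as that independence assumption breaks down.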
All these predictive models were built and deployed without enough thought about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.
Abdoulaye Banire Diallo
Professor, director of the bioinformatics lab, University of Quebec at Montreal
Prediction for 2030: Machine-based forecasting will be regulated
When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecasting—to help decision makers like the farmer have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.
With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.
Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.
Contrary to hopes for a tidy conclusion to the COVID-19 pandemic, history shows that outbreaks of infectious disease often have much murkier outcomes—including simply being forgotten about, or dismissed as someone else’s problem.
Recent history tells us a lot about how epidemics unfold, how outbreaks spread, and how they are controlled. We also know a good deal about beginnings—those first cases of pneumonia in Guangdong marking the SARS outbreak of 2002–3, the earliest instances of influenza in Veracruz leading to the H1N1 influenza pandemic of 2009–10, the outbreak of hemorrhagic fever in Guinea sparking the Ebola pandemic of 2014–16. But these stories of rising action and a dramatic denouement only get us so far in coming to terms with the global crisis of COVID-19. The coronavirus pandemic has blown past many efforts at containment, snapped the reins of case detection and surveillance across the world, and saturated all inhabited continents. To understand possible endings for this epidemic, we must look elsewhere than the neat pattern of beginning and end—and reconsider what we mean by the talk of “ending” epidemics to begin with.
The social lives of epidemics show them to be not just natural phenomena but also narrative ones: deeply shaped by the stories we tell about their beginnings, their middles, their ends.
Historians have long been fascinated by epidemics in part because, even where they differ in details, they exhibit a typical pattern of social choreography recognizable across vast reaches of time and space. Even though the biological agents of the sixth-century Plague of Justinian, the fourteenth-century Black Death, and the early twentieth-century Manchurian Plague were almost certainly not identical, the epidemics themselves share common features that link historical actors to present experience. “As a social phenomenon,” the historian Charles Rosenberg has argued, “an epidemic has a dramaturgic form. Epidemics start at a moment in time, proceed on a stage limited in space and duration, following a plot line of increasing and revelatory tension, move to a crisis of individual and collective character, then drift towards closure.” And yet not all diseases fit so neatly into this typological structure. Rosenberg wrote these words in 1992, nearly a decade into the North American HIV/AIDS epidemic. His words rang true about the origins of that disease—thanks in part to the relentless, overzealous pursuit of its “Patient Zero”—but not so much about its end, which was, as for COVID-19, nowhere in sight.
In the case of the new coronavirus, we have now seen an initial fixation on origins give way to the question of endings. In March The Atlantic offered four possible “timelines for life returning to normal,” all of which depended on the biological basis of a sufficient share of the population developing immunity (perhaps 60 to 80 percent) to curb further spread. This confident assertion derived from models of infectious outbreaks formalized by epidemiologists such as W. H. Frost a century earlier. If the population can be divided into those susceptible (S), infected (I), and resistant (R) to a disease, and a pathogen has a reproductive number R0 (pronounced R-naught) describing how many susceptible people can be infected by a single infected person, the end of the epidemic begins when the proportion of susceptible people drops below the reciprocal, 1/R0. When that happens, one person infects, on average, fewer than one other person with the disease.
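The threshold arithmetic behind that claim is simple enough to sketch directly (the R0 values below are illustrative assumptions under this simple model, not estimates for any particular pathogen):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Immune fraction at which spread begins to recede: once the
    susceptible fraction falls below 1/R0, the immune fraction
    exceeds 1 - 1/R0 (assumes homogeneous mixing)."""
    if r0 <= 1:
        return 0.0  # with R0 <= 1 the outbreak recedes on its own
    return 1 - 1 / r0

# The 60 to 80 percent range quoted above corresponds to an R0
# of roughly 2.5 to 5 under this simple model.
for r0 in (2.5, 3.0, 5.0):
    print(f"R0 = {r0}: {herd_immunity_threshold(r0):.0%} immune")
```

Real populations do not mix homogeneously, which is one reason the smooth curves of such models diverge from the jagged realities described below.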
These formulas reassure us, perhaps deceptively. They conjure up a set of natural laws that give order to the cadence of calamities. The curves produced by models, which in better times belonged to the arcana of epidemiologists, are now common figures in the lives of billions of people learning to live with contractions of civil society promoted in the name of “bending,” “flattening,” or “squashing” them. At the same time, as David Jones and Stefan Helmreich recently wrote in these pages, the smooth lines of these curves are far removed from jagged realities of the day-to-day experience of an epidemic—including the sharp spikes in those “reopening” states where modelers had predicted continued decline.
In other words, epidemics are not merely biological phenomena. They are inevitably framed and shaped by our social responses to them, from beginning to end (whatever that may mean in any particular case). The question now being asked of scientists, clinicians, mayors, governors, prime ministers, and presidents around the world is not merely “When will the biological phenomenon of this epidemic resolve?” but rather “When, if ever, will the disruption to our social life caused in the name of coronavirus come to an end?” As peak incidence nears, and in many places appears to have passed, elected officials and think tanks from opposite ends of the political spectrum provide “roadmaps” and “frameworks” for how an epidemic that has shut down economic, civic, and social life in a manner not seen globally in at least a century might eventually recede and allow resumption of a “new normal.”
These two faces of an epidemic, the biological and the social, are closely intertwined, but they are not the same. The biological epidemic can shut down daily life by sickening and killing people, but the social epidemic also shuts down daily life by overturning basic premises of sociality, economics, governance, discourse, interaction—and killing people in the process as well. There is a risk, as we know from both the Spanish influenza of 1918–19 and the more recent swine flu of 2009–10, of relaxing social responses before the biological threat has passed. But there is also a risk in misjudging a biological threat based on faulty models or bad data and in disrupting social life in such a way that the restrictions can never properly be taken back. We have seen in the case of coronavirus the two faces of the epidemic escalating on local, national, and global levels in tandem, but the biological epidemic and the social epidemic don’t necessarily recede on the same timeline.
For these sorts of reasons we must step back and reflect in detail on what we mean by ending in the first place. The history of epidemic endings has taken many forms, and only a handful of them have resulted in the elimination of a disease.
History reminds us that the interconnections between the timing of the biological and social epidemics are far from obvious. In some cases, like the yellow fever epidemics of the eighteenth century and the cholera epidemics of the nineteenth century, the dramatic symptomatology of the disease itself can make its timing easy to track. Like a bag of popcorn popping in the microwave, the tempo of visible case-events begins slowly, escalates to a frenetic peak, and then recedes, leaving a diminishing frequency of new cases that eventually are spaced far enough apart to be contained and then eliminated. In other examples, however, like the polio epidemics of the twentieth century, the disease process itself is hidden, often mild in presentation, threatens to come back, and ends not on a single day but over different timescales and in different ways for different people.
Campaigns against infectious diseases are often discussed in military terms, and one result of that metaphor is to suggest that epidemics too must have a singular endpoint. We approach the infection peak as if it were a decisive battle like Waterloo, or a diplomatic arrangement like the Armistice at Compiègne in November 1918. Yet the chronology of a single, decisive ending is not always true even for military history, of course. Just as the clear ending of a military war does not necessarily bring a close to the experience of war in everyday life, so too the resolution of the biological epidemic does not immediately undo the effects of the social epidemic. The social and economic effects of the 1918–19 pandemic, for example, were felt long after the end of the third and putatively final wave of the virus. While the immediate economic effect on many local businesses caused by shutdowns appears to have resolved in a matter of months, the broader economic effects of the epidemic on labor-wage relations were still visible in economic surveys in 1920, again in 1921, and in several areas as late as 1930.
And yet, like World War One with which its history was so closely intertwined, the influenza pandemic of 1918–19 appeared at first to have a singular ending. In individual cities the epidemic often produced dramatic spikes and falls in equally rapid tempo. In Philadelphia, as John Barry notes in The Great Influenza (2004), after an explosive and deadly rise in October 1918 that peaked at 4,597 deaths in a single week, cases suddenly dropped so precipitously that the public gathering ban could be lifted before the month was over, with almost no new cases in following weeks. It was a phenomenon whose destructive potential was limited by material laws: “the virus burned through available fuel, then it quickly faded away.”
As Barry reminds us, however, scholars have since learned to differentiate at least three different sequences of epidemics within the broader pandemic. The first wave blazed through military installations in the spring of 1918, the second wave caused the devastating mortality spikes in the summer and fall of 1918, and the third wave began in December 1918 and lingered long through the summer of 1919. Some cities, like San Francisco, passed through the first and second waves relatively unscathed only to be devastated by the third wave. Nor was it clear to those still alive in 1919 that the pandemic was over after the third wave receded. Even as late as 1922, a bad flu season in Washington State merited a response from public health officials to enforce absolute quarantine as they had during 1918–19. It is difficult, looking back, to say exactly when this prototypical pandemic of the twentieth century was really over.
Who can tell when a pandemic has ended? Today, strictly speaking, only the World Health Organization (WHO). The Emergency Committee of the WHO is responsible for the global governance of health and international coordination of epidemic response. After the SARS coronavirus pandemic of 2002–3, this body was granted sole power to declare the beginnings and endings of Public Health Emergencies of International Concern (PHEIC). While SARS morbidity and mortality—roughly 8,000 cases and 800 deaths in 26 countries—have been dwarfed by the sheer scale of COVID-19, that pandemic’s effect on national and global economies prompted revisions to the International Health Regulations in 2005, a body of international law that had remained unchanged since 1969. This revision broadened the scope of coordinated global response from a handful of diseases to any public health event that the WHO deemed to be of international concern, and shifted from a reactive framework to a proactive one based on real-time surveillance, with detection and containment at the source rather than action only at international borders.
This social infrastructure has important consequences, not all of them necessarily positive. Any time the WHO declares a public health event of international concern—and frequently when it chooses not to declare one—the event becomes a matter of front-page news. Since the 2005 revision, the group has been criticized both for declaring a PHEIC too hastily (as in the case of H1N1) and for declaring one too late (as in the case of Ebola). The WHO’s decision to declare the end of a PHEIC, by contrast, is rarely subject to the same public scrutiny. When an outbreak is no longer classified as an “extraordinary event” and is no longer seen to pose a risk of international spread, the PHEIC is considered no longer justified, leading to a withdrawal of international coordination. Once countries can grapple with the disease within their own borders, under their own national frameworks, the PHEIC is quietly de-escalated.
At their worst, epidemic endings are a form of collective amnesia, transmuting the disease that remains into merely someone else’s problem.
As the response to the 2014–16 Ebola outbreak in West Africa demonstrates, however, the act of declaring the end of a pandemic can be just as powerful as the act of declaring its beginning—in part because emergency situations can continue even after a return to “normal” has been declared. When WHO Director General Margaret Chan announced in March 2016 that the Ebola outbreak was no longer a public health event of international concern, international donors withdrew funds and care to the West African countries devastated by the outbreak, even as these struggling health systems continued to be stretched beyond their means by the needs of Ebola survivors. NGOs and virologists expressed concern that efforts to fund Ebola vaccine development would likewise fade without a sense of global urgency pushing research forward.
Part of the reason that the role of the WHO in proclaiming and terminating the state of pandemic is subject to so much scrutiny is that it can be. The WHO is the only global health body that is accountable to all governments of the world; its parliamentary World Health Assembly contains health ministers from every nation. Its authority rests not so much on its battered budget as its access to epidemic intelligence and pool of select individuals, technical experts with vast experience in epidemic response. But even though internationally sourced scientific and public health authority is key to its role in pandemic crises, WHO guidance is ultimately carried out in very different ways and on very different time scales in different countries, provinces, states, counties, and cities. One state might begin to ease up restrictions to movement and industry just as another implements more and more stringent measures. If each country’s experience of “lockdown” has already been heterogeneous, the reconnection between them after the PHEIC is ended will likely show even more variance.
So many of our hopes for the termination of the present PHEIC now lie in the promise of a COVID-19 vaccine. Yet a closer look at one of the central vaccine success stories of the twentieth century shows that technological solutions rarely offer resolution to pandemics on their own. Contrary to our expectations, vaccines are not universal technologies. They are always deployed locally, with variable resources and commitments to scientific expertise. International variations in research, development, and dissemination of effective vaccines are especially relevant in the global fight against epidemic polio.
The development of the polio vaccine is relatively well known, usually told as a story of an American tragedy and triumph. Yet while the polio epidemics that swept the globe in the postwar decades did not respect national borders or the Iron Curtain, the Cold War provided context for both collaboration and antagonism. Only a few years after the licensing of Jonas Salk’s inactivated vaccine in the United States, his technique became widely used across the world, although its efficacy outside of the United States was questioned. The second, live oral vaccine developed by Albert Sabin, however, involved extensive collaboration with Eastern European and Soviet colleagues. As the success of the Soviet polio vaccine trials marked a rare landmark of Cold War cooperation, Basil O’Connor, president of the March of Dimes movement, proclaimed at the Fifth International Poliomyelitis Conference in 1960 that “in search for the truth that frees man from disease, there is no cold war.”
Yet the differential uptake of this vaccine retraced the divisions of Cold War geography. The Soviet Union, Hungary, and Czechoslovakia were the first countries in the world to begin nationwide immunization with the Sabin vaccine, soon followed by Cuba, the first country in the Western Hemisphere to eliminate the disease. By the time the Sabin vaccine was licensed in the United States in 1963, much of Eastern Europe had done away with epidemics and was largely polio-free. The successful ending of this epidemic within the communist world was immediately held up as proof of the superiority of their political system.
Western experts who trusted the Soviet vaccine trials, including the Yale virologist and WHO envoy Dorothy Horstmann, nonetheless emphasized that their results were possible because of the military-like organization of the Soviet health care system. Yet the enduring concern that authoritarianism itself is the key tool for ending epidemics—reflected in current debates over China’s heavy-handed interventions in Wuhan this year—can also be overstated. The Cold War East was united not only by authoritarianism and heavy hierarchies in state organization and society, but also by a powerful shared belief in the integration of the paternal state, biomedical research, and socialized medicine. Epidemic management in these countries combined an emphasis on prevention, easily mobilized health workers, top-down organization of vaccinations, and a rhetoric of solidarity, all resting on a health care system that aimed at access for all citizens.
Still, authoritarianism as a catalyst for controlling epidemics can be singled out and pursued with long-lasting consequences. Epidemics can be harbingers of significant political changes that go well beyond their ending, significantly reshaping a new “normal” after the threat passes. Many Hungarians, for example, have watched with alarm the complete sidelining of parliament and the introduction of government by decree at the end of March this year. The end of any epidemic crisis, and thus the end of the need for the significantly increased power of Viktor Orbán, would be determined by Orbán himself. Likewise, many other states, urging the mobilization of new technologies as a solution to end epidemics, are opening the door to heightened state surveillance of their citizens. The apps and trackers now being designed to follow the movement and exposure of people in order to enable the end of epidemic lockdowns can collect data and establish mechanisms that reach well beyond the original intent. The digital afterlives of these practices raise new and unprecedented questions about when and how epidemics end.
Like infectious agents on an agar plate, epidemics colonize our social lives and force us to learn to live with them, in some way or another, for the foreseeable future.
Although we want to believe that a single technological breakthrough will end the present crisis, the application of any global health technology is always locally determined. After its dramatic successes in managing polio epidemics in the late 1950s and early 1960s, the oral poliovirus vaccine became the tool of choice for the Global Polio Eradication Initiative in the late 1980s, as it promised an end to “summer fears” globally. But since vaccines are in part technologies of trust, ending polio outbreaks depends on maintaining confidence in national and international structures through which vaccines are delivered. Wherever that often fragile trust is fractured or undermined, vaccination rates can drop to a critical level, giving way to vaccine-derived polio, which thrives in partially vaccinated populations.
In Kano, Nigeria, for example, a ban on polio vaccination between 2000 and 2004 resulted in a new national polio epidemic that soon spread to neighboring countries. As late as December 2019 polio outbreaks were still reported in fifteen African countries, including Angola and the Democratic Republic of the Congo. Nor is it clear that polio can fully be regarded as an epidemic at this point: while polio epidemics are now a thing of the past for Hungary—and the rest of Europe, the Americas, Australia, and East Asia as well—the disease is still endemic to parts of Africa and South Asia. A disease once universally epidemic is now locally endemic: this, too, is another way that epidemics end.
Indeed, many epidemics have only “ended” through widespread acceptance of a newly endemic state. Consider the global threat of HIV/AIDS. From a strictly biological perspective, the AIDS epidemic has never ended; the virus continues to spread devastation through the world, infecting 1.7 million people and claiming an estimated 770,000 lives in 2018 alone. But HIV is not generally described these days with the same urgency and fear that accompanied the newly defined AIDS epidemic in the early 1980s. Like coronavirus today, AIDS at that time was a rapidly spreading and unknown emerging threat, splayed across newspaper headlines and magazine covers, claiming the lives of celebrities and ordinary citizens alike. Nearly forty years later it has largely become a chronic disease, endemic at least in the Global North. Like diabetes, which claimed an estimated 4.9 million lives in 2019, HIV/AIDS became a manageable condition—if one had access to the right medications.
Those who are no longer directly threatened by the impact of the disease have a hard time continuing to attend to the urgency of an epidemic that has been rolling on for nearly four decades. Even in the first decade of the AIDS epidemic, activists in the United States fought tooth and nail to make their suffering visible in the face of both the Reagan administration’s dogged refusal to talk publicly about the AIDS crisis and the indifference of the press after the initial sensation of the newly discovered virus had become common knowledge. In this respect, the social epidemic does not necessarily end when biological transmission has ended, or even peaked, but rather when, in the attention of the general public and in the judgment of certain media and political elites who shape that attention, the disease ceases to be newsworthy.
Though we like to think of science as universal and objective, crossing borders and transcending differences, it is in fact deeply contingent upon local practices.
Polio, for its part, has not been newsworthy for a while, even as thousands around the world still live with polio with ever-decreasing access to care and support. Soon after the immediate threat of outbreaks passed, so did support for those whose lives were still bound up with the disease. For others, it became simply a background fact of life—something that happens elsewhere. The polio problem was “solved,” specialized hospitals were closed, fundraising organizations found new causes, and poster children found themselves in an increasingly challenging world. Few medical professionals are trained today in the treatment of the disease. As intimate knowledge of polio and its treatment withered away with time, people living with polio became embodied repositories of lost knowledge.
History tells us public attention is much more easily drawn to new diseases as they emerge rather than sustained over the long haul. Well before AIDS shocked the world into recognizing the devastating potential of novel epidemic diseases, a series of earlier outbreaks had already signaled the presence of emerging infectious agents. When hundreds of members of the American Legion fell ill after their annual meeting in Philadelphia in 1976, the efforts of epidemiologists from the Centers for Disease Control to explain the spread of this mysterious disease and its newly discovered bacterial agent, Legionella, occupied front-page headlines. In the years since, however, as the 1976 incident faded from memory, Legionella infections have become everyday objects of medical care, even though incidence in the U.S. has grown ninefold since 2000, tracing a line of exponential growth that looks a lot like COVID-19’s on a longer time scale. Yet few among us pause in our daily lives to consider whether we are living through the slowly ascending limb of a Legionella epidemic.
Nor do most people living in the United States stop to consider the ravages of tuberculosis as a pandemic, even though an estimated 10 million new cases of tuberculosis were reported around the globe in 2018, and an estimated 1.5 million people died from the disease. The disease seems to receive attention only in relation to newer scourges: in the late twentieth century TB coinfection became a leading cause of death in the emerging HIV/AIDS pandemic, while in the past few months TB coinfection has been invoked as a rising cause of mortality in the COVID-19 pandemic. Amidst these stories it is easy to miss that on its own, tuberculosis has been and continues to be the leading cause of death worldwide from a single infectious agent. And even though tuberculosis is not an active concern of middle-class Americans, it is still not a thing of the past even in this country. More than 9,000 cases of tuberculosis were reported in the United States in 2018—overwhelmingly affecting racial and ethnic minority populations—but they rarely made the news.
There will be no simple return to the way things were: whatever normal we build will be a new one—whether many of us realize it or not.
While tuberculosis is the target of concerted international disease control efforts, and occasionally eradication efforts, the time course of this affliction has been spread out so long—and so clearly demarcated in space as a problem of “other places”—that it is no longer part of the epidemic imagination of the Global North. And yet history tells a very different story. DNA lineage studies of tuberculosis now show that the spread of tuberculosis in sub-Saharan Africa and Latin America was initiated by European contact and conquest from the fifteenth century through the nineteenth. In the early decades of the twentieth century, tuberculosis epidemics accelerated throughout sub-Saharan Africa, South Asia, and Southeast Asia due to the rapid urbanization and industrialization of European colonies. Although the wave of decolonizations that swept these regions between the 1940s and the 1980s established autonomy and sovereignty for newly post-colonial nations, this movement did not send tuberculosis back to Europe.
These features of the social lives of epidemics—how they live on even when they seem, to some, to have disappeared—show them to be not just natural phenomena but also narrative ones: deeply shaped by the stories we tell about their beginnings, their middles, their ends. At their best, epidemic endings are a form of relief for the mainstream “we” that can pick up the pieces and reconstitute a normal life. At their worst, epidemic endings are a form of collective amnesia, transmuting the disease that remains into merely someone else’s problem.
What are we to conclude from these complex interactions between the social and the biological faces of epidemics, past and present? Like infectious agents on an agar plate, epidemics colonize our social lives and force us to learn to live with them, in some way or another, for the foreseeable future. Just as the postcolonial period continued to be shaped by structures established under colonial rule, so too are our post-pandemic futures indelibly shaped by what we do now. There will be no simple return to the way things were: whatever normal we build will be a new one—whether many of us realize it or not. Like the world of scientific facts after the end of a critical experiment, the world that we find after the end of an epidemic crisis—whatever we take that to be—looks in many ways like the world that came before, but with new social truths established. How exactly these norms come into being depends a great deal on particular circumstances: current interactions among people, the instruments of social policy as well as medical and public health intervention with which we apply our efforts, and the underlying response of the material against which we apply that apparatus (in this case, the coronavirus strain SARS-CoV-2). While we cannot know now how the present epidemic will end, we can be confident that in its wake it will leave different conceptions of normal in realms biological and social, national and international, economic and political.
Though we like to think of science as universal and objective, crossing borders and transcending differences, it is in fact deeply contingent upon local practices—including norms that are easily thrown over in an emergency, and established conventions that do not always hold up in situations of urgency. Today we see civic leaders jumping the gun in speaking of access to treatments, antibody screens, and vaccines well in advance of any scientific evidence, while relatively straightforward attempts to estimate the true number of people affected by the disease spark firestorms over the credibility of medical knowledge. Arduous work is often required to achieve scientific consensus, and when the stakes are high—especially when huge numbers of lives are at risk—heterogeneous data give way to highly variable interpretations. As data move too quickly in some domains and too slowly in others, and sped-up time pressures are placed on all investigations, the projected curve of the epidemic is transformed into an elaborate guessing game, in which different states rely on different kinds of scientific claims to sketch out wildly different timetables for ending social restrictions.
The falling action of an epidemic is perhaps best thought of as asymptotic: never disappearing, but rather fading to the point where signal is lost in the noise of the new normal—and even allowed to be forgotten.
These varied endings of the epidemic across local and national settings will only be valid insofar as they are acknowledged as such by others—especially if any reopening of trade and travel is to be achieved. In this sense, the process of establishing a new normal in global commerce will continue to be bound up in practices of international consensus. What the new normal in global health governance will look like, however, is more uncertain than ever. Long accustomed to the role of international scapegoat, the WHO Secretariat seems doomed to be accused either of working beyond its mandate or of not acting fast enough. It can also become a target of secessionist posturing, as Donald Trump has demonstrated. Yet the U.S. president’s recent withdrawal from this international body is neither unprecedented nor insurmountable. Although Trump’s voting base might not wish to be grouped together with the only other global power to secede from the WHO, the Soviet Union’s 1949 departure from the organization ultimately ended with the return of the entire Eastern Bloc to the task of international health leadership in 1956. Much as the return of the Soviets to the WHO resulted in the global eradication of smallpox—the only human disease so far to have been intentionally eradicated—it is possible that some future return of the United States to the project of global health governance might also result in a more hopeful post-pandemic future.
As the historians at the University of Oslo have recently noted, in epidemic periods “the present moves faster, the past seems further removed, and the future seems completely unpredictable.” How, then, are we to know when epidemics end? How does the act of looking back aid us in determining a way forward? Historians make poor futurologists, but we spend a lot of time thinking about time. And epidemics produce their own kinds of time, in both biological and social domains, disrupting our individual senses of passing days as well as conventions for collective behavior. They carry within them their own tempos and rhythms: the slow initial growth, the explosive upward limb of the outbreak, the slowing of transmission that marks the peak, plateau, and the downward limb. This falling action is perhaps best thought of as asymptotic: rarely disappearing, but rather fading to the point where signal is lost in the noise of the new normal—and even allowed to be forgotten.
This storm will pass. But the choices we make now could change our lives for years to come.
Yuval Noah Harari – March 20, 2020
Humankind is now facing a global crisis. Perhaps the biggest crisis of our generation. The decisions people and governments take in the next few weeks will probably shape the world for years to come. They will shape not just our healthcare systems but also our economy, politics and culture. We must act quickly and decisively. We should also take into account the long-term consequences of our actions. When choosing between alternatives, we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes. Yes, the storm will pass, humankind will survive, most of us will still be alive — but we will inhabit a different world.
Many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes. Decisions that in normal times could take years of deliberation are passed in a matter of hours. Immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger. Entire countries serve as guinea-pigs in large-scale social experiments. What happens when everybody works from home and communicates only at a distance? What happens when entire schools and universities go online? In normal times, governments, businesses and educational boards would never agree to conduct such experiments. But these aren’t normal times.
In this time of crisis, we face two particularly important choices. The first is between totalitarian surveillance and citizen empowerment. The second is between nationalist isolation and global solidarity.
In order to stop the epidemic, entire populations need to comply with certain guidelines. There are two main ways of achieving this. One method is for the government to monitor people, and punish those who break the rules. Today, for the first time in human history, technology makes it possible to monitor everyone all the time. Fifty years ago, the KGB couldn’t follow 240m Soviet citizens 24 hours a day, nor could the KGB hope to effectively process all the information gathered. The KGB relied on human agents and analysts, and it just couldn’t place a human agent to follow every citizen. But now governments can rely on ubiquitous sensors and powerful algorithms instead of flesh-and-blood spooks.
In their battle against the coronavirus epidemic several governments have already deployed the new surveillance tools. The most notable case is China. By closely monitoring people’s smartphones, making use of hundreds of millions of face-recognising cameras, and obliging people to check and report their body temperature and medical condition, the Chinese authorities can not only quickly identify suspected coronavirus carriers, but also track their movements and identify anyone they came into contact with. A range of mobile apps warn citizens about their proximity to infected patients.
About the photography
The images accompanying this article are taken from webcams overlooking the deserted streets of Italy, found and manipulated by Graziano Panfili, a photographer living under lockdown
This kind of technology is not limited to east Asia. Prime Minister Benjamin Netanyahu of Israel recently authorised the Israel Security Agency to deploy surveillance technology normally reserved for battling terrorists to track coronavirus patients. When the relevant parliamentary subcommittee refused to authorise the measure, Netanyahu rammed it through with an “emergency decree”.
You might argue that there is nothing new about all this. In recent years both governments and corporations have been using ever more sophisticated technologies to track, monitor and manipulate people. Yet if we are not careful, the epidemic might nevertheless mark an important watershed in the history of surveillance. Not only because it might normalise the deployment of mass surveillance tools in countries that have so far rejected them, but even more so because it signifies a dramatic transition from “over the skin” to “under the skin” surveillance.
Hitherto, when your finger touched the screen of your smartphone and clicked on a link, the government wanted to know what exactly your finger was clicking on. But with coronavirus, the focus of interest shifts. Now the government wants to know the temperature of your finger and the blood-pressure under its skin.
The emergency pudding
One of the problems we face in working out where we stand on surveillance is that none of us know exactly how we are being surveilled, and what the coming years might bring. Surveillance technology is developing at breakneck speed, and what seemed science-fiction 10 years ago is today old news. As a thought experiment, consider a hypothetical government that demands that every citizen wears a biometric bracelet that monitors body temperature and heart-rate 24 hours a day. The resulting data is hoarded and analysed by government algorithms. The algorithms will know that you are sick even before you know it, and they will also know where you have been, and who you have met. The chains of infection could be drastically shortened, and even cut altogether. Such a system could arguably stop the epidemic in its tracks within days. Sounds wonderful, right?
The downside is, of course, that this would give legitimacy to a terrifying new surveillance system. If you know, for example, that I clicked on a Fox News link rather than a CNN link, that can teach you something about my political views and perhaps even my personality. But if you can monitor what happens to my body temperature, blood pressure and heart-rate as I watch the video clip, you can learn what makes me laugh, what makes me cry, and what makes me really, really angry.
It is crucial to remember that anger, joy, boredom and love are biological phenomena just like fever and a cough. The same technology that identifies coughs could also identify laughs. If corporations and governments start harvesting our biometric data en masse, they can get to know us far better than we know ourselves, and they can then not just predict our feelings but also manipulate our feelings and sell us anything they want — be it a product or a politician. Biometric monitoring would make Cambridge Analytica’s data hacking tactics look like something from the Stone Age. Imagine North Korea in 2030, when every citizen has to wear a biometric bracelet 24 hours a day. If you listen to a speech by the Great Leader and the bracelet picks up the tell-tale signs of anger, you are done for.
You could, of course, make the case for biometric surveillance as a temporary measure taken during a state of emergency. It would go away once the emergency is over. But temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon. My home country of Israel, for example, declared a state of emergency during its 1948 War of Independence, which justified a range of temporary measures from press censorship and land confiscation to special regulations for making pudding (I kid you not). The War of Independence has long been won, but Israel never declared the emergency over, and has failed to abolish many of the “temporary” measures of 1948 (the emergency pudding decree was mercifully abolished in 2011).
Even when infections from coronavirus are down to zero, some data-hungry governments could argue they needed to keep the biometric surveillance systems in place because they fear a second wave of coronavirus, or because there is a new Ebola strain evolving in central Africa, or because . . . you get the idea. A big battle has been raging in recent years over our privacy. The coronavirus crisis could be the battle’s tipping point. For when people are given a choice between privacy and health, they will usually choose health.
The soap police
Asking people to choose between privacy and health is, in fact, the very root of the problem. Because this is a false choice. We can and should enjoy both privacy and health. We can choose to protect our health and stop the coronavirus epidemic not by instituting totalitarian surveillance regimes, but rather by empowering citizens. In recent weeks, some of the most successful efforts to contain the coronavirus epidemic were orchestrated by South Korea, Taiwan and Singapore. While these countries have made some use of tracking applications, they have relied far more on extensive testing, on honest reporting, and on the willing co-operation of a well-informed public.
Centralised monitoring and harsh punishments aren’t the only way to make people comply with beneficial guidelines. When people are told the scientific facts, and when people trust public authorities to tell them these facts, citizens can do the right thing even without a Big Brother watching over their shoulders. A self-motivated and well-informed population is usually far more powerful and effective than a policed, ignorant population.
Consider, for example, washing your hands with soap. This has been one of the greatest advances ever in human hygiene. This simple action saves millions of lives every year. While we take it for granted, it was only in the 19th century that scientists discovered the importance of washing hands with soap. Previously, even doctors and nurses proceeded from one surgical operation to the next without washing their hands. Today billions of people daily wash their hands, not because they are afraid of the soap police, but rather because they understand the facts. I wash my hands with soap because I have heard of viruses and bacteria, I understand that these tiny organisms cause diseases, and I know that soap can remove them.
But to achieve such a level of compliance and co-operation, you need trust. People need to trust science, to trust public authorities, and to trust the media. Over the past few years, irresponsible politicians have deliberately undermined trust in science, in public authorities and in the media. Now these same irresponsible politicians might be tempted to take the high road to authoritarianism, arguing that you just cannot trust the public to do the right thing.
Normally, trust that has been eroded for years cannot be rebuilt overnight. But these are not normal times. In a moment of crisis, minds too can change quickly. You can have bitter arguments with your siblings for years, but when some emergency occurs, you suddenly discover a hidden reservoir of trust and amity, and you rush to help one another. Instead of building a surveillance regime, it is not too late to rebuild people’s trust in science, in public authorities and in the media. We should definitely make use of new technologies too, but these technologies should empower citizens. I am all in favour of monitoring my body temperature and blood pressure, but that data should not be used to create an all-powerful government. Rather, that data should enable me to make more informed personal choices, and also to hold government accountable for its decisions.
If I could track my own medical condition 24 hours a day, I would learn not only whether I have become a health hazard to other people, but also which habits contribute to my health. And if I could access and analyse reliable statistics on the spread of coronavirus, I would be able to judge whether the government is telling me the truth and whether it is adopting the right policies to combat the epidemic. Whenever people talk about surveillance, remember that the same surveillance technology can usually be used not only by governments to monitor individuals — but also by individuals to monitor governments.
The coronavirus epidemic is thus a major test of citizenship. In the days ahead, each one of us should choose to trust scientific data and healthcare experts over unfounded conspiracy theories and self-serving politicians. If we fail to make the right choice, we might find ourselves signing away our most precious freedoms, thinking that this is the only way to safeguard our health.
We need a global plan
The second important choice we confront is between nationalist isolation and global solidarity. Both the epidemic itself and the resulting economic crisis are global problems. They can be solved effectively only by global co-operation.
First and foremost, in order to defeat the virus we need to share information globally. That’s the big advantage of humans over viruses. A coronavirus in China and a coronavirus in the US cannot swap tips about how to infect humans. But China can teach the US many valuable lessons about coronavirus and how to deal with it. What an Italian doctor discovers in Milan in the early morning might well save lives in Tehran by evening. When the UK government hesitates between several policies, it can get advice from the Koreans who have already faced a similar dilemma a month ago. But for this to happen, we need a spirit of global co-operation and trust.
Countries should be willing to share information openly and humbly seek advice, and should be able to trust the data and the insights they receive. We also need a global effort to produce and distribute medical equipment, most notably testing kits and respiratory machines. Instead of every country trying to do it locally and hoarding whatever equipment it can get, a co-ordinated global effort could greatly accelerate production and make sure life-saving equipment is distributed more fairly. Just as countries nationalise key industries during a war, the human war against coronavirus may require us to “humanise” the crucial production lines. A rich country with few coronavirus cases should be willing to send precious equipment to a poorer country with many cases, trusting that if and when it subsequently needs help, other countries will come to its assistance.
We might consider a similar global effort to pool medical personnel. Countries currently less affected could send medical staff to the worst-hit regions of the world, both in order to help them in their hour of need, and in order to gain valuable experience. If later on the focus of the epidemic shifts, help could start flowing in the opposite direction.
Global co-operation is vitally needed on the economic front too. Given the global nature of the economy and of supply chains, if each government does its own thing in complete disregard of the others, the result will be chaos and a deepening crisis. We need a global plan of action, and we need it fast.
Another requirement is reaching a global agreement on travel. Suspending all international travel for months will cause tremendous hardships, and hamper the war against coronavirus. Countries need to co-operate in order to allow at least a trickle of essential travellers to continue crossing borders: scientists, doctors, journalists, politicians, businesspeople. This can be done by reaching a global agreement on the pre-screening of travellers by their home country. If you know that only carefully screened travellers were allowed on a plane, you would be more willing to accept them into your country.
Unfortunately, at present countries hardly do any of these things. A collective paralysis has gripped the international community. There seem to be no adults in the room. One would have expected an emergency meeting of global leaders weeks ago to come up with a common plan of action. The G7 leaders managed to organise a videoconference only this week, and it did not result in any such plan.
In previous global crises — such as the 2008 financial crisis and the 2014 Ebola epidemic — the US assumed the role of global leader. But the current US administration has abdicated the job of leader. It has made it very clear that it cares about the greatness of America far more than about the future of humanity.
This administration has abandoned even its closest allies. When it banned all travel from the EU, it didn’t bother to give the EU so much as advance notice—let alone consult with the EU about that drastic measure. It has scandalised Germany by allegedly offering $1bn to a German pharmaceutical company to buy monopoly rights to a new Covid-19 vaccine. Even if the current administration eventually changes tack and comes up with a global plan of action, few would follow a leader who never takes responsibility, who never admits mistakes, and who routinely takes all the credit for himself while leaving all the blame to others.
If the void left by the US isn’t filled by other countries, not only will it be much harder to stop the current epidemic, but its legacy will continue to poison international relations for years to come. Yet every crisis is also an opportunity. We must hope that the current epidemic will help humankind realise the acute danger posed by global disunity.
Humanity needs to make a choice. Will we travel down the route of disunity, or will we adopt the path of global solidarity? If we choose disunity, this will not only prolong the crisis, but will probably result in even worse catastrophes in the future. If we choose global solidarity, it will be a victory not only against the coronavirus, but against all future epidemics and crises that might assail humankind in the 21st century.
Yuval Noah Harari is author of ‘Sapiens’, ‘Homo Deus’ and ‘21 Lessons for the 21st Century’
It should be no surprise that I’m obsessed with science fiction. Considering that I’m both a graphic designer and work in cryptocurrency, it’s practically required that I pay homage to the neon-soaked aesthetics of Blade Runner 2049, have a secret crush on Ava from Ex Machina, and geek out over pretty much anything Neal Stephenson puts out.
However, with a once theoretical dystopia now apparently on our doorstep, we should be considering the trajectory of our civilization now more than ever. Suddenly, the megacorps, oppressive regimes, and looming global crises don’t seem so distant anymore.
What were once just tropes in our favorite works of science fiction are now becoming realities that are impacting our daily lives.
And here we are, wrestling with the implications of our new reality while trapped in our living rooms staring into glowing rectangles straight out of Ready Player One.
Recent events surrounding COVID-19 have put us at a bit of a crossroads. We have an opportunity in front of us now to continue down this path, or to use this crisis as a wake-up call to pivot our future toward a world that is more equitable, safe, and empowering for all. We are the heroes of our own journey right now.
Our worldview and idea of what is possible are largely shaped by the media we consume. You are what you eat, after all. And while the news might inform us, it’s our fiction that inspires us to imagine what is possible.
Science fiction has always asked the big questions, while simultaneously preparing us for what may be around the corner.
Where are we heading?
What problems might we create for ourselves?
And wait…weren’t we promised flying cars?
Through captivating characters, suspenseful plots, and the philosophical musings woven through them, we use fiction above all else to tell great stories and entertain. But there is another purpose: to inspire the next generation with what the human mind is capable of, and to shape our future for generations to come.
How many engineers got their start after seeing Star Wars? How many interface designers were inspired by Minority Report? Famously, Steve Jobs was inspired to create the iPad after first seeing a concept in 2001: A Space Odyssey.
The world needs this vision more than ever. And while I love the dystopian vibes of cyberpunk aesthetics as much as anyone, is there another world we can create that inspires us (and the next generation) to manifest a more sustainable, equitable, and free future for all?
I’ve recently come across a lesser-known genre of science fiction called “solarpunk.” Like cyberpunk, it is a genre of speculative fiction wrapped in a signature aesthetic that paints a vision of the future we could create. The following definition from this reference guide summarizes it well:
Solarpunk is a movement in speculative fiction, art, fashion and activism that seeks to answer and embody the question “what does a sustainable civilization look like, and how can we get there?” The aesthetics of solarpunk merge the practical with the beautiful, the well-designed with the green and wild, the bright and colorful with the earthy and solid. Solarpunk can be utopian, just optimistic, or concerned with the struggles en route to a better world — but never dystopian. As our world roils with calamity, we need solutions, not warnings. Solutions to live comfortably without fossil fuels, to equitably manage scarcity and share abundance, to be kinder to each other and to the planet we share. At once a vision of the future, a thoughtful provocation, and an achievable lifestyle.
Apart from the clear aesthetic differences, a key difference here between solarpunk and cyberpunk is the emphasis on solutions, not warnings.
It appears that solarpunk is not interested in exploring potential paths that may go wrong. Rather, it assumes that the problems are already here and focuses most of its energy on solutions and a path forward. The warnings of cyberpunk tap into the fear of what might happen and use that fear as a premise for creating plot tension. Solarpunk encourages us to accept the reality of the present and move forward by focusing on solutions to the problems at hand.
There are also some clear differentiators on how society is structured and depicted in the two genres.
Cyberpunk:

- Economy dominated by large corporations
- Environment is usually wrecked, oppressive
- Powerful technology has created a wealth gap
- Drugs used as an escape from reality
- Man merging with machine

Solarpunk:

- Decentralized, symbiotic economic structures
- Living in balance with the environment
- Technology empowers the individual
- Drugs used to expand consciousness and augment reality
- Man working alongside machine
Sunny with a chance of showers
A big difference here is how humanity chooses to harness the technology we create. Do we use it to evolve ourselves past our current biological form and catapult us toward merging with machines or do we show thoughtful restraint and use technology to bring us more in balance with our own biology and ecosystem?
This is the question for the ages, and yet I don’t think the answer has to be so black and white. In many ways, creating and using technology is the most natural thing we can do as a species. A beaver gathering sticks to build a dam is no different from a person using an ax to build a roof over their head. The clean lines of an iPhone seem to contrast with the squiggly lines of the raw materials it’s made of, but at the end of the day it’s all a byproduct of an exploding supernova.
“We are made of star stuff” — Carl Sagan
Technology does not need to be viewed as an alien phenomenon separating us from nature, but rather as an emergent phenomenon and inevitable byproduct of all natural systems.
Solarpunk ideas remind us that there is a path forward in which we can have our cake and eat it too. We can embrace the exponential rise of our understanding and control over the universe while using that knowledge to ensure that we do not destroy our environment, society and ourselves in the process.
Now I know what you might be thinking, because I am right there with you.
Is this too good to be true? Maybe.
Is reality likely to play out this peacefully? Unlikely.
Should that stop us from trying? No.
It’s called speculative fiction for a reason. It’s not productive to pretend that things will magically fall into place if we put out the right vibes into the universe. We need calculated progress, backing from the hard sciences, and an understanding that compromises and tradeoffs will always have to be made.
The goal of solarpunk is not to wish for a better future, but rather to propagate a series of values, approaches, and awarenesses into our collective psychology that allow us to continue pushing forward with our progress, without sacrificing our own humanity and connection to the natural world in that pursuit.
It is well known that our expectations for the future are guided largely by our predictions of what it will look like. You don’t have to be stoned in a dorm room to think, “Dude… the future only looks like the future because that’s what we say the future looks like.”
And yet our visions aren’t always correct. We constantly overestimate what can be done in one year and underestimate what can be done in 10 years. It is clear in drawings from the Victorian era that our predictions for the future are often misguided by our present moment.
When we say something looks futuristic, we are largely comparing that to other artifacts of our present, concept art, and this year’s latest blockbuster. It therefore puts a lot of pressure on the creators shaping our fictional worlds, for they are the first to the front lines in a war of ideas competing to define what the future of our world could and should look like.
Most of our stories about the future are largely dystopian. I understand how important the backdrop of an oppressive regime can be in creating an antagonist you love to hate, or how an experiment gone wrong can set up a hero’s redemption and a captivating plot arc, but I still find myself yearning for a different take on what our future could look like. Are we so sure that our path leads to dystopia that we can’t even explore alternative options, even in our imaginations?
I’m not trying to tell people what they should or should not create. In fact, I believe that our freedom to do so is a liberty that should be fought for at all costs. What I am asking, however, is why we as humans have a tendency to explore only the darkest visions of our future in the stories we tell ourselves. As fun as it is to dream up a techno-dystopian future, I’d bet that most of us would probably prefer not to live in a world that is oppressed, dangerous, and for some reason always raining.
I believe that, if we can manifest more visions of the future based not in what we are afraid of, but in what we are hopeful for, we’ll be surprised with what we accomplish and who we can inspire.