For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.
Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”
This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.
(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)
That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.
Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.
The results are in. It failed.
Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.
Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.
It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning physics discoveries from the 1990s and 2000s because there aren’t enough of them.
Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.
What went wrong?
Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?
It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one such study reviewers caught 30% of the major flaws, in another they caught 25%, and in a third they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.
In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with a paper about dishonesty that clearly has fake data (ironic), with researchers who have published dozens or even hundreds of fraudulent papers, and with plenty of other debacles.
Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.
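To make that spreadsheet failure mode concrete, here is a minimal sketch (my own illustration, not anything from the genetics survey) of how you might scan a gene-symbol column for values that look like Excel date conversions, with SEPT2 silently becoming “2-Sep” and MARCH1 becoming “1-Mar”:

```python
import re

# Minimal illustration (not from any of the papers above): flag gene symbols
# that a spreadsheet may have silently converted into dates. The exact formats
# depend on locale and Excel version, so this pattern is illustrative, not
# exhaustive.
MONTHS = "Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec"
DATE_LIKE = re.compile(
    rf"^(\d{{1,2}}-({MONTHS})|({MONTHS})-\d{{1,2}}|\d{{1,2}}/\d{{1,2}}/\d{{2,4}})$",
    re.IGNORECASE,
)

def flag_suspicious_symbols(symbols):
    """Return the entries in a 'gene symbol' column that look like dates."""
    return [s for s in symbols if DATE_LIKE.match(str(s).strip())]

# Two corrupted entries hiding in an otherwise normal gene list:
column = ["TP53", "2-Sep", "BRCA1", "1-Mar", "MYC"]
print(flag_suspicious_symbols(column))  # ['2-Sep', '1-Mar']
```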
(When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)
The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶
Here’s another way that we can test whether peer review worked: did it actually earn scientists’ trust?
Scientists often say they take peer review very seriously. But people say lots of things they don’t mean, like “It’s great to e-meet you” and “I’ll never leave you, Adam.” If you look at what scientists actually do, it’s clear they don’t think peer review really matters.
First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.
Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.
And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”
Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner:
I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.
It’s easy to imagine how things could be better—my friend Ethan and I wrote a whole paper on it—but that doesn’t mean it’s easy to make things better. My complaints about peer review were a bit like looking at the ~35,000 Americans who die in car crashes every year and saying “people shouldn’t crash their cars so much.” Okay, but how?
Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them. Maybe we can fix some things on the margins, but remember that right now we’re publishing papers that use capital T’s instead of error bars, so we’ve got a long, long way to go.
What if we made peer review way stricter? That might sound great, but it would make lots of other problems with peer review way worse.
For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, “Help! A mysterious number is persecuting me,” and now they begin like, “Humans have been said, at various times and places, to exist, and even to have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smergdorf & Blugensnout, 1978; Stikkiwikket, 2002; von Fraud et al., 2018b)”.
This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he. “Whenever someone tells me they loved my paper,” he said, “I say thank you, even though I know they didn’t read it.” Stricter peer review would mean even more boring papers, which means even fewer people would read them.
Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won’t count for anything unless peer reviewers like them makes you worse at thinking. It’s like being a teenager again: before you do anything, you ask yourself, “BUT WILL PEOPLE THINK I’M COOL?” When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything’s pretty much perfect right now, we need revolutionary ideas real bad.
On the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the ~4.7 million articles they publish every year. Good luck!
Peer review doesn’t work and there’s probably no way to fix it. But a little bit of vetting is better than none at all, right?
I say: no way.
Imagine you discover that the Food and Drug Administration’s method of “inspecting” beef is just sending some guy (“Gary”) around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says “INSPECTED BY THE FDA.” You’d be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he’s going to miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.
That’s what our current system of peer review does, and it’s dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?
If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: “NOBODY HAS REALLY CHECKED WHETHER THIS PAPER IS TRUE OR NOT. IT MIGHT BE MADE UP, FOR ALL WE KNOW.” That would at least give people the appropriate level of confidence.
Why did peer review seem so reasonable in the first place?
I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.
But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.
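To see why the distinction matters, here is a toy simulation (my own sketch, with arbitrary made-up numbers, not anything from the essay or its sources). If what matters is the worst idea in circulation, the weak-link case, then filtering out the bottom half helps a lot; if what matters is the best idea, the strong-link case, filtering barely moves the needle:

```python
import random

random.seed(0)

def simulate(gatekeep, trials=10_000, ideas_per_trial=100):
    """Toy model: idea quality is a random draw; gatekeeping discards the worst half."""
    weak_total, strong_total = 0.0, 0.0
    for _ in range(trials):
        ideas = [random.gauss(0, 1) for _ in range(ideas_per_trial)]
        if gatekeep:
            ideas = sorted(ideas)[ideas_per_trial // 2:]  # reject the bottom half
        weak_total += min(ideas)    # weak-link: the worst surviving idea sets the level
        strong_total += max(ideas)  # strong-link: the best idea sets the level
    return weak_total / trials, strong_total / trials

for gatekeep in (False, True):
    weak, strong = simulate(gatekeep)
    print(f"gatekeeping={gatekeep}: weak-link outcome {weak:+.2f}, "
          f"strong-link outcome {strong:+.2f}")
```

In this toy world, gatekeeping lifts the weak-link outcome from roughly −2.5 to about 0, while the strong-link outcome stays around +2.5 either way, which is the whole argument in miniature, on the assumption that rejection only ever trims the bottom of the distribution.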
If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.
Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.
Nobody was in charge of our peer review experiment, which means nobody has the responsibility of saying when it’s over. Seeing no one else, I guess I’ll do it:
We’re done, everybody! Champagne all around! Great work, and congratulations. We tried peer review and it didn’t work.
Honestly, I’m so relieved. That system sucked! Waiting months just to hear that an editor didn’t think your paper deserved to be reviewed? Reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe? Spending a whole day emailing a journal begging them to let you use the word “years” instead of always abbreviating it to “y” for no reason (this literally happened to me)? We never have to do any of that ever again.
I know we all might be a little disappointed we wasted so much time, but there’s no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that’s okay—it seemed like a good idea at the time, and now we know it wasn’t. That’s science! It will always be important for scientists to comment on each other’s ideas, of course. It’s just this particular way of doing it that didn’t work.
What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.
Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.
Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?
I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.
For the late French intellectual, in an age of ecological crisis it was crucial to understand ourselves as rooted beings.
Adam Tooze
17 October 2022
As Bruno Latour confided to Le Monde earlier this year in one of his final interviews, philosophy was his great intellectual love. But across his long and immensely fertile intellectual life, Latour pursued that love by way of practically every other form of knowledge and pursuit – sociology, anthropology, science, history, environmentalism, political theory, the visual arts, theatre and fiction. In this way he was, above all, a philosopher of life in the comprehensive German sense of Lebensphilosophie.
Lebensphilosophie, whose leading exponents included figures such as Friedrich Nietzsche and Martin Heidegger, enjoyed its intellectual heyday between the 1870s and the 1930s. It was a project that sought to make sense of the dramatic development of modern science and the way it invaded every facet of life. In the process, it relentlessly questioned distinctions between the subject and knowledge and the foundations of metaphysics. It spilled over into the sociology of a Max Weber or the Marxism of a György Lukács. In France, writer-thinkers such as Charles Péguy or Henri Bergson might be counted as advocates of the new philosophy. Their heirs were the existentialists of the 1940s and 1950s. In the Anglophone world, one might think of the American pragmatists, William James and John Dewey, the Bloomsbury group and John Maynard Keynes.
A century later, the project of a “philosophy of life” acquired new urgency for Latour in an age of ecological crisis when it became crucial to understand ourselves not as free-floating knowing and producing subjects, but as rooted, or “landed”, beings living alongside others with all the limits, entanglements and potentials that entailed.
The heretical positions on the status of scientific knowledge for which Latour became notorious in some quarters are best understood as attempts to place knowledge and truth claims back in the midst of life. In a 2004 essay entitled “How to Talk About the Body?” he imagined a dialogue between a knowing subject as imagined by a naive epistemology and a Latourian subject:
“‘Ah’, sighs the traditional subject [as imagined by simplistic epistemologies], ‘if only I could extract myself from this narrow-minded body and roam through the cosmos, unfettered by any instrument, I would see the world as it is, without words, without models, without controversies, silent and contemplative’; ‘Really?’ replies the articulated body [the Latourian body which recognises its relationship to the world and knowledge about it as active and relational] with some benign surprise, ‘why do you wish to be dead? For myself, I want to be alive and thus I want more words, more controversies, more artificial settings, more instruments, so as to become sensitive to even more differences. My kingdom for a more embodied body!’”
The classical subject-object distinction traps the knowing subject in a disembodied, unworldly position that is, in fact, tantamount to death. As Latour wrote in a brilliant passage in the same essay on the training of noses, the expert smell-testers who gauge perfume, or tea or wine: “A direct and unmediated access to the primary qualities of odours could only be detected by a bodiless nose.” But what kind of image of knowledge is this? “[T]he opposite of embodied is dead, not omniscient.”
For a Burgundian – Latour was born in 1947 into a storied family of wine négociants in Beaune – this was an obvious but profound truth. To really know something, the way a good Burgundian knows wine, means not to float above the world, but to be a porous part of it, inhaling, ingesting fermentation and the chemical elements of the terroir, the irreducibly specific terrain.
For Latour, claims to meaningful knowledge, including scientific knowledge, were generated not by simple rules and procedures that could be endlessly repeated with guaranteed results, but through immersion in the world and its particularities. This implied an existential engagement: “Knowing interestingly is always a risky business,” he wrote, “which has to be started from scratch for any new proposition at hand.” What made for generative scientific discovery was not the tautological reproduction of a state of affairs by a “true” statement, but the “fecundity, productivity, richness, originality” of good articulations. Distinctions between true and false were, more often than not, banal. Only anxious epistemologists and methodologists of science worried about those. What mattered to actual scientific practice was whether a claim was “boring”, “repetitive”, “redundant”, “inelegant”, “simply accurate”, “sterile”.
If Latour was a sceptic when it came to naive claims of “detached” scientific knowledge, this applied doubly to naive sociologies of knowledge. Critical analyses of power, whether anti-capitalist, feminist or postcolonial, were productive and inspiring. But unless it was subjected symmetrically to the same critique that Latour applied to naive claims to scientific knowledge, social theory, even that which proclaimed itself to be critical theory, could all too easily become a snare. If the relationship of life and knowledge was the problem, then you could not cut through that Gordian knot by invoking sociology to explain physics. What was sociology, after all, but a form of organised social knowledge? For better or for worse, all you were doing in such an exercise was multiplying the articulations from one scientific discipline to another, and not necessarily in a helpful or illuminating direction.
In refusing the inherited authority of the 19th and early 20th-century canon of critical social science, Latour sought to create a form of knowledge more adequate to the late 20th and early 21st centuries. Latour thus belongs alongside Michel Foucault and Gilles Deleuze and Félix Guattari as one of the French thinkers who sought to escape the long shadow of Marxism, whether in its Hegelian (Sartre) or its anti-Hegelian (Althusser) varieties.
In place of an overly substantive notion of “the economy” or “society”, Latour proposed the looser conception of actor-networks. These are assemblages of tools, resources, researchers, means of registering concepts, and doing things that are not a priori defined in terms of a “mode of production” or a particular social order. Think of the lists of interconnected objects, systems and agents that have held our attention in the past few years: shipping containers, the flow of rainwater in Taiwan, giant freighters stuck sideways in the Suez Canal driven off course by unpredictable currents and side winds. Each of these supply chain crises has exposed actor-networks, of which we were previously oblivious. During such moments we are forced to ask: what is macro and what is micro? What is base and what is superstructure? These are Latourian questions.
One of the productive effects of seeing the world this way is that it becomes irresistibly obvious that all sorts of things have agency. This realisation is disturbing because it seems to downgrade the privilege of actual human existence and the social relations between people. But Latour’s point was never to diminish the human; it was to emphasise the complex array of forces and agencies that are entailed in our modern lives. Our existence, Latour tried to show, depends not on the simple structures that we imagined modernity to consist of – markets, states and so on – but on the multiplication of what he calls hybrids, “supply chains” in the widest sense of the word.
Latour was not a class militant. But that does not mean that he did not have a cause. His lifelong campaign was for modernity to come to consciousness of itself, to stop taking its own simplifications at face value, to recognise the confusions and hybridity that it creates and endlessly feeds off. His mission was to persuade us, as the title of his most widely read book has it, that We Have Never Been Modern (1991). The confusion of a world in which lipid bubbles, aerosols and face masks have occupied our minds for years is what Latour wanted to prepare us for.
What Latour sought to expose was the pervasive animism that surrounds us in the form of hybrid actor-networks, whose force and significance we consistently deny. “Hybrids are everywhere,” he said, “but the question is how do you tame them, or do you explicitly recognise their strengths, which is part of the animist power of objects?” What Latour diagnosed is that modernity, as part of its productive logic, systematically denies this animation of the material world. “Modernism is the mode of life that finds the soul with which matter would be endowed, the animation, shocking.”
This repression of hybrid, animated material reality is exposed in the often-racialised embarrassment of those who believe themselves modern when they encounter human civilisations that make no secret of their animist beliefs. It also accounts for the embarrassment triggered among true believers in modern science and its ideology by the revelations of the best histories of science, such as those by Simon Schaffer, to whom Latour owed a great debt. To Latour’s delight, Schaffer showed how Isaac Newton, in the first instance, saw in gravity the manifestation of the power of angels.
The modernist impulse is to dismiss such ideas as hangovers of an earlier religious world-view and to relegate African art to the anthropology museum. But at the risk of provocation and scandal, Latour’s response was the opposite. Rather than finishing the purification of modernity and expunging angels and animism from our view of the forces that move the world, he urged that we should open our ontology to encompass the giant dark matter of hybrid concepts and real networks that actually sustain modern life.
From the 1990s onwards this made Latour one of the foremost thinkers in the ecological movement. And once again he reached for the most radical and encompassing animist notion with which to frame that commitment – the Gaia concept, which postulates the existence of a single overarching living being, encompassing global ecology. This is an eerie, supernatural, non-modern idea. But for Latour, if we settle for any more mundane description of the ecological crisis – if we fit the environment into pre-existing cost-benefit models as economists often do – we fail to recognise the radicalism of the forces that we have unleashed. We fail to understand the peril that we are in: that Gaia will lose patience and toss us, snarling, off her back.
Latour’s emphatic embrace of life, plenitude and articulation did not mean that he shrank from finitude or death. Rather the opposite. It is only from a thoroughly immanent view that you truly feel the weight of life lived towards its end, and the mysterious and awesome finality that is death. It is only from an embrace of life as emphatic as Latour’s, that you truly register the encroachment of deadening forces of the mind and the body. For Latour, life and death were intertwined by the effort of those left behind to make sense of death, by every means at their disposal, sometimes at very long distance.
In September 1976 the body of Ramesses II, the third pharaoh of the 19th Dynasty of Egypt, was flown to Paris. He was welcomed with the full military honours appropriate for a great ruler, and then his body was whisked to the laboratory to be subject to medical-forensic examination. For Latour this fantastic juxtaposition of the ancient and the modern was an irresistible provocation. The naive position was that the scientists discovered that Ramesses died of tuberculosis 3,000 years ago. He was also, a racially minded police forensic scientist claimed, most likely a redhead. For Latour, the question was more basic. How can we debate claims made self-confidently about a death that took place thousands of years ago? We were not there. There was no modern medical science then. When Ramesses ceased to live, TB was not even a “thing”. It was not until 1882 that Robert Koch in Berlin identified the bacillus. And even then, no one could have made any sensible claim about Ramesses. Making the naive, apparently matter-of-fact claim – that Ramesses died of TB in 1213 BC – in fact involves giant leaps of the imagination.
What we do know and can debate are what Latour would call “articulations”. We know that as a result of the intervention of the French president Valéry Giscard D’Estaing the Egyptian authorities were prevailed upon to allow the decaying mummy to be flown to Paris for preservation. We know that in Paris, what was left of the body was enrolled in modern technoscientific systems and testing procedures leading us to venture hypotheses about the cause of death in the distant past. Every single one of those “articulations” can be tested, probed and thereby multiplied. Entire bodies of thought can be built on different hypotheses about the corpse. So, Latour maintained, rather than those who assertively claim to know what actually happened 3,000 years ago, the journalist who declared vertiginously that Ramesses had (finally) died of TB in 1976 came closer to the truth in registering both the gulf that separates us from an event millennia in the past and the radical historical immanence of our current diagnosis. In his effort to shake us out of the complacent framework of certainty that modernity had created around us, counter-intuitive provocations of this kind were part of Latour’s method.
Unlike Ramesses’ cause of death, Bruno Latour’s was well mapped. In the 21st century, a cancer diagnosis has immediate and drastic implications. It enrols you as a patient in the machinery of the medical-industrial complex. Among all the hybrids that modern societies have created, the medical apparatus is one of the most complex. It grows ever larger and imposes its urgency in a relentless and merciless fashion. If you take your critical vantage point from an early 20th-century theorist of alienation, like Lukács or Weber for instance, it is tempting to think of this technoscientific medical apparatus as a steel-hard cage that relentlessly objectifies its patients, as bodies and cases. But for Latour, this again falls into a modernist trap. To start from the premise that objectification is actually achieved is to misunderstand and to grant too much. “Reductionism is not a sin for which scientists should make amends, but a dream precisely as unreachable as being alive and having no body. Even the hospital is not able to reduce the patient to a ‘mere object’.”
Rather than reducing us, modern medicalisation multiplies us. “When you enter into contact with hospitals, your ‘rich subjective personality’ is not reduced to a mere package of objective meat: on the contrary, you are now learning to be affected by masses of agencies hitherto unknown not only to you, but also to doctors, nurses, administration, biologists, researchers who add to your poor inarticulate body complete sets of new instruments.” The body becomes a site of a profuse multiplicity: “How can you contain so much diversity, so many cells, so many microbes, so many organs, all folded in such a way that ‘the many act as one’, as [Alfred North] Whitehead said? No subjectivity, no introspection, no native feeling can be any match for the fabulous proliferation of affects and effects that a body learns when being processed by a hospital… Far from being less, you become more.”
It’s a brave image. Perhaps it was one that sustained Latour as the cancer and the agencies deployed to fight it laid waste to his flesh. Not for nothing do people describe the illness as a battle. Like a war, it can go on for years.
Latour liked military images. Perhaps because they better captured his vision of history, as mysterious, opaque, complex and contingent. Military history is one area of the modern world in which even the most high-minded analysts end up talking about tanks, bridges, rivers, Himars, Javelins and the fog of war. In the end, it is often for want of nails that battles are lost. The original French title of Latour’s famous book on the 19th-century French microbiologist Louis Pasteur – Pasteur: guerre et paix des microbes suivi de Irréductions – paid homage to Tolstoy. In the English translation that reference was lost: The Pasteurization of France (1988) replaces the French title’s nod to War and Peace with ugly sociologese.
Latour’s own life force was strong. In his apartment on Rue Danton, Paris, with the charred remains of Notre-Dame in the background, he shared with visitors from around the world wines from vineyards planted in response to climate change. Covid lockdowns left him impatient. As soon as global traffic resumed, in 2021 he was assisting in the curation of the Taipei biennial. Latour’s final book, After Lockdown: A Metamorphosis, appeared in English in 2021. It carries his voice into the present, inviting us to imagine ourselves, in an inversion of Kafka’s fable, as happy termites emerging from the lockdown on six hairy legs. “With your antennae, your articulations, your emanations, your waste matter, your mandibles, your prostheses, you may at last be becoming a human being!” No longer ill at ease, “Nothing is alien to you anymore; you’re no longer alone; you quietly digest a few molecules of whatever reaches your intestines, after having passed through the metabolism of hundreds of millions of relatives, allies, compatriots and competitors.”
As he aged, Latour became more, not less, radical. He was often dismissed on the left for his scepticism about classical critical social theory, but the ecological turn made Latour into nothing less than an eco-warrior. His cause was the overturning of the dream world that systematically failed to recognise or grasp the forces unleashed by the modernist apparatus of production and cognition. We needed to come down to Earth, to land. Only then could we begin the hard work, with other actors, of arriving at a sustainable modus vivendi. The urgency was that of war and his mobilisation was total. The range of projects that he spawned in recent decades – artistic, political, intellectual – was dizzying. All of them aimed to find new political forms, new parliaments, new articulations.
Unlike many commentators and politicians, Latour did not respond to populism, and specifically to the gilets jaunes protests of 2018, by retreating to higher levels of technocracy; instead he instigated a collective project to compile cahiers de doléances – books of complaint – like those assembled before the French Revolution of 1789. The aim was to enrol people from all walks of life in defining what they need to live and what threatens their livelihood.
Part of the project involved an interactive theatrical exercise enacted by Latour with the architect and performance-art impresario Soheil Hajmirbaba. In a kind of ritual game, the participants arranged themselves and the forces enabling and threatening their lives – ranging from sea level rise to the increased prices for diesel – on a circular stage marked out with a compass. It was, as Latour described it, “like a children’s game, light-hearted and a lot of fun. And yet, when you get near the middle, everyone gets a bit nervous… The centre of the crucible, where I timidly put my feet, is the exact intersection of a trajectory – and I’m not in the habit of thinking of myself as a vector of a trajectory – which goes from the past, all that I’ve benefited from so as to exist, to grow, sometimes without even realising it, on which I unconsciously count and which may well stop with me, through my fault, which won’t go towards the future anymore, because of all that threatens my conditions of existence, of which I was also unaware.”
“The amazing result of this little enactment,” he continued, “is that you’re soon surrounded by a small assembly, which nonetheless represents your most personal situation, in front of the other participants. The more attachments you list, the more clearly you are defined. The more precise the description, the more the stage fills up!… A woman in the group sums it up in one phrase: ‘I’m repopulated!’”
Thus, Latour reinvented the role of the engaged French intellectual for the 21st century. And in doing so he forced the follow-on question. Was he perhaps the last of his kind? Who comes after him? As far as intellectual standing is concerned, Latour would have been impatient with the question. He was too preoccupied with new problems and projects, too enthused by the networks of collaborators, young and old whose work he drew on and that he helped to energise. But in a more general sense the question of succession haunted him. That, after all, is the most basic issue posed by the ecological crisis. What comes after us? What is our responsibility to the continuity of life?
In his effort to enact the motion of coming down to Earth, Latour faced the question head on. “With my feet on the consortium’s compass, I consult myself: in terms of my minuscule actions, do I enhance or do I stifle the lives of those I’ve benefited from till now?” Asking that question, never content with complacent or self-satisfied answers, during the night of 8-9 October 2022, Bruno Latour died aged 75 in Paris, of pancreatic cancer.
We are suffering through a pandemic of lies — or so we hear from leading voices in media, politics, and academia. Our culture is infected by a disease that has many names: fake news, post-truth, misinformation, disinformation, mal-information, anti-science. The affliction, we are told, is a perversion of the proper role of knowledge in a healthy information society.
What is to be done? To restore truth, we need strategies to “get the facts straight.” For example, we need better “science communication,” “independent fact-checking,” and a relentless commitment to exposing and countering falsehoods. This is why the Washington Post fastidiously counted 30,573 “false or misleading claims” by President Trump during his four years in office. Facebook, meanwhile, partners with eighty organizations worldwide to help it flag falsehoods and inform users of the facts. And some disinformation experts recently suggested in the New York Times that the Biden administration should appoint a “reality czar,” a central authority tasked with countering conspiracy theories about Covid and election fraud, who “could become the tip of the spear for the federal government’s response to the reality crisis.”
Such efforts reflect the view that untruth is a plague on our information society, one that can and must be cured. If we pay enough responsible, objective attention to distinguishing what is true from what is not, and thus excise misinformation from the body politic, people can be kept safe from falsehood. Put another way, it is an implicitly Edenic belief in the original purity of the information society, a state we have lapsed from but can yet return to, by the grace of fact-checkers.
We beg to differ. Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.
This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.
Truth, New and Improved
The emergence of widespread challenges to the control of information by mainstream social institutions developed in three phases.
First, new technologies of mass communication in the twentieth century — radio, television, and significant improvements in printing, further empowered by new social science methods — enabled the rise of mass-market advertising, which quickly became an essential tool for success in the marketplace. Philosophers like Max Horkheimer and Theodor Adorno were bewildered by a world where, thanks to these new forms of communication, unabashed lies in the interest of selling products could become not just an art but an industry.
The rise of mass marketing created the cultural substrate for the so-called post-truth world we live in now. It normalized the application of hyperbole, superlatives, and untestable claims of superiority to the rhetoric of everyday commerce. What started out as merely a way to sell new and improved soap powder and automobiles amounts today to a rhetorical infrastructure of hype that infects every corner of culture: the way people promote their careers, universities their reputations, governments their programs, and scientists the importance of their latest findings. Whether we’re listening to a food corporation claim that its oatmeal will keep your heart healthy or a university press office herald a new study that will upend everything we know, radical skepticism would seem to be the rational stance for information consumers.
Politics, Scientized
In a second, partly overlapping phase in the twentieth century, science underwent a massive expansion of its role into the domain of public affairs, and thus into highly contestable subject matters. Spurred by a wealth of new instruments for measuring the world and techniques for analyzing the resulting data, policies on agriculture, health, education, poverty, national security, the environment and much more became subject to new types of scientific investigation. As never before, science became part of the language of policymaking, and scientists became advocates for particular policies.
The dissolving boundary between science and politics was on full display by 1958, when the chemist Linus Pauling and physicist Edward Teller debated the risks of nuclear weapons testing on a U.S. television broadcast, a spectacle that mixed scientific claims about fallout risks with theories of international affairs and assertions of personal moral conviction. The debate presaged a radical transformation of science and its social role. Where science was once a rarefied, elite practice largely isolated from society, scientific experts were now mobilized in increasing numbers to form and inform politics and policymaking. Of course, society had long been shaped, sometimes profoundly, by scientific advances. But in the second half of the twentieth century, science programs started to take on a rapidly expanding portfolio of politically divisive issues: determining the cancer-causing potential of food additives, pesticides, and tobacco; devising strategies for the U.S. government in its nuclear arms race against the Soviet Union; informing guidelines for diet, nutrition, and education; predicting future energy supplies, food supplies, and population growth; designing urban renewal programs; choosing nuclear waste disposal sites; and on and on.
Philosopher-mathematicians Silvio Funtowicz and Jerome Ravetz recognized in 1993 that a new kind of science was emerging, which they termed “post-normal science.” This kind of science was inherently contestable, both because it dealt with the irreducible uncertainties of complex and messy problems at the intersection of nature and society, and because it was being used for making decisions that were themselves value-laden and contested. Questions that may sound straightforward, such as “Should women in their forties get regular mammograms?” or “Will genetically modified crops and livestock make food more affordable?” or “Do the benefits of decarbonizing our energy production outweigh the costs?” became the focus of intractable and never-ending scientific and political disputes.
This situation remained reasonably manageable through the 1990s, because science communication was still largely controlled by powerful institutions: governments, corporations, and universities. Even if these institutions were sometimes fiercely at odds, all had a shared interest in maintaining the idea of a unitary science that provided universal truths upon which rational action should be based. Debates between experts may have raged — often without end — but one could still defend the claim that the search for truth was a coherent activity carried out by special experts working in pertinent social institutions, and that the truths emerging from their work would be recognizable and agreed-upon when finally they were determined. Few questioned the fundamental notion that science was necessary and authoritative for determining good policy choices across a wide array of social concerns. The imperative remained to find facts that could inform action — a basic tenet of Enlightenment rationality.
Science, Democratized
The rise of the Internet and social media marks the third phase of the story, and it has now rendered thoroughly implausible any institutional monopoly on factual claims. As we are continuing to see with Covid, the public has instantly available to it a nearly inexhaustible supply of competing and contradictory claims, made by credentialed experts associated with august institutions, about everything from mask efficacy to appropriate social distancing and school closure policies. And many of the targeted consumers of these claims are already conditioned to be highly skeptical of the information they receive from mainstream media.
Today’s information environment certainly invites mischievous seeding of known lies into public discourse. But bad actors are not the most important part of the story. Institutions can no longer maintain their old stance of authoritative certainty about information — the stance they need to justify their actions, or to establish a convincing dividing line between true news and fake news. Claims of disinterest by experts acting on behalf of these institutions are no longer plausible. People are free to decide what information, and in which experts, they want to believe. The Covid lab-leak hypothesis was fake news until that news itself became fake. Fact-checking organizations are themselves now subject to accusations of bias: Recently, Facebook flagged as “false” a story in the esteemed British Medical Journal about a shoddy Covid vaccine trial, and the editors of the journal in turn called Facebook’s fact-checking “inaccurate, incompetent and irresponsible.”
No political system exists without its share of lies, obfuscation, and fake news, as Plato and Machiavelli taught. Yet even those thinkers would be puzzled by the immense power of modern technologies to generate stories. Ideas have become a battlefield, and we are all getting lost in the fog of the truth wars. When everything can seem plausible to someone, the term “fake news” loses its meaning.
The celebrated expedient that an aristocracy has the right and the mission to offer “noble lies” to the citizens for their own good thus looks increasingly impotent. In October 2020, U.S. National Institutes of Health director Francis Collins, a veritable aristocrat of the scientific establishment, sought to delegitimize the recently released Great Barrington Declaration. Crafted by a group he referred to as “fringe epidemiologists” (they were from Harvard, Stanford, and Oxford), the declaration questioned the mainstream lockdown approach to the pandemic, including school and business closures. “There needs to be a quick and devastating published take down,” Collins wrote in an email to fellow aristocrat Anthony Fauci.
But we now live in a moment where suppressing that kind of dissent has become impossible. By May 2021, that “fringe” became part of a new think tank, the Brownstone Institute, founded in reaction to what they describe as “the global crisis created by policy responses to the Covid-19 pandemic.” From this perspective, policies advanced by Collins and Fauci amounted to “a failed experiment in full social and economic control” reflecting “a willingness on the part of the public and officials to relinquish freedom and fundamental human rights in the name of managing a public health crisis.” The Brownstone Institute’s website is a veritable one-stop Internet shopping haven for anyone looking for well-credentialed expert opinions that counter more mainstream expert opinions on Covid.
Similarly, claims that the science around climate change is “settled,” and that therefore the world must collectively work to decarbonize the global energy system by 2050, have engendered a counter-industry of dissenting experts, organizations, and websites.
At this point, one might be forgiven for speculating that the public is being fed such a heavy diet of Covid and climate change precisely because these are problems that have been framed politically as amenable to a scientific treatment. But it seems that the more the authorities insist on the factiness of facts, the more suspect these become to larger and larger portions of the populace.
A Scientific Reformation
The introduction of the printing press in the mid-fifteenth century triggered a revolution in which the Church lost its monopoly on truth. Millions of books were printed in just a few decades after Gutenberg’s innovation. Some people held the printing press responsible for stoking collective economic manias and speculative bubbles. It allowed the widespread distribution of astrological almanacs in Europe, which fed popular hysteria around prophecies of impending doom. And it allowed dissemination of the Malleus Maleficarum, an influential treatise on demonology that contributed to rising persecution of witches.
Though the printing press allowed sanctioned ideas to spread like never before, it also allowed the spread of serious but hitherto suppressed ideas that threatened the legitimacy of the Church. A range of alternative philosophical, moral, and ideological perspectives on Christianity became newly accessible to ever-growing audiences. So did exposés of institutional corruption, such as the practice of indulgences — a market for buying one’s way out of purgatory that earned the Church vast amounts of money. Martin Luther, in particular, understood and exploited the power of the printing press in pursuing his attacks on the Church — one recent historical account, Andrew Pettegree’s book Brand Luther, portrays him as the first mass-market communicator.
“Beginning of the Reformation”: Martin Luther directs the posting of his Ninety-five Theses, protesting the practice of the sale of indulgences, to the door of the castle church in Wittenberg on October 31, 1517. W. Baron von Löwenstern, 1830 / Library of Congress
To a religious observer living through the beginning of the Reformation, the proliferation of printed material must have appeared unsettling and dangerous: the end of an era, and the beginning of a threatening period of heterodoxy, heresies, and confusion. A person exposed to the rapid, unchecked dispersion of printed matter in the fifteenth century might have called many such publications fake news. Today many would say that it was the Reformation itself that did away with fake news, with the false orthodoxies of a corrupted Church, opening up a competition over ideas that became the foundation of the modern world. Whatever the case, this new world was neither neat nor peaceful, with the religious wars resulting from the Church’s loss of authority over truth continuing until the mid-seventeenth century.
Like the printing press in the fifteenth century, the Internet in the twenty-first has radically transformed and disrupted conventional modes of communication, destroyed the existing structure of authority over truth claims, and opened the door to a period of intense and tumultuous change.
Those who lament the death of truth should instead acknowledge the end of a monopoly system. Science was the pillar of modernity, the new privileged lens to interpret the real world and show a pathway to collective good. Science was not just an ideal but the basis for a regime, a monopoly system. Within this regime, truth was legitimized in particular private and public institutions, especially government agencies, universities, and corporations; it was interpreted and communicated by particular leaders of the scientific community, such as government science advisors, Nobel Prize winners, and the heads of learned societies; it was translated for and delivered to the laity in a wide variety of public and political contexts; it was presumed to point directly toward right action; and it was fetishized by a culture that saw it as single and unitary, something that was delivered by science and could be divorced from the contexts in which it emerged.
Such unitary truths included above all the insistence that the advance of science and technology would guarantee progress and prosperity for everyone — not unlike how the Church’s salvific authority could guarantee a negotiated process for reducing one’s punishment for sins. To achieve this modern paradise, certain subsidiary truths lent support. One, for example, held that economic rationality would illuminate the path to universal betterment, driven by the principle of comparative advantage and the harmony of globalized free markets. Another subsidiary truth expressed the social cost of carbon emissions with absolute precision to the dollar per ton, with the accompanying requirement that humans must control the global climate to the tenth of a degree Celsius. These ideas are self-evidently political, requiring monopolistic control of truth to implement their imputed agendas.
An easy prophecy here is that wars over scientific truth will intensify, as did wars over religious truth after the printing press. Those wars ended with the Peace of Westphalia in 1648, followed, eventually, by the creation of a radically new system of governance, the nation-state, and the collapse of the central authority of the Catholic Church. Will the loss of science’s monopoly over truth lead to political chaos and even bloodshed? The answer largely depends upon the resilience of democratic institutions, and their ability to resist the authoritarian drift that seems to be a consequence of crises such as Covid and climate change, to which simple solutions, and simple truths, do not pertain.
Both the Church and the Protestants enthusiastically adopted the printing press. The Church tried to control it through an index of forbidden books. Protestant print shops adopted a more liberal cultural orientation, one that allowed for competition among diverse ideas about how to express and pursue faith. Today we see a similar dynamic. Mainstream, elite science institutions use the Internet to try to preserve their monopoly over which truths get followed where, but the Internet’s bottom-up, distributed architecture appears to give a decisive advantage to dissenters and their diverse ideologies and perspectives.
Holding on to the idea that science always draws clear boundaries between the true and the false will continue to appeal strongly to many sincere and concerned people. But if, as in the fifteenth century, we are now indeed experiencing a tumultuous transition to a new world of communication, what we may need is a different cultural orientation toward science and technology. The character of this new orientation is only now beginning to emerge, but it will above all have to accommodate the over-abundance of competing truths in human affairs, and create new opportunities for people to forge collective meaning as they seek to manage the complex crises of our day.
The relationship between genuine knowledge and fringe doctrines is closer than many want to accept, says a historian who specializes in the history of science
For scientific institutions, these practices and movements fall into the category of “pseudosciences”: doctrines whose adherents consider their foundations scientific and who, from there, build a current of thought that drifts away from what the academic world normally accepts.
But how do we distinguish science from what merely passes itself off as science?
That task is far more complicated than it seems, according to Michael Gordin, a professor at Princeton University in the United States and a specialist in the history of science. Gordin is the author of the book On the Fringe: Where Science Meets Pseudoscience.
His book details how pseudosciences operate and how, in his view, they are an inevitable consequence of scientific progress.
In an interview with BBC News Mundo (the BBC’s Spanish-language service), Gordin describes the complex relationship between what is considered genuine science and what he calls fringe doctrines.
Michael Gordin, author of the book On the Fringe: Where Science Meets Pseudoscience
BBC News Mundo – You say there is no well-defined line separating science from pseudoscience, but science has a clear, testable method. Isn’t that a clear difference from pseudoscience?
Michael Gordin – It is commonly believed that science has a single method, but that isn’t true. Science has many methods. Geologists work very differently from theoretical physicists, and molecular biologists from neuroscientists. Some scientists work in the field, observing what happens. Others work in the laboratory, under controlled conditions. Others run simulations. In other words, science has many methods, and they are heterogeneous. Science is dynamic, and that dynamism makes the line hard to define. We can take a concrete example and say whether it is science or pseudoscience; with a concrete case it is easy.
The problem is that the line is not consistent: when you look at a larger number of cases, you find things that were once considered science and are now considered pseudoscience, such as astrology. And there are topics like continental drift, which was initially treated as a fringe theory and is now a basic theory of geophysics.
Almost everything we now call pseudoscience was once science that was refuted over time, and those who go on supporting it are considered cranks or charlatans. In other words, the definition of what counts as science or pseudoscience shifts over time. That is one reason the judgment is so difficult.
Once considered a science, astrology is now listed among the pseudosciences, or fringe doctrines, in Michael Gordin’s terms
BBC News Mundo – But some things do not change over time. For example, 2 + 2 has always equaled 4. Doesn’t that mean science rests on principles that leave no room for interpretation?
Gordin – Well, that’s not necessarily so. Two UFOs plus two UFOs is four UFOs.
It is interesting that you chose mathematics, which in fact is not an empirical science, because it does not refer to the external world. It is a set of rules we use to determine certain things.
One of the reasons the distinction is so complicated is that fringe doctrines watch what counts as established science and adapt their arguments and techniques to it.
One example is “scientific creationism,” which holds that the world was created in seven days, 6,000 years ago. There are scientific-creationist publications that include mathematical charts of the decay rates of various isotopes, trying to prove that the Earth is only 6,000 years old.
It would be convenient to say that using mathematics and presenting charts is what makes something science, but the reality is that almost all fringe doctrines use mathematics in some way.
Scientists disagree about the kind of mathematics that is used. There are people, for example, who argue that the advanced mathematics used in string theory is no longer scientific, because it has lost empirical verification. It is high-level mathematics, done by PhDs from the best universities, yet there is an internal debate within science, among physicists, over whether it should count as science.
I am not saying everyone should be a creationist, but when quantum mechanics was first proposed, some people said: “this looks very strange,” “it doesn’t connect to measurements the way we think it should,” or “is this really science?”
In recent years, the idea that the Earth is flat has gained popularity among some groups
BBC News Mundo – So you are saying that pseudosciences, or fringe doctrines, have some value?
Gordin – The point is that many of the things we consider innovative come from the edges of orthodox knowledge.
What I am saying comes down to three points: first, there is no clear dividing line; second, understanding what falls on each side of the line requires understanding the context; and third, the normal workings of science produce fringe doctrines.
We cannot simply dismiss these doctrines, because they are inevitable. They are a by-product of how the sciences work.
BBC News Mundo – Does that mean we should be more tolerant of pseudosciences?
Gordin – Scientists, like everyone else, have limited time and energy and cannot investigate everything.
So any time spent refuting or denying the legitimacy of a fringe doctrine is time not spent doing science, and it may not even produce results.
People have been refuting scientific creationism for decades. They have been trying to debunk telepathy for even longer, and it is still with us. There are many kinds of fringe ideas. Some are highly politicized and can even harm public health or the environment. Those, in my view, are the ones that deserve our attention and resources, whether to eliminate them or at least to explain why they are wrong.
But I don’t think other ideas, like belief in UFOs, are especially dangerous. I don’t think even creationism is as dangerous as being anti-vaccine, or believing that climate change is a hoax.
We should treat pseudoscience as something inevitable and approach it pragmatically. We have a limited amount of resources and need to decide which doctrines can cause harm and how to confront them.
Should we simply try to reduce the harm they can cause? That is the case with compulsory vaccination, which aims to prevent harm without necessarily convincing opponents that they are mistaken. Should we try to persuade them that they are wrong? That has to be examined case by case.
In various parts of the world there are groups opposed to the covid-19 vaccines
BBC News Mundo – How, then, should we deal with pseudosciences?
Gordin – One possibility is to recognize that these are people who are interested in science.
A flat-Earther, for example, is a person interested in the shape of the Earth. That means it is someone who was curious enough to investigate nature and, for some reason, went in the wrong direction.
You can then ask why that happened. You can approach the person and say: “if you don’t accept this evidence, what kind of evidence would you accept?” or “show me your evidence and let’s talk.”
That is something we could do, but is it worth doing? It is a doctrine I don’t consider dangerous. It would be a problem if every government in the world thought the Earth was flat, but I don’t see that risk.
The contemporary version of flat-Earthism emerged about 15 years ago. I don’t think scholars yet understand very well how it happened, or why it spread so fast.
Another thing we can do is not necessarily try to persuade believers that they are wrong, because they may not accept it, but try to understand how the movement arose and spread. That can guide us in confronting more serious threats.
People who believe in fringe doctrines often borrow elements of established science to draw their conclusions
BBC News Mundo – More serious threats such as the anti-vaccine movement…
Gordin – Vaccines were invented in the 18th century, and there have always been people who opposed them, partly because every vaccine carries some risk, however small.
Over time, the way this was handled was to set up an insurance system that basically says: you must get the vaccine, but if you do and something goes wrong, we will compensate you for the harm.
I am sure the same will happen with the covid vaccine, although we do not yet know the full range of harms it might cause, or how serious they might be. But both the harms and the probability of their occurring appear to be very low.
As for anti-vaxxers who believe, for example, that the covid vaccine contains a chip, the only action that can be taken for the sake of public health is to make vaccination mandatory. That is how polio was eradicated in most of the world, even with opponents of the vaccine around.
BBC News Mundo – But making it mandatory could lead someone to say that science is being used for political or ideological purposes…
Gordin – I am sure that if the state imposes a mandatory vaccine, someone will say that. But it is not a matter of ideology. The state already mandates many things, and some vaccines are already mandatory.
And the state makes all kinds of scientific claims. Teaching creationism in schools is not allowed, for example, nor is research into human cloning. In other words, the state has intervened in scientific disputes many times, and it tries to do so according to the scientific consensus.
BBC News Mundo – People who embrace pseudoscience do so out of skepticism, which is precisely one of the core values of science. That’s a paradox, isn’t it?
Gordin – This is one of the reasons I believe there is no clear dividing line between science and pseudoscience. Skepticism is a tool we all use. The question is which subjects you are skeptical about, and what could convince you of a particular fact.
In the 19th century there was a great debate over whether atoms really existed. Today practically no scientist doubts their existence. That is how science works. The focus of skepticism moves around over time. When that skepticism is aimed at matters that have already been settled, it sometimes causes problems, but there are occasions when it is necessary.
The essence of Einstein’s theory of relativity is that the ether, the substance through which light waves supposedly traveled, does not exist. To get there, Einstein focused his skepticism on one fundamental postulate, but he did so while arguing that much of the rest of established knowledge could be preserved.
So skepticism must have a purpose. Being skeptical simply for its own sake is a process that produces no progress.
Skepticism is one of the basic principles of science
BBC News Mundo – Is it possible that, in the future, what we now consider science will be dismissed as pseudoscience?
Gordin – In the future there will be many doctrines that come to be considered pseudoscience, simply because there are many things we do not yet understand.
There is a great deal we do not understand about the brain or the environment. In the future, people will look at many of today’s theories and say they are wrong.
But it is not enough for a theory to be incorrect for it to count as pseudoscience. There must be people who believe it is correct even though the consensus says it is mistaken, and scientific institutions must consider it, for some reason, dangerous.
Many people think that mathematics is a human invention. To this way of thinking, mathematics is like a language: it may describe real things in the world, but it doesn’t “exist” outside the minds of the people who use it.
But the Pythagorean school of thought in ancient Greece held a different view. Its proponents believed reality is fundamentally mathematical.
More than 2,000 years later, philosophers and physicists are starting to take this idea seriously.
As I argue in a new paper, mathematics is an essential component of nature that gives structure to the physical world.
Honeybees and hexagons
Bees in hives produce hexagonal honeycomb. Why?
According to the “honeycomb conjecture” in mathematics, hexagons are the most efficient shape for tiling the plane. If you want to fully cover a surface using tiles of a uniform shape and size, while keeping the total length of the perimeter to a minimum, hexagons are the shape to use.
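To make the efficiency claim concrete, here is a minimal sketch (my own illustration, not from the article) comparing the three regular polygons that can tile the plane on their own. For a regular n-gon of unit area, the perimeter works out to 2√(n·tan(π/n)), and the hexagon comes out shortest.

```python
import math

def perimeter_of_unit_area_ngon(n: int) -> float:
    """Perimeter of a regular n-gon whose area equals 1.

    A regular n-gon with side s has area n*s**2 / (4*tan(pi/n)),
    so for unit area s = sqrt(4*tan(pi/n) / n) and the perimeter is n*s.
    """
    side = math.sqrt(4 * math.tan(math.pi / n) / n)
    return n * side

# The only regular polygons that tile the plane by themselves:
for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s}: perimeter for unit area = {perimeter_of_unit_area_ngon(n):.3f}")
# triangle: 4.559, square: 4.000, hexagon: 3.722 -- the hexagon uses the least wall.
```

Sharing walls between neighboring cells halves the total length of wax, but it does not change the ranking: hexagonal cells still need the least.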
The hexagonal pattern of honeycomb is the most efficient way to cover a space in identical tiles. Sam Baron, Author provided
Charles Darwin reasoned that bees have evolved to use this shape because it produces the largest cells to store honey for the smallest input of energy to produce wax.
The honeycomb conjecture was first proposed in ancient times, but was only proved in 1999 by mathematician Thomas Hales.
Cicadas and prime numbers
Here’s another example. There are two subspecies of North American periodical cicadas that live most of their lives in the ground. Then, every 13 or 17 years (depending on the subspecies), the cicadas emerge in great swarms for a period of around two weeks.
Why is it 13 and 17 years? Why not 12 and 14? Or 16 and 18?
One explanation appeals to the fact that 13 and 17 are prime numbers.
Some cicadas have evolved to emerge from the ground at intervals of a prime number of years, possibly to avoid predators with life cycles of different lengths. Michael Kropiewnicki / Pixels
Imagine the cicadas have a range of predators that also spend most of their lives in the ground. The cicadas need to come out of the ground when their predators are lying dormant.
Suppose there are predators with life cycles of 2, 3, 4, 5, 6, 7, 8 and 9 years. What is the best way to avoid them all?
Well, compare a 13-year life cycle and a 12-year life cycle. When a cicada with a 12-year life cycle comes out of the ground, the 2-year, 3-year and 4-year predators will also be out of the ground, because 2, 3 and 4 all divide evenly into 12.
When a cicada with a 13-year life cycle comes out of the ground, none of its predators will be out of the ground, because none of 2, 3, 4, 5, 6, 7, 8 or 9 divides evenly into 13. The same is true for 17.
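The divisibility argument is easy to check by brute force. Here is a minimal sketch (my own, using the hypothetical predator cycles of 2 to 9 years from the thought experiment above) that scores each candidate cicada cycle by the average number of predator types above ground at an emergence.

```python
from math import gcd

PREDATOR_CYCLES = range(2, 10)   # hypothetical predators surfacing every 2..9 years

def predators_met_per_emergence(cicada_cycle: int) -> float:
    """Average number of predator types above ground when the cicadas emerge.

    A p-year predator is up in years divisible by p, so it meets a brood that
    emerges every `cicada_cycle` years in gcd(cicada_cycle, p) of every p emergences.
    """
    return sum(gcd(cicada_cycle, p) / p for p in PREDATOR_CYCLES)

for cycle in range(10, 20):
    print(f"{cycle:2d}-year cicadas meet {predators_met_per_emergence(cycle):.2f} predator types per emergence")
# 12-year broods meet roughly 5.2 predator types per emergence on average;
# 13- and 17-year broods (prime cycles) meet only about 1.8.
```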
P1–P9 represent cycling predators. The number-line represents years. The highlighted gaps show how 13 and 17-year cicadas manage to avoid their predators. Sam Baron, Author provided
Once we start looking, it is easy to find other examples. From the shape of soap films, to gear design in engines, to the location and size of the gaps in the rings of Saturn, mathematics is everywhere.
If mathematics explains so many things we see around us, then it is unlikely that mathematics is something we’ve created. The alternative is that mathematical facts are discovered: not just by humans, but by insects, soap bubbles, combustion engines and planets.
What did Plato think?
But if we are discovering something, what is it?
The ancient Greek philosopher Plato had an answer. He thought mathematics describes objects that really exist.
For Plato, these objects included numbers and geometric shapes. Today, we might add more complicated mathematical objects such as groups, categories, functions, fields and rings to the list.
For Plato, numbers existed in a realm separate from the physical world. Geralt / Pixabay
Plato also maintained that mathematical objects exist outside of space and time. But such a view only deepens the mystery of how mathematics explains anything.
Explanation involves showing how one thing in the world depends on another. If mathematical objects exist in a realm apart from the world we live in, they don’t seem capable of relating to anything physical.
Enter Pythagoreanism
The ancient Pythagoreans agreed with Plato that mathematics describes a world of objects. But, unlike Plato, they didn’t think mathematical objects exist beyond space and time.
Instead, they believed physical reality is made of mathematical objects in the same way matter is made of atoms.
If reality is made of mathematical objects, it’s easy to see how mathematics might play a role in explaining the world around us.
Pythagorean pie: the world is made of mathematics plus matter. Sam Baron, Author provided
In the past decade, two physicists have mounted significant defences of the Pythagorean position: Swedish-US cosmologist Max Tegmark and Australian physicist-philosopher Jane McDonnell.
Tegmark argues reality just is one big mathematical object. If that seems weird, think about the idea that reality is a simulation. A simulation is a computer program, which is a kind of mathematical object.
McDonnell’s view is more radical. She thinks reality is made of mathematical objects and minds. Mathematics is how the Universe, which is conscious, comes to know itself.
I defend a different view: the world has two parts, mathematics and matter. Mathematics gives matter its form, and matter gives mathematics its substance.
Mathematical objects provide a structural framework for the physical world.
The future of mathematics
It makes sense that Pythagoreanism is being rediscovered in physics.
In the past century physics has become more and more mathematical, turning to seemingly abstract fields of inquiry such as group theory and differential geometry in an effort to explain the physical world.
As the boundary between physics and mathematics blurs, it becomes harder to say which parts of the world are physical and which are mathematical.
But it is strange that Pythagoreanism has been neglected by philosophers for so long.
I believe that is about to change. The time has arrived for a Pythagorean revolution, one that promises to radically alter our understanding of reality.
Is mathematics real? A viral TikTok video raises a legitimate question with exciting answers (The Conversation)
Daniel Mansfield – August 31, 2020 1.41am EDT
While filming herself getting ready for work recently, TikTok user @gracie.ham reached deep into the ancient foundations of mathematics and found an absolute gem of a question:
How could someone come up with a concept like algebra?
She also asked what the ancient Greek philosopher Pythagoras might have used mathematics for, and other questions that revolve around the age-old conundrum of whether mathematics is “real” or something humans just made up.
Many responded negatively to the post, but others — including mathematicians like me — found the questions quite insightful.
Is mathematics real?
Philosophers and mathematicians have been arguing over this for centuries. Some believe mathematics is universal; others consider it only as real as anything else humans have invented.
Thanks to @gracie.ham, Twitter users have now vigorously joined the debate.
For me, part of the answer lies in history.
From one perspective, mathematics is a universal language used to describe the world around us. For instance, two apples plus three apples is always five apples, regardless of your point of view.
But mathematics is also a language used by humans, so it is not independent of culture. History shows us that different cultures had their own understanding of mathematics.
Unfortunately, most of this ancient understanding is now lost. In just about every ancient culture, a few scattered texts are all that remain of their scientific knowledge.
However, there is one ancient culture that left behind an absolute abundance of texts.
Babylonian algebra
Buried in the deserts of modern Iraq, clay tablets from ancient Babylon have survived intact for about 4,000 years.
These tablets are slowly being translated and what we have learned so far is that the Babylonians were practical people who were highly numerate and knew how to solve sophisticated problems with numbers.
Their arithmetic was different from ours, though. They didn’t use zero or negative numbers. They even mapped out the motion of the planets without using calculus as we do.
Of particular importance for @gracie.ham’s question about the origins of algebra is that they knew that the numbers 3, 4 and 5 correspond to the lengths of the sides and diagonal of a rectangle. They also knew these numbers satisfied the fundamental relation 3² + 4² = 5² that ensures the sides are perpendicular.
No theorems were harmed (or used) in the construction of this rectangle.
The Babylonians did all this without modern algebraic concepts. We would express a more general version of the same idea using Pythagoras’ theorem: any right-angled triangle with sides of length a and b and hypotenuse c satisfies a² + b² = c².
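As a small illustration (mine, not the article's), the relation and its converse take one line each to check: if two sides and the diagonal at a corner satisfy a² + b² = c², the corner is a right angle.

```python
import math

# The Babylonian observation: 3, 4 and 5 satisfy 3**2 + 4**2 == 5**2.
assert 3**2 + 4**2 == 5**2

def corner_is_square(a: float, b: float, diagonal: float) -> bool:
    """Converse of Pythagoras' theorem: the angle between sides a and b is a
    right angle exactly when a**2 + b**2 equals the square of the diagonal."""
    return math.isclose(a**2 + b**2, diagonal**2)

print(corner_is_square(3, 4, 5))   # True  -- the 3-4-5 layout gives a square corner
print(corner_is_square(3, 4, 6))   # False -- the corner would be off
```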
The Babylonian perspective omits algebraic variables, theorems, axioms and proofs not because they were ignorant but because these ideas had not yet developed. In short, these social constructs began more than 1,000 years later, in ancient Greece. The Babylonians happily and productively did mathematics and solved problems without any of these relatively modern notions.
What was it all for?
@gracie.ham also asks how Pythagoras came up with his theorem. The short answer is: he didn’t.
Pythagoras of Samos (c. 570-495 BC) probably heard about the idea we now associate with his name while he was in Egypt. He may have been the person to introduce it to Greece, but we don’t really know.
Pythagoras didn’t use his theorem for anything practical. He was primarily interested in numerology and the mysticism of numbers, rather than the applications of mathematics.
Without modern tools, how do you make right angles just right? Ancient Hindu religious texts give instructions for making a rectangular fire altar using the 3-4-5 configuration with sides of length 3 and 4, and diagonal length 5. These measurements ensure that the altar has right angles in each corner.
In the 19th century, the German mathematician Leopold Kronecker said “God made the integers, all else is the work of man”. I agree with that sentiment, at least for the positive integers — the whole numbers we count with — because the Babylonians didn’t believe in zero or negative numbers.
Mathematics has been happening for a very, very long time. Long before ancient Greece and Pythagoras.
Is it real? Most cultures agree about some basics, like the positive integers and the 3-4-5 right triangle. Just about everything else in mathematics is determined by the society in which you live.
Sandra Harding: “Becoming an Accidental Ontologist: Overcoming Logical Positivism’s Antipathy to Metaphysics,” organized by Global Epistemologies and Ontologies (GEOS), February 11, 2021, 17:00–18:30 (CET)
Two new books on quantum theory could not, at first glance, seem more different. The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.”
Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe.
But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities.
So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve.
But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure.
With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing.
We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him.
Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix.
The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.”
Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor.
Then, one day, strange lights appear in a corner of the ceiling.
Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines.
Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order.
Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor.
In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor.
As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it?
The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history.
Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description.
From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry.
But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone.
Of course, this question is exactly what most non-experts want answered. For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting.
Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built.
Regaining that old feeling of disorientation now requires some extra steps.
As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe.
Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians.
Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes.
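As a concrete (and purely illustrative) rendering of that recipe, here is a minimal sketch for a two-state system: prepare a wave function, evolve it under the Schrödinger equation for an assumed Hamiltonian, and read off outcome probabilities as squared amplitudes. The Hamiltonian and the time value are arbitrary choices for demonstration, not anything taken from Carroll's book.

```python
import numpy as np

# 1. Prepare: a two-state system ("qubit") starting in the first basis state.
psi0 = np.array([1.0, 0.0], dtype=complex)

# 2. Evolve: the Schrödinger equation gives psi(t) = exp(-i*H*t/hbar) psi(0).
#    Take hbar = 1 and an illustrative Hamiltonian that couples the two states.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
t = 0.4
evals, evecs = np.linalg.eigh(H)                               # diagonalize H
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T  # exp(-iHt)
psi_t = U @ psi0

# 3. Predict: the recipe yields only probabilities for measurement outcomes,
#    the squared magnitudes of the amplitudes (the Born rule).
probs = np.abs(psi_t) ** 2
print(probs, probs.sum())   # ~[0.848, 0.152], summing to 1 (cos^2 t, sin^2 t)
```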
Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.”
Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future.
The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.”
Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, is merely its downstream effect. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the “Many Worlds” moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave.
The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.”
But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes.
Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here.
Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out.
Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise.
Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as Smolin stirringly puts it. Smolin is best known for his role as doomsayer about string theory — his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. But while his books all court drama and are never boring, that often comes at the expense of argumentative care.
Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can communicate information to each other instantaneously, unbounded by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive.
This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son. This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal:
I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me.
The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics.
Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection.
Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.”
Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes.
When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles.
Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes.
For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10⁸⁸ particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.”
Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well.
As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely.
There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood.
Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements.
David Kordahl is a graduate student in physics at Arizona State University. David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114-124.
When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.
One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.
The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.
This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.
Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.
The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.
Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.
The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.
Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.
Risk Compensation
One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.
The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.
But time and again, the numbers tell a different story: Even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures—people interested in staying safe are interested in staying safe—and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation—and its implicit assumptions—continues to haunt our approach, in part because there hasn’t been a reckoning with the initial missteps.
Rules in Place of Mechanisms and Intuitions
Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.
In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.
All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.
Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.
Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.
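To make the overdispersion point concrete, here is a minimal, illustrative simulation sketch. The mean of 2.5 secondary infections and the dispersion value of k = 0.1 are assumptions chosen purely for illustration, not figures reported in the studies discussed here; the point is the shape of the distribution, not the exact numbers.

```python
# Illustrative sketch only: compare evenly distributed transmission with
# overdispersed transmission at the same average number of secondary cases.
# mean_r = 2.5 and k = 0.1 are assumed values for illustration, not real data.
import numpy as np

rng = np.random.default_rng(0)
mean_r, k, n_cases = 2.5, 0.1, 100_000

# Evenly distributed offspring: everyone transmits at roughly the same rate.
poisson = rng.poisson(mean_r, n_cases)

# Overdispersed offspring: same mean, modeled with a negative binomial.
p = k / (k + mean_r)                       # success probability implied by mean and k
negbin = rng.negative_binomial(k, p, n_cases)

for name, draws in [("even (Poisson)", poisson), ("overdispersed", negbin)]:
    share_zero = (draws == 0).mean()                        # cases that infect nobody
    top_decile = np.sort(draws)[-n_cases // 10:].sum() / draws.sum()
    print(f"{name:>15}: {share_zero:.0%} infect no one; "
          f"top 10% of cases cause {top_decile:.0%} of onward infections")
```

Under these assumed parameters, the overdispersed scenario shows most cases infecting nobody while a small minority drive the bulk of onward transmission, which is exactly the pattern described above.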
After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.
Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.
Outdoors? It’s the opposite.
There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: The open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading—the biggest driver of the pandemic—appears to be an exclusively indoor phenomenon. I’ve been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.
And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.
We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.
Scolding and Shaming
Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.
Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?
On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.
While visible but low-risk activities attract the scolds, other actual risks—in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave—are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it’s almost as if we’ve “designed a public-health response most suitable for higher-income” groups and the “Twitter generation”—stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online—rather than provide the support and conditions necessary for more people to keep themselves safe.
And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.
Harm Reduction
Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex without protection.
As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, “When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want.”
Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor at the Boston University School of Medicine and Infectious Diseases, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.
Socializing is not a luxury—kids need to play with one another, and adults need to interact. “Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors” is not just a sensible message; it’s a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they’ll do it indoors, out of sight of the scolding.
And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.
The Balance Between Knowledge and Action
Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.
Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.
Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.
And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
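As a rough illustration of that logic, consider the sketch below; the counts are hypothetical and chosen only to show the mechanics, not drawn from any real study. A study of this kind does not “prove the treatment works.” It asks whether the observed data would be surprising if there were no relationship at all, and rejects that “null” scenario when the answer is yes.

```python
# Minimal sketch with hypothetical counts (not real study data) of what
# "rejecting the null hypothesis" means: we test whether the data are
# compatible with "no relationship," rather than proving an effect directly.
from scipy.stats import fisher_exact

#                  infected, not infected
exposed_group   = [       4,          996]   # assumed illustrative counts
unexposed_group = [      40,          960]   # assumed illustrative counts

odds_ratio, p_value = fisher_exact([exposed_group, unexposed_group])
print(f"p-value = {p_value:.2g}")
if p_value < 0.05:
    print("Reject the null hypothesis of no relationship (at the 5% level).")
else:
    print("Fail to reject the null; the data are compatible with no relationship.")
```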
At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.
Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.
Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”
Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.
As late as April 2020, Anthony Fauci was slammed as too optimistic for suggesting we might plausibly have vaccines within a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.
Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percent higher.
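For readers who want the arithmetic behind those percentages: efficacy in a trial is one minus the ratio of attack rates in the vaccinated and placebo groups. The sketch below uses made-up counts solely to illustrate the formula; they are not the actual trial data.

```python
# Minimal sketch of the arithmetic behind "X percent efficacy." The case
# counts below are invented for illustration; they are not the trial figures.
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate among vaccinated / attack rate among placebo)."""
    return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

# Hypothetical trial: 8 cases among 15,000 vaccinated, 160 among 15,000 placebo.
print(f"{vaccine_efficacy(8, 15_000, 160, 15_000):.0%}")  # prints 95%
```

By the same arithmetic, the 50 percent threshold the FDA set corresponds to cutting the attack rate among the vaccinated in half.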
These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.
And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.
Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.
Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.
So why isn’t this story more widely appreciated?
Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.
Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”
So we didn’t get our initial vaccine jubilation.
But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.
People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than with providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.
Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.
As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.
And yet, from the beginning, a good chunk of the public-facing messaging and news articles implied or claimed that vaccines won’t protect you against infecting other people or that we didn’t know if they would, when both were false. I found myself trying to convince people in my own social network that vaccines weren’t useless against transmission, and being bombarded on social media with claims that they were.
What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.
A year into the pandemic, we’re still repeating the same mistakes.
The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only find an enthusiastic audience; portions of the public engage in these behaviors themselves. Maybe that’s partly because proclaiming the importance of individual actions makes us feel as if we are in the driver’s seat, despite all the uncertainty.
Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to them. This focus on individual control goes along with something called the “fundamental attribution error”—when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.
An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.
You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.
There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.
And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.
The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.
Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.
The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.
Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.
Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.
For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One study of California found that line cooks are the highest-risk occupation for dying of COVID-19. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can safely work, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they do pose a real danger. Bill Hanage, associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—which oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.
New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”
Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.
Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.
But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.
Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.
Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.
And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.
But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.
Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.
Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect easing in rules for public behavior as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why, and also when and how increased vaccination will change more than their individual risks and opportunities, and see us out of this pandemic.
We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.
Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”
Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.
Today I am going to play the tiresome, pedantic philosopher. It has become a commonplace to say that Bolsonaro acts against science and that his attitudes toward the Covid-19 pandemic are absurd. I agree that they are absurd, but I fear it is not so simple to stamp them as anti-scientific.
Don’t get me wrong: I am a fan of science. It is to science that we owe almost all the developments that have made human existence less miserable over the past few centuries. But if we want to use concepts with any rigor, science never tells us how we ought to act.
It was David Hume (1711-1776) who drew attention to the problem. For the philosopher, there is a fundamental logical difference between descriptive propositions, which are what science gives us, and prescriptive or normative propositions, which are what translate into decisions about how to act. We can never derive the latter directly from the former. That step necessarily involves values, which belong not to the domain of science but to that of ethics.
This means that science only goes so far. It enlightens us about the behavior of new viruses in susceptible populations, warns us of the overwhelming force of the exponential curve, and keeps supplying us with the epidemiological parameters of SARS-CoV-2, around which much uncertainty still hangs. What we do with that information, however, is no longer science’s province.
Often, the scenarios sketched out by specialists are so lopsided that they leave no room for doubt. The choice of what to do becomes a simple matter of common sense. That is the case with the adoption of social isolation in this first phase of the epidemic. In many other situations, though, additional layers of complexity pile up, and we must weigh them in the light of values.
The central point is that our decisions should be informed by science, but they are inescapably determined by ethics, or by the lack of it.
Donna Haraway in her home in Santa Cruz. A still from Donna Haraway: Story Telling for Earthly Survival, a film by Fabrizio Terranova.
The history of philosophy is also a story about real estate.
Driving into Santa Cruz to visit Donna Haraway, we can’t help feeling that we were born too late. The metal sculpture of a donkey standing on Haraway’s front porch, the dogs that scramble to her front door barking when we ring the bell, and the big black rooster strutting in the coop out back — the entire setting evokes an era of freedom and creativity that postwar wealth made possible in Northern California.
Here was a counterculture whose language and sensibility the tech industry sometimes adopts, but whose practitioners it has mostly priced out. Haraway, who came to the University of California, Santa Cruz in 1980 to take up the first tenured professorship in feminist theory in the US, still conveys the sense of a wide‑open world.
Haraway was part of an influential cohort of feminist scholars who trained as scientists before turning to the philosophy of science in order to investigate how beliefs about gender shaped the production of knowledge about nature. Her most famous text remains “A Cyborg Manifesto,” published in 1985. It began with an assignment on feminist strategy for the Socialist Review after the election of Ronald Reagan and grew into an oracular meditation on how cybernetics and digitization had changed what it meant to be male or female — or, really, any kind of person. It gained such a cult following that Hari Kunzru, profiling her for Wired years later, wrote: “To boho twentysomethings, her name has the kind of cachet usually reserved for techno acts or new phenethylamines.”
The cyborg vision of gender as changing and changeable was radically new. Her map of how information technology linked people around the world into new chains of affiliation, exploitation, and solidarity feels prescient at a time when an Instagram influencer in Berlin can line the pockets of Silicon Valley executives by using a phone assembled in China that contains cobalt mined in Congo to access a platform moderated by Filipinas.
Haraway’s other most influential text may be an essay that appeared a few years later, on what she called “situated knowledges.” The idea, developed in conversation with feminist philosophers and activists such as Nancy Hartsock, concerns how truth is made. Concrete practices of particular people make truth, Haraway argued. The scientists in a laboratory don’t simply observe or conduct experiments on a cell, for instance, but co-create what a cell is by seeing, measuring, naming, and manipulating it. Ideas like these have a long history in American pragmatism. But they became politically explosive during the so-called Science Wars of the 1990s — a series of public debates between “scientific realists” and “postmodernists” with echoes in controversies about bias and objectivity in academia today.
Haraway’s more recent work has turned to human-animal relations and the climate crisis. She is a capacious “yes, and” thinker, the kind of leftist feminist who believes that the best thinking is done collectively. She is constantly citing other people, including graduate students, and giving credit to them. A recent documentary about her life and work by the Italian filmmaker Fabrizio Terranova, Story Telling for Earthly Survival, captures this sense of commitment, as well as her extraordinary intellectual agility and inventiveness.
At her home in Santa Cruz, we talked about her memories of the Science Wars and how they speak to our current “post-truth” moment, her views on contemporary climate activism and the Green New Deal, and why play is essential for politics.
Let’s begin at the beginning. Can you tell us a little bit about your childhood?
I grew up in Denver, in the kind of white, middle-class neighborhood where people had gotten mortgages to build housing after the war. My father was a sportswriter. When I was eleven or twelve years old, I probably saw seventy baseball games a year. I learned to score as I learned to read.
My father never really wanted to do the editorials or the critical pieces exposing the industry’s financial corruption or what have you. He wanted to write game stories and he had a wonderful way with language. He was in no way a scholar — in fact he was in no way an intellectual — but he loved to tell stories and write them. I think I was interested in that as well — in words and the sensuality of words.
The other giant area of childhood storytelling was Catholicism. I was way too pious a little girl, completely inside of the colors and the rituals and the stories of saints and the rest of it. I ate and drank a sensual Catholicism that I think was rare in my generation. Very not Protestant. It was quirky then; it’s quirky now. And it shaped me.
How so?
One of the ways that it shaped me was through my love of biology as a materialist, sensual, fleshly being in the world as well as a knowledge-seeking apparatus. It shaped me in my sense that I saw biology simultaneously as a discourse and profoundly of the world. The Word and the flesh.
Many of my colleagues in the History of Consciousness department, which comes much later in the story, were deeply engaged with Roland Barthes and with that kind of semiotics. I was very unconvinced and alienated from those thinkers because they were so profoundly Protestant in their secularized versions. They were so profoundly committed to the disjunction between the signifier and signified — so committed to a doctrine of the sign that is anti-Catholic, not just non-Catholic. The secularized sacramentalism that just drips from my work is against the doctrine of the sign that I felt was the orthodoxy in History of Consciousness. So Catholicism offered an alternative structure of affect. It was both profoundly theoretical and really intimate.
Did you start studying biology as an undergraduate?
I got a scholarship that allowed me to go to Colorado College. It was a really good liberal arts school. I was there from 1962 to 1966 and I triple majored in philosophy and literature and zoology, which I regarded as branches of the same subject. They never cleanly separated. Then I got a Fulbright to go to Paris. Then I went to Yale to study cell, molecular, and developmental biology.
Did you get into politics at Yale? Or were you already political when you arrived?
The politics came before that — probably from my Colorado College days, which were influenced by the civil rights movement. But it was at Yale that several things converged. I arrived in the fall of 1967, and a lot was happening.
New Haven in those years was full of very active politics. There was the antiwar movement. There was anti-chemical and anti-biological warfare activism among both the faculty and the graduate students in the science departments. There was Science for the People [a left-wing science organization] and the arrival of that wave of the women’s movement. My lover, Jaye Miller, who became my first husband, was gay, and gay liberation was just then emerging. There were ongoing anti-racist struggles: the Black Panther Party was very active in New Haven.
Jaye and I were part of a commune where one of the members and her lover were Black Panthers. Gayle was a welfare rights activist and the mother of a young child, and her lover was named Sylvester. We had gotten the house for the commune from the university at a very low rent because we were officially an “experiment in Christian living.” It was a very interesting group of people! There was a five-year-old kid who lived in the commune, and he idolized Sylvester. He would clomp up the back stairs wearing these little combat boots yelling, “Power to the people! Power! Power!” It made our white downstairs neighbors nervous. They didn’t much like us anyway. It was very funny.
Did this political climate influence your doctoral research at Yale?
I ended up writing on the ways that metaphors shape experimental practice in the laboratory. I was writing about the experience of the coming-into-being of organisms in the situated interactions of the laboratory. In a profound sense, such organisms are made but not made up. It’s not a relativist position at all; it’s a materialist position. It’s about what I later learned to call “situated knowledges.” It was in the doing of biology that this became more and more evident.
How did these ideas go over with your labmates and colleagues?
It was never a friendly way of talking for my biology colleagues, who always felt that this verged way too far in the direction of relativism.
It’s not that the words I was using were hard. It’s that the ideas were received with great suspicion. And I think that goes back to our discussion a few minutes ago about semiotics: I was trying to insist that the gapping of the signifier and the signified does not really determine what’s going on.
But let’s face it: I was never very good in the lab! My lab work was appalling. Everything I ever touched died or got infected. I did not have good hands, and I didn’t have good passion. I was always more interested in the discourse, if you will.
But you found a supervisor who was open to that?
Yes, Evelyn Hutchinson. He was an ecologist and a man of letters and a man who had had a long history of making space for heterodox women. And I was only a tiny bit heterodox. Other women he had given space to were way more out there than me. Evelyn was also the one who got us our house for our “experiment in Christian living.”
God bless. What happened after Yale?
Jaye got a job at the University of Hawaii teaching world history and I went as this funny thing called a “faculty wife.” I had an odd ontological status. I got a job there in the general science department. Jaye and I were also faculty advisers for something called New College, which was an experimental liberal-arts part of the university that lasted for several years.
It was a good experience. Jaye and I got a divorce in that period but never really quite separated because we couldn’t figure out who got the camera and who got the sewing machine. That was the full extent of our property in those days. We were both part of a commune in Honolulu.
Then one night, Jaye’s boss in the history department insisted that we go out drinking with him, at which point he attacked us both sexually and personally in a drunken, homophobic, and misogynist rant. And very shortly after that, Jaye was denied tenure. Both of us felt stunned and hurt. So I applied for a job in the History of Science department at Johns Hopkins, and Jaye applied for a job at the University of Texas in Houston.
Baltimore and the Thickness of Worlding
How was Hopkins?
History of Science was not a field I knew anything about, and the people who hired me knew that perfectly well. Therefore they assigned me to teach the incoming graduate seminar: Introduction to the History of Science. It was a good way to learn it!
Hopkins was also where I met my current partner, Rusten. He was a graduate student in the History of Science department, where I was a baby assistant professor. (Today I would be fired and sued for sexual harassment — but that’s a whole other conversation.)
Who were some of the other people who became important to you at Hopkins?
[The feminist philosopher] Nancy Hartsock and I shaped each other quite a bit in those years. We were part of the Marxist feminist scene in Baltimore. We played squash a lot — squash was a really intense part of our friendship. Her lover was a Marxist devotee of Lenin; he gave lectures in town.
In the mid-to-late 1970s, Nancy and I started the women’s studies program at Hopkins together. At the time, she was doing her article that became her book on feminist materialism, [Money, Sex, and Power: Toward a Feminist Historical Materialism]. It was very formative for me.
Those were also the years that Nancy and Sandra Harding and Patricia Hill Collins and Dorothy Smith were inventing feminist standpoint theory. I think all of us were already reaching toward those ideas, which we then consolidated as theoretical proposals to a larger community. The process was both individual and collective. We were putting these ideas together out of our struggles with our own work. You write in a closed room while tearing your hair out of your head — it was individual in that sense. But then it clicks, and the words come, and you consolidate theoretical proposals that you bring to your community. In that sense, it was a profoundly collective way of thinking with each other, and within the intensities of the social movements of the late 1960s and early 1970s.
The ideas that you and other feminist philosophers were developing challenged many dominant assumptions about what truth is, where it comes from, and how it functions. More recently, in the era of Trump, we are often told we are living in a time of “post-truth” — and some critics have blamed philosophers like yourselves for creating the environment of “relativism” in which “post-truth” flourishes. How do you respond to that?
Our view was never that truth is just a question of which perspective you see it from. “Truth is perspectival” was never our position. We were against that. Feminist standpoint theory was always anti-perspectival. So was the Cyborg Manifesto, situated knowledges, [the philosopher] Bruno Latour’s notions of actor-network theory, and so on.
“Post-truth” gives up on materialism. It gives up on what I’ve called semiotic materialism: the idea that materialism is always situated meaning-making and never simply representation. These are not questions of perspective. They are questions of worlding and all of the thickness of that. Discourse is not just ideas and language. Discourse is bodily. It’s not embodied, as if it were stuck in a body. It’s bodily and it’s bodying, it’s worlding. This is the opposite of post-truth. This is about getting a grip on how strong knowledge claims are not just possible but necessary — worth living and dying for.
When you, Latour, and others were criticized for “relativism,” particularly during the so-called Science Wars of the 1990s, was that how you responded? And could your critics understand your response?
Bruno and I were at a conference together in Brazil once. Which reminds me: If people want to criticize us, it ought to be for the amount of jet fuel involved in making and spreading these ideas! Not for leading the way to post-truth. We’re guilty on the carbon footprint issue, and Skyping doesn’t help, because I know what the carbon footprint of the cloud is.
Anyhow. We were at this conference in Brazil. It was a bunch of primate field biologists, plus me and Bruno. And Stephen Glickman, a really cool biologist, a man we both love, who taught at UC Berkeley for years and studied hyenas, took us aside privately. He said, “Now, I don’t want to embarrass you. But do you believe in reality?”
We were both kind of shocked by the question. First, we were shocked that it was a question of belief, which is a Protestant question. A confessional question. The idea that reality is a question of belief is a barely secularized legacy of the religious wars. In fact, reality is a matter of worlding and inhabiting. It is a matter of testing the holding-ness of things. Do things hold or not?
Take evolution. The notion that you would or would not “believe” in evolution already gives away the game. If you say, “Of course I believe in evolution,” you have lost, because you have entered the semiotics of representationalism — and post-truth, frankly. You have entered an arena where these are all just matters of internal conviction and have nothing to do with the world. You have left the domain of worlding.
The Science Warriors who attacked us during the Science Wars were determined to paint us as social constructionists — that all truth is purely socially constructed. And I think we walked into that. We invited those misreadings in a range of ways. We could have been more careful about listening and engaging more slowly. It was all too easy to read us in the way the Science Warriors did. Then the right wing took the Science Wars and ran with it, which eventually helped nourish the whole fake-news discourse.
Your opponents in the Science Wars championed “objectivity” over what they considered your “relativism.” Were you trying to stake out a position between those two terms? Or did you reject the idea that either of those terms even had a stable meaning?
Both terms inhabit the same ontological and epistemological frame — a frame that my colleagues and I have tried to make hard to inhabit. Sandra Harding insisted on “strong objectivity,” and my idiom was “situated knowledges.” We have tried to deauthorize the kind of possessive individualism that sees the world as units plus relations. You take the units, you mix them up with relations, you come up with results. Units plus relations equal the world.
People like me say, “No thank you: it’s relationality all the way down.” You don’t have units plus relations. You just have relations. You have worlding. The whole story is about gerunds — worlding, bodying, everything-ing. The layers are inherited from other layers, temporalities, scales of time and space, which don’t nest neatly but have oddly configured geometries. Nothing starts from scratch. But the play — I think the concept of play is incredibly important in all of this — proposes something new, whether it’s the play of a couple of dogs or the play of scientists in the field.
This is not about the opposition between objectivity and relativism. It’s about the thickness of worlding. It’s also about being of and for some worlds and not others; it’s about materialist commitment in many senses.
To this day I know only one or two scientists who like talking this way. And there are good reasons why scientists remain very wary of this kind of language. I belong to the Defend Science movement and in most public circumstances I will speak softly about my own ontological and epistemological commitments. I will use representational language. I will defend less-than-strong objectivity because I think we have to, situationally.
Is that bad faith? Not exactly. It’s related to [what the postcolonial theorist Gayatri Chakravorty Spivak has called] “strategic essentialism.” There is a strategic use to speaking the same idiom as the people that you are sharing the room with. You craft a good-enough idiom so you can work on something together. I won’t always insist on what I think might be a stronger apparatus. I go with what we can make happen in the room together. And then we go further tomorrow.
In the struggles around climate change, for example, you have to join with your allies to block the cynical, well-funded, exterminationist machine that is rampant on the earth. I think my colleagues and I are doing that. We have not shut up, or given up on the apparatus that we developed. But one can foreground and background what is most salient depending on the historical conjuncture.
Santa Cruz and Cyborgs
To return to your own biography, tell us a bit about how and why you left Hopkins for Santa Cruz.
Nancy Hartsock and I applied for a feminist theory job in the History of Consciousness department at UC Santa Cruz together. We wanted to share it. Everybody assumed we were lovers, which we weren’t, ever. We were told by the search committee that they couldn’t consider a joint application because they had just gotten this job okayed and it was the first tenured position in feminist theory in the country. They didn’t want to do anything further to jeopardize it. Nancy ended up deciding that she wanted to stay in Baltimore anyway, so I applied solo and got the job. And I was fired from Hopkins and hired by Santa Cruz in the same week — and for exactly the same papers.
What were the papers?
The long one was called “Signs of Dominance.” It was from a Marxist feminist perspective, and it was regarded as too political. Even though it appeared in a major journal, the person in charge of my personnel case at Hopkins told me to white it out from my CV.
The other one was a short piece on [the poet and novelist] Marge Piercy and [feminist theorist] Shulamith Firestone in Women: A Journal of Liberation. And I was told to white that out, too. Those two papers embarrassed my colleagues and they were quite explicit about it, which was kind of amazing. Fortunately, the people at History of Consciousness loved those same papers, and the set of commitments that went with them.
You arrived in Santa Cruz in 1980, and it was there that you wrote the Cyborg Manifesto. Tell us a bit about its origins.
It had a very particular birth. There was a journal called the Socialist Review, which had formerly been called Socialist Revolution. Jeff Escoffier, one of the editors, asked five of us to write no more than five pages each on Marxist feminism, and what future we anticipated for it.
This was just after the election of Ronald Reagan. The future we anticipated was a hard right turn. It was the definitive end of the 1960s. Around the same time, Jeff asked me if I would represent Socialist Review at a conference of New and Old Lefts in Cavtat in Yugoslavia [now Croatia]. I said yes, and I wrote a little paper on reproductive biotechnology. A bunch of us descended on Cavtat, and there were relatively few women. So we rather quickly found one another and formed alliances with the women staff who were doing all of the reproductive labor, taking care of us. We ended up setting aside our papers and pronouncing on various feminist topics. It was really fun and quite exciting.
Out of that experience, I came back to Santa Cruz and wrote the Cyborg Manifesto. It turned out not to be five pages, but a whole coming to terms with what had happened to me in those years from 1980 to the time it came out in 1985.
The manifesto ended up focusing a lot on cybernetics and networking technologies. Did this reflect the influence of nearby Silicon Valley? Were you close with people working in those fields?
It’s part of the air you breathe here. But the real tech alliances in my life come from my partner Rusten and his friends and colleagues, because he worked as a freelance software designer. He did contract work for Hewlett Packard for years. He had a long history in that world: when he was only fourteen, he got a job programming on punch cards for companies in Seattle.
The Cyborg Manifesto was the first paper I ever wrote on a computer screen. We had an old HP-86. And I printed it on one of those daisy-wheel printers, one I could never get rid of and nobody ever wanted. It ended up in some dump, God help us all.
The Cyborg Manifesto had such a tremendous impact, and continues to. What did you make of its reception?
People read it as they do. Sometimes I find it interesting. But sometimes I just want to jump into a foxhole and pull the cover over me.
In the manifesto, you distinguish yourself from two other socialist feminist positions. The first is the techno-optimist position that embraces aggressive technological interventions in order to modify human biology. This is often associated with Shulamith Firestone’s book The Dialectic of Sex (1970), and in particular her proposal for “artificial wombs” that could reproduce humans outside of a woman’s body.
Yes, although Firestone gets slotted into a quite narrow, blissed-out techno-bunny role, as if all her work was about reproduction without wombs. She is remembered for one technological proposal, but her critique of the historical materialist conditions of mothering and reproduction was very deep and broad.
You also make some criticisms of the ideas associated with Italian autonomist feminists and the Wages for Housework campaign. You suggest that they overextend the category of “labor.”
Wages for Housework was very important. And I’m always in favor of working by addition not subtraction. I’m always in favor of enlarging the litter. Let’s watch the attachments and detachments, the compositions and decompositions, as the litter proliferates. Labor is an important category with a strong history, and Wages for Housework enlarged it.
But in thinkers with Marxist roots, there’s also a tendency to make the category of labor do too much work. A great deal of what goes on needs to be thickly described with categories other than labor — or in interesting kinds of entanglement with labor.
What other categories would you want to add?
Play is one. Labor is so tied to functionality, whereas play is a category of non-functionality.
Play captures a lot of what goes on in the world. There is a kind of raw opportunism in biology and chemistry, where things work stochastically to form emergent systematicities. It’s not a matter of direct functionality. We need to develop practices for thinking about those forms of activity that are not caught by functionality, those which propose the possible-but-not-yet, or that which is not-yet but still open.
It seems to me that our politics these days require us to give each other the heart to do just that. To figure out how, with each other, we can open up possibilities for what can still be. And we can't do that in a negative mood. We can't do that if we do nothing but critique. We need critique; we absolutely need it. But it's not going to open up the sense of what might yet be. It's not going to open up the sense of that which is not yet possible but profoundly needed.
The established disorder of our present era is not necessary. It exists. But it’s not necessary.
Playing Against Double Death
What might some of those practices for opening up new possibilities look like?
Through playful engagement with each other, we get a hint about what can still be and learn how to make it stronger. We see that in all occupations. Historically, the Greenham Common women were fabulous at this. [Eds.: The Greenham Common Women’s Peace Camp was a series of protests against nuclear weapons at a Royal Air Force base in England, beginning in 1981.] More recently, you saw it with the Dakota Access Pipeline occupation.
The degree to which people in these occupations play is a crucial part of how they generate a new political imagination, which in turn points to the kind of work that needs to be done. They open up the imagination of something that is not what [the ethnographer] Deborah Bird Rose calls “double death” — extermination, extraction, genocide.
Now, we are facing a world with all three of those things. We are facing the production of systemic homelessness. The way that flowers aren’t blooming at the right time, and so insects can’t feed their babies and can’t travel because the timing is all screwed up, is a kind of forced homelessness. It’s a kind of forced migration, in time and space.
This is also happening in the human world in spades. In regions like the Middle East and Central America, we are seeing forced displacement, some of which is climate migration. The drought in the Northern Triangle countries of Central America — Honduras, Guatemala, El Salvador — is driving people off their land.
So it’s not a humanist question. It’s a multi-kind and multi-species question.
In the Cyborg Manifesto, you use the ideas of “the homework economy” and the “integrated circuit” to explore the various ways that information technology was restructuring labor in the early 1980s to be more precarious, more global, and more feminized. Do climate change and the ecological catastrophes you’re describing change how you think about those forces?
Yes and no. The theories that I developed in that period emerged from a particular historical conjuncture. If I were mapping the integrated circuit today, it would have different parameters than the map that I made in the early 1980s. And surely the questions of immigration, exterminism, and extractivism would have to be deeply engaged. The problem of rebuilding place-based lives would have to get more attention.
The Cyborg Manifesto was written within the context of the hard-right turn of the 1980s. But the hard-right turn was one thing; the hard-fascist turn of the late 2010s is another. It's not the same as Reagan. The leaders of Colombia, Hungary, Brazil, Egypt, India, the United States — we are looking at a new fascist capitalism, which requires reworking the ideas of the early 1980s for them to make sense.
So there are continuities between now and the map I made then, a lot of continuities. But there are also some pretty serious inflection points, particularly when it comes to developments in digital technologies that are playing into the new fascism.
Could you say more about those developments?
If the public-private dichotomy was old-fashioned in 1980, by 2019 I don’t even know what to call it. We have to try to rebuild some sense of a public. But how can you rebuild a public in the face of nearly total surveillance? And this surveillance doesn’t even have a single center. There is no eye in the sky.
Then we have the ongoing enclosure of the commons. Capitalism produces new forms of value and then encloses those forms of value — the digital is an especially good example of that. This involves the monetization of practically everything we do. And it’s not like we are ignorant of this dynamic. We know what’s going on. We just don’t have a clue how to get a grip on it.
One attempt to update the ideas of the Cyborg Manifesto has come from the “xenofeminists” of the international collective Laboria Cuboniks. I believe some of them have described themselves as your “disobedient daughters.”
Overstating things, that’s not my feminism.
Why not?
I’m not very interested in those discussions, frankly. It’s not what I’m doing. It’s not what makes me vital now. In a moment of ecological urgency, I’m more engaged in questions of multispecies environmental and reproductive justice. Those questions certainly involve issues of digital and robotic and machine cultures, but they aren’t at the center of my attention.
What is at the center of my attention are land and water sovereignty struggles, such as those over the Dakota Access Pipeline, over coal mining on the Black Mesa plateau, over extractionism everywhere. My attention is centered on the extermination and extinction crises happening at a worldwide level, on human and nonhuman displacement and homelessness. That’s where my energies are. My feminism is in these other places and corridors.
Do you still think the cyborg is a useful figure?
I think so. The cyborg has turned out to be rather deathless. Cyborgs keep reappearing in my life as well as other people’s lives.
The cyborg remains a wily trickster figure. And, you know, they're also kind of old-fashioned. They're hardly up-to-the-minute. They're rather klutzy, a bit like R2-D2 or a pacemaker. Maybe the embodied digitality of us now is not especially well captured by the cyborg. So I'm not sure. But, yeah, I think cyborgs are still in the litter. I just think we need a giant bumptious litter whelped by a whole lot of really badass bitches — some of whom are men!
Mourning Without Despair
You mentioned that your current work is more focused on environmental issues. How are you thinking about the role of technology in mitigating or adapting to climate change — or fighting extractivism and extermination?
There is no homogeneous socialist position on this question. I’m very pro-technology, but I belong to a crowd that is quite skeptical of the projects of what we might call the “techno-fix,” in part because of their profound immersion in technocapitalism and their disengagement from communities of practice.
Those communities may need other kinds of technologies than those promised by the techno-fix: different kinds of mortgage instruments, say, or re-engineered water systems. I’m against the kind of techno-fixes that are abstracted from place and tied up with huge amounts of technocapital. This seems to include most geoengineering projects and imaginations.
So when I see massive solar fields and wind farms I feel conflicted, because on the one hand they may be better than fracking in Monterey County — but only maybe. Because I also know where the rare earth minerals required for renewable energy technologies come from and under what conditions. We still aren’t doing the whole supply-chain analysis of our technologies. So I think we have a long way to go in socialist understanding of these matters.
One tendency within socialist thought believes that socialists can simply seize capitalist technology and put it to different purposes — that you take the forces of production, build new relations around them, and you’re done. This approach is also associated with a Promethean, even utopian approach to technology. Socialist techno-utopianism has been around forever, but it has its own adherents today, such as those who advocate for “Fully Automated Luxury Communism.” I wonder how you see that particular lineage of socialist thinking about technology.
I think very few people are that simplistic, actually. In various moments we might make proclamations that come down that way. But for most people, our socialisms, and the approaches with which socialists can ally, are richer and more varied.
When you talk to the Indigenous activists of the Black Mesa Water Coalition, for example, they have a complex sense around solar arrays and coal plants and water engineering and art practices and community movements. They have very rich articulated alliances and separations around all of this.
Socialists aren’t the only ones who have been techno-utopian, of course. A far more prominent and more influential strand of techno-utopianism has come from the figures around the Bay Area counterculture associated with the Whole Earth Catalog, in particular Stewart Brand, who went on to play important intellectual and cultural roles in Silicon Valley.
They are not friends. They are not allies. I’m avoiding calling them enemies because I’m leaving open the possibility of their being able to learn or change, though I’m not optimistic. I think they occupy the position of the “god trick.” [Eds.: The “god trick” is an idea introduced by Haraway that refers to the traditional view of objectivity as a transcendent “gaze from nowhere.”] I think they are blissed out by their own privileged positions and have no idea what their own positionality in the world really is. And I think they cause a lot of harm, both ideologically and technically.
How so?
They get a lot of publicity. They take up a lot of the air in the room.
It’s not that I think they’re horrible people. There should be space for people pushing new technologies. But I don’t see nearly enough attention given to what kinds of technological innovation are really needed to produce viable local and regional energy systems that don’t depend on species-destroying solar farms and wind farms that require giant land grabs in the desert.
The kinds of conversations around technology that I think we need are those among folks who know how to write law and policy, folks who know how to do material science, folks who are interested in architecture and park design, and folks who are involved in land struggles and solidarity movements. I want to see us do much savvier scientific, technological, and political thinking with each other, and I want to see it get press. The Stewart Brand types are never going there.
Do you see clear limitations in their worldviews and their politics?
They remain remarkably humanist in their orientation, in their cognitive apparatus, and in their vision of the world. They also have an almost Peter Pan quality. They never quite grew up. They say, “If it’s broken, fix it.”
This comes from an incapacity to mourn and an incapacity to be finite. I mean that psychoanalytically: an incapacity to understand that there is no status quo ante, to understand that death and loss are real. Only within that understanding is it possible to open up to a kind of vitality that isn’t double death, that isn’t extermination, and which doesn’t yearn for transcendence, yearn for the fix.
There’s not much mourning with the Stewart Brand types. There’s not much felt loss of the already disappeared, the already dead — the disappeared of Argentina, the disappeared of the caravans, the disappeared of the species that will not come back. You can try to do as much resurrection biology as you want to. But any of the biologists who are actually involved in the work are very clear that there is no resurrection.
You have also been critical of the Anthropocene, as a proposed new geological epoch defined by human influence on the earth. Do you see the idea of the Anthropocene as having similar limitations?
I think the Anthropocene framework has been a fertile container for quite a lot, actually. The Anthropocene has turned out to be a rather capacious territory for incorporating people in struggle. There are a lot of interesting collaborations with artists and scientists and activists going on.
The main thing that’s too bad about the term is that it perpetuates the misunderstanding that what has happened is a human species act, as if human beings as a species necessarily exterminate every planet we dare to live on. As if we can’t stop our productive and reproductive excesses.
Extractivism and exterminationism are not human species acts. They come from a situated historical conjuncture of about five hundred years in duration that begins with the invention of the plantation and the subsequent modeling of industrial capitalism. It is a situated historical conjuncture that has had devastating effects even while it has created astonishing wealth.
To define this as a human species act affects the way a lot of scientists think about the Anthropocene. My scientist colleagues and friends really do continue to think of it as something human beings can’t stop doing, even while they understand my historical critique and agree with a lot of it.
It’s a little bit like the relativism versus objectivity problem. The old languages have a deep grip. The situated historical way of thinking is not instinctual for Western science, whose offspring are numerous.
Are there alternatives that you think could work better than the Anthropocene?
There are plenty of other ways of thinking. Take climate change. Now, climate change is a necessary and essential category. But if you go to the circumpolar North as a Southern scientist wanting to collaborate with Indigenous people on climate change — on questions of changes in the sea ice, for example, or changes in the hunting and subsistence base — the limitations of that category will be profound. That’s because it fails to engage with the Indigenous categories that are actually active on the ground.
There is an Inuktitut word, “sila.” In an Anglophone lexicon, “sila” will be translated as “weather.” But in fact, it’s much more complicated. In the circumpolar North, climate change is a concept that collects a lot of stuff that the Southern scientist won’t understand. So the Southern scientist who wants to collaborate on climate change finds it almost impossible to build a contact zone.
Anyway, there are plenty of other ways of thinking about shared contemporary problems. But they require building contact zones between cognitive apparatuses, out of which neither will leave the same as they were before. These are the kinds of encounters that need to be happening more.
A final question. Have you been following the revival of socialism, and socialist feminism, over the past few years?
Yes.
What do you make of it? I mean, socialist feminism is becoming so mainstream that even Harper’s Bazaar is running essays on “emotional labor.”
I’m really pleased! The old lady is happy. I like the resurgence of socialism. For all the horror of Trump, it has released us. A whole lot of things are now being seriously considered, including mass nonviolent social resistance. So I am not in a state of cynicism or despair.
An excerpted version of this interview originally appeared in The Guardian.