For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.
Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”
This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.
(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)
That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.
Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.
The results are in. It failed.
Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.
Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.
Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.
What went wrong?
Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?
It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one such study reviewers caught 30% of the major flaws, in another they caught 25%, and in a third they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.
In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with a paper about dishonesty that clearly has fake data (ironic), with researchers who have published dozens or even hundreds of fraudulent papers, and with plenty of similar debacles.
Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.
(When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)
The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶
Here’s another way that we can test whether peer review worked: did it actually earn scientists’ trust?
Scientists often say they take peer review very seriously. But people say lots of things they don’t mean, like “It’s great to e-meet you” and “I’ll never leave you, Adam.” If you look at what scientists actually do, it’s clear they don’t think peer review really matters.
First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.
Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.
And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”
Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner:
I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.
It’s easy to imagine how things could be better—my friend Ethan and I wrote a whole paper on it—but that doesn’t mean it’s easy to make things better. My complaints about peer review were a bit like looking at the ~35,000 Americans who die in car crashes every year and saying “people shouldn’t crash their cars so much.” Okay, but how?
Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them. Maybe we can fix some things on the margins, but remember that right now we’re publishing papers that use capital T’s instead of error bars, so we’ve got a long, long way to go.
What if we made peer review way stricter? That might sound great, but it would make lots of other problems with peer review way worse.
For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, “Help! A mysterious number is persecuting me,” and now they begin like, “Humans have been said, at various times and places, to exist, and even to have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smergdorf & Blugensnout, 1978; Stikkiwikket, 2002; von Fraud et al., 2018b)”.
This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he. “Whenever someone tells me they loved my paper,” he said, “I say thank you, even though I know they didn’t read it.” Stricter peer review would mean even more boring papers, which means even fewer people would read them.
Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won’t count for anything unless peer reviewers like them makes you worse at thinking. It’s like being a teenager again: before you do anything, you ask yourself, “BUT WILL PEOPLE THINK I’M COOL?” When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything’s pretty much perfect right now, we need revolutionary ideas real bad.
On the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the ~4.7 million articles they publish every year. Good luck!
Peer review doesn’t work and there’s probably no way to fix it. But a little bit of vetting is better than none at all, right?
I say: no way.
Imagine you discover that the Food and Drug Administration’s method of “inspecting” beef is just sending some guy (“Gary”) around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says “INSPECTED BY THE FDA.” You’d be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he’s going to miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.
That’s what our current system of peer review does, and it’s dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?
If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: “NOBODY HAS REALLY CHECKED WHETHER THIS PAPER IS TRUE OR NOT. IT MIGHT BE MADE UP, FOR ALL WE KNOW.” That would at least give people the appropriate level of confidence.
Why did peer review seem so reasonable in the first place?
I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.
But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.
If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.
Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.
Nobody was in charge of our peer review experiment, which means nobody has the responsibility of saying when it’s over. Seeing no one else, I guess I’ll do it:
We’re done, everybody! Champagne all around! Great work, and congratulations. We tried peer review and it didn’t work.
Honestly, I’m so relieved. That system sucked! Waiting months just to hear that an editor didn’t think your paper deserved to be reviewed? Reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe? Spending a whole day emailing a journal begging them to let you use the word “years” instead of always abbreviating it to “y” for no reason (this literally happened to me)? We never have to do any of that ever again.
I know we all might be a little disappointed we wasted so much time, but there’s no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that’s okay—it seemed like a good idea at the time, and now we know it wasn’t. That’s science! It will always be important for scientists to comment on each other’s ideas, of course. It’s just this particular way of doing it that didn’t work.
What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.
Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.
Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?
I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.
Psi phenomena, like telepathy and precognition, are controversial in academia. While a minority of academics (such as me) are open-minded about them, others believe that they are pseudo-scientific and that they can’t possibly exist because they contravene the laws of science.
However, the phenomena are much less controversial to the general public. Surveys show significant levels of belief in psi. A survey of 1200 Americans in 2003 found that over 60% believed in extrasensory perception.1
This high level of belief appears to stem largely from experience. In a 2018 survey, half of a sample of Americans reported they had an experience of feeling “as though you were in touch with someone when they were far away.” Slightly less than half reported an experience of knowing “something about the future that you had no normal way to know” (in other words, precognition). Just over 40% reported that they had received important information through their dreams.2
Interestingly, a 2022 survey of over 1000 Brazilian people found higher levels of such anomalous experiences, with 70% reporting they had a precognitive dream at least once.3 This may imply that such experiences are more likely to be reported in Brazil, perhaps due to a cultural climate of greater openness.
How can we account for the disconnect between the dismissal of psi phenomena by some scientists, and the openness of the general population? Is it that scientists are more educated and rational than other sections of the population, many of whom are gullible to superstition and irrational thinking?
I don’t think it’s as simple as this.
Evidence for Psi
You might be surprised to learn that the evidence for phenomena such as telepathy and precognition is strong. As I point out in my book, Spiritual Science, this evidence has remained significant and robust over a massive range of studies over decades.
In 2018, American Psychologist published an article by Professor Etzel Cardeña which carefully and systematically reviewed the evidence for psi phenomena, examining over 750 discrete studies. Cardeña concluded that there was a very strong case for the existence of psi, writing that the evidence was “comparable to that for established phenomena in psychology and other disciplines.”4
For example, from 1974 to 2018, 117 experiments were reported using the “Ganzfeld” procedure, in which one participant attempts to “send” information about images to another distant person. An overall analysis of the results showed a “hit rate” many millions of times higher than chance. Factors such as selective reporting bias (the so-called “file drawer effect”) and variations in experimental quality could not account for the results. Moreover, independent researchers reported statistically identical results.5
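To make the statistical claim concrete, here is a hedged sketch of how one tests whether a hit rate beats chance, using a normal approximation to the binomial distribution. The trial counts below are invented round numbers for illustration, not the actual Ganzfeld totals; in the standard four-choice design, chance is a 25% hit rate.

```python
from math import erfc, sqrt

def hit_rate_p_value(hits: int, trials: int, p_chance: float) -> float:
    """One-sided p-value for observing at least `hits` successes in
    `trials` attempts, via a normal approximation to the binomial."""
    mean = trials * p_chance
    sd = sqrt(trials * p_chance * (1 - p_chance))
    z = (hits - mean) / sd
    return 0.5 * erfc(z / sqrt(2))  # upper-tail probability

# Invented numbers: 960 hits in 3000 trials is a 32% hit rate,
# roughly the ballpark often quoted for Ganzfeld studies, vs. 25% chance.
p = hit_rate_p_value(960, 3000, 0.25)
print(f"p = {p:.1e}")
```

With numbers of this order the p-value comes out astronomically small, which is the sense in which a result can be “millions of times” beyond chance; whether that reflects psi or subtler methodological problems is exactly what the surrounding debate is about.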
So why do some scientists continue to believe that there is no evidence for psi? In my view, the explanation lies in an ideology that could be called “scientism.”
Scientism is an ideology that is often associated with science. It consists of a number of basic ideas, which are often stated as facts, even though they are just assumptions—e.g., that the world is purely physical in nature, that human consciousness is a product of brain activity, that human beings are biological machines whose behaviour is determined by genes, that anomalous phenomena such as near-death experiences and psi are unreal, and so on.
Adherents to scientism see themselves as defenders of reason. They see themselves as part of a historical “enlightenment project” whose aim is to overcome superstition and irrationality. In particular, they see themselves as opponents of religion.
It’s therefore ironic that scientism has become a quasi-religion in itself. In their desire to spread their ideology, adherents to scientism often behave like religious zealots, demonising unwelcome ideas and disregarding any evidence that doesn’t fit with their worldview. They apply their notion of rationality in an extremist way, dismissing any phenomena outside their belief system as “woo.” Scientifically evidential phenomena such as telepathy and precognition are placed in the same category as creationism and conspiracy theories.
One example was a response to Etzel Cardeña’s American Psychologist article (cited above) by the longstanding skeptics Arthur Reber and James Alcock. Aiming to rebut Cardeña’s claims of the strong evidence for psi, they decided that their best approach was not to actually engage with the evidence, but simply to insist that it couldn’t possibly be valid because psi itself was theoretically impossible. As they wrote, “Claims made by parapsychologists cannot be true … Hence, data that suggest that they can are necessarily flawed and result from weak methodology or improper data analyses.”6
A similar strategy was used by the psychologist Marija Branković in a recent paper in The European Journal of Psychology. After discussing a series of highly successful precognition studies by the researcher Daryl Bem, she dismisses them because three investigators were unable to replicate the findings.7 Branković neglects to mention that there have been 90 other replication attempts with a massively significant overall success rate, exceeding the standard of “decisive evidence” by a factor of 10 million.8
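For context on the “decisive evidence” language: the conventional (Jeffreys) threshold for decisive evidence is a Bayes factor above 100. As a heavily simplified, hypothetical illustration only (the actual meta-analysis used different models and real trial counts), a point-vs-point likelihood ratio for binomial hit data can be computed in log space to avoid floating-point underflow:

```python
from math import lgamma, log

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """log P(X = k) for X ~ Binomial(n, p), kept in log space."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_comb + k * log(p) + (n - k) * log(1 - p)

def log10_likelihood_ratio(hits, trials, p_alt, p_null):
    """log10 of P(data | p_alt) / P(data | p_null): a crude stand-in
    for a Bayes factor, which would average over a prior on p_alt."""
    diff = log_binom_pmf(hits, trials, p_alt) - log_binom_pmf(hits, trials, p_null)
    return diff / log(10)

# Invented numbers: a 32% hit rate over 3000 four-choice trials.
lr = log10_likelihood_ratio(960, 3000, 0.32, 0.25)
print(f"evidence of roughly 10^{lr:.0f} : 1")  # "decisive" is above 10^2
```

The point of the sketch is only that evidence ratios compound multiplicatively across trials, which is how a modest per-trial edge can produce enormous headline numbers; it says nothing about whether the underlying studies are sound.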
It’s worth considering for a moment whether psi really does contravene the laws of physics (or science), as many adherents to scientism suggest. For me, this is one of the most puzzling claims made by skeptics. Tellingly, the claim is often made by psychologists, whose knowledge of modern science may not be deep.
Anyone with a passing knowledge of some of the theories of modern physics—particularly quantum physics—is aware that reality is much stranger than it appears to common sense. There are many theories that suggest that our common-sense view of linear time may be false. There are many theories that suggest that our world is essentially “non-local,” including phenomena such as “entanglement” and “action at a distance.” I think it would be too much of a stretch to suggest that such theories explain precognition and telepathy, but they certainly allow for their possibility.
A lot of people assume that if you’re a scientist, then you must automatically subscribe to scientism. But in fact, scientism is the opposite of true science. The academics who dismiss psi on the grounds that it “can’t possibly be true” are behaving in the same way as the fundamentalist Christians who refuse to consider the evidence for evolution. Skeptics who refuse to engage with the evidence for telepathy or precognition are acting in the same way as the contemporaries of Galileo who refused to look through his telescope, unwilling to face the possibility that their beliefs may need to be revised.
1. Wahbeh H, Radin D, Mossbridge J, Vieten C, Delorme A. Exceptional experiences reported by scientists and engineers. Explore (NY). 2018 Sep;14(5):329-341. doi: 10.1016/j.explore.2018.05.002. Epub 2018 Aug 2. PMID: 30415782.
2. Rice TW. Believe It Or Not: Religious and Other Paranormal Beliefs in the United States. J Sci Study Relig. 2003;42(1):95-106. doi:10.1111/1468-5906.00163
3. Monteiro de Barros MC, Leão FC, Vallada Filho H, Lucchetti G, Moreira-Almeida A, Prieto Peres MF. Prevalence of spiritual and religious experiences in the general population: A Brazilian nationwide study. Transcultural Psychiatry. April 2022. doi:10.1177/13634615221088701
4. Cardeña E. The experimental evidence for parapsychological phenomena: A review. Am Psychol. 2018;73(7):663-677.
5. Storm L, Tressoldi P. Meta-analysis of free-response studies 2009-2018: Assessing the noise-reduction model ten years on. J Soc Psych Res. 2020;(84):193-219.
6. Reber AS, Alcock JE. Searching for the impossible: Parapsychology’s elusive quest. Am Psychol. 2020;75(3):391-399. doi:10.1037/amp0000486
7. Branković M. Who Believes in ESP: Cognitive and Motivational Determinants of the Belief in Extra-Sensory Perception. Eur J Psychol. 2019;15(1):120-139. doi:10.5964/ejop.v15i1.1689
8. Bem D, Tressoldi P, Rabeyron T, Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Research. 2015;4:1188. doi:10.12688/f1000research.7177.2
By Marco Silva, BBC climate disinformation specialist
Climate “doomers” believe the world has already lost the battle against global warming. That’s wrong – and while that view is spreading online, there are others who are fighting the viral tide.
As he walked down the street wearing a Jurassic Park cap, Charles McBryde raised his smartphone, stared at the camera, and hit the record button.
“Ok, TikTok, I need your help.”
Charles is 27 and lives in California. His quirky TikTok videos about news, history, and politics have earned him more than 150,000 followers.
In the video in question, recorded in October 2021, he decided it was time for a confession.
“I am a climate doomer,” he said. “Since about 2019, I have believed that there’s little to nothing that we can do to actually reverse climate change on a global scale.”
Climate doomism is the idea that we are past the point of being able to do anything at all about global warming – and that mankind is highly likely to become extinct.
That’s wrong, scientists say, but the argument is picking up steam online.
‘Give me hope’
Charles admitted to feeling overwhelmed, anxious and depressed about global warming, but he followed up with a plea.
“I’m calling on the activists and the scientists of TikTok to give me hope,” he said. “Convince me that there’s something out there that’s worth fighting for, that in the end we can achieve victory over this, even if it’s only temporary.”
And it wasn’t long before someone answered.
Facing up to the ‘doomers’
Alaina Wood is a sustainability scientist based in Tennessee. On TikTok she’s known as thegarbagequeen.
After watching Charles’ video, she posted a reply, explaining in simple terms why he was wrong.
Alaina makes a habit of challenging climate doomism – a mission she has embraced with a sense of urgency.
“People are giving up on activism because they’re like, ‘I can’t handle it any more… This is too much…’ and ‘If it really is too late, why am I even trying?'” she says. “Doomism ultimately leads to climate inaction, which is the opposite of what we want.”
Why it’s not too late
Climate scientist Dr Friederike Otto, who has been working with the UN’s Intergovernmental Panel on Climate Change, says: “I don’t think it’s helpful to pretend that climate change will lead to humanity’s extinction.”
Limiting warming is still possible, scientists say, but it involves “rapid, deep and immediate” cuts in emissions of greenhouse gases – which trap the sun’s heat and make the planet hotter.
“There is no denying that there are large changes across the globe, and that some of them are irreversible,” says Dr Otto, a senior lecturer in climate science at the Grantham Institute for Climate Change and the Environment.
“It doesn’t mean the world is going to end – but we have to adapt, and we have to stop emitting.”
In one survey, an overwhelming majority of respondents said they were willing to change the way they lived to tackle the problem.
But when asked how confident they were that climate action would significantly reduce the effects of global warming, more than half said they had little to no confidence.
Doomism taps into, and exaggerates, that sense of hopelessness. In Charles’s case, it all began with a community on Reddit devoted to the potential collapse of civilisation.
“The most apocalyptic language that I would find was actually coming from former climate scientists,” Charles says.
It’s impossible to know whether the people posting the messages Charles read were genuine scientists.
But the posts had a profound effect on him. He admits: “I do think I fell down the rabbit hole.”
Alaina Wood, the sustainability scientist, says Charles’s story is not unusual.
“I rarely at this point encounter climate denial or any other form of misinformation [on social media],” she says. “It’s not people saying, ‘Fossil fuels don’t cause climate change’ … It’s people saying, ‘It’s too late’.”
TikTok’s rules forbid misinformation that causes harm. We sent the company some videos that Alaina has debunked in the past. None was found to have violated the rules.
TikTok says it works with accredited fact-checkers to “limit the spread of false or misleading climate information”.
Young and pessimistic
Although it can take many forms (and is thus difficult to accurately measure), Alaina says doomism is particularly popular among young people.
“There’s people who are climate activists and they’re so scared. They want to make change, but they feel they need to spread fear-based content to do so,” she says.
“Then there are people who know that fear in general goes viral, and they’re just following trends, even if they don’t necessarily understand the science.”
I’ve watched several of the videos that she debunked. Invariably, they feature young users voicing despair about the future.
“Let me tell you why I don’t know what I want to do with my life and why I’m not planning,” says one young woman. “By the year 2050, most of us should be underwater from global warming.” But that’s a gross exaggeration of what climate scientists are actually telling us.
“A lot of that is often fatalistic humour, but people on TikTok are interpreting that as fact,” Alaina says.
But is Charles still among them, after watching Alaina’s debunks? Is he still a climate doomer?
“I would say no,” he tells me. “I have convinced myself that we can get out of this.”
Traditionally, policymakers design public policies around a rational economic agent: a person capable of weighing every decision and maximizing utility in their own interest. In doing so, they ignore the powerful psychological and social influences on human behavior and overlook the fact that people are fallible, inconsistent, and emotional: they struggle with self-control, procrastinate, prefer the status quo, and are social beings. It is around this "not so rational" agent that the behavioral sciences offer a complement to the traditional way of making policy.
For example: we are approaching two years since the World Health Organization declared Covid-19 a pandemic on March 11, 2020. These have been challenging years for governments, companies, and individuals. And although 2021 showed signs of recovery, there is still a long, hard road back to even pre-pandemic conditions — not only in health, but also in stabilizing economies, raising productivity, restoring jobs, closing learning gaps, improving the business environment, fighting climate change, and more. Obviously, this is no simple task for governments and organizations. Could we face these challenges differently and adapt the way we make public policy, rendering it more efficient and cost-effective and increasing its impact and reach?
The answer is yes. The success of public policy depends, in part, on decision-making and on behavior change. That is why focusing more on people, and on the context in which they decide, becomes ever more imperative. It is important to consider how people relate to one another and to institutions, how they respond to policies, and to know well the environment in which they live.
The behavioral approach is scientific, combining concepts from psychology, economics, anthropology, sociology, and neuroscience. Guided by context and grounded in evidence, it reconciles theory and practice across many sectors. Its application can range from a simple change in the decision-making environment (choice architecture), to a "nudge" that steers individuals toward the decision that is best for them while preserving freedom of choice, to broader efforts aimed at changing habits. Beyond that, it can be key to confronting policy challenges such as school dropout, domestic and gender-based violence, tax compliance, reducing corruption, natural disasters, and climate change, among others.
Using behavioral insights in public policy is no longer a novelty. More than a decade has passed since the 2008 publication of Nudge ("Nudge: como tomar melhores decisões sobre saúde, dinheiro e felicidade," in Portuguese), which propelled the field spectacularly. Concepts from psychology, already widely discussed and accepted for decades, were applied to economic decisions, and behavioral economics — behavioral science — was thus consolidated.
Tracking the field's expansion and relevance, the World Bank launched the 2015 World Development Report: Mind, Society, and Behavior. In 2016 it started its own behavioral unit, eMBeD (the Mind, Behavior, and Development Unit), and has since promoted the systematic use of behavioral insights in development policies and projects, supporting many countries in solving problems quickly and at scale.
In Brazil, we have worked on training policymakers to use behavioral insights; contributed to research, such as the Survey on Ethics and Corruption in the Federal Civil Service (World Bank and CGU); and provided technical support in identifying evidence, for example to inform solutions for increasing savings among the low-income population. Our specialists have also prepared behavioral diagnostics to understand why customers fail to pay their bills on time or never connect to the sewage system. We ran experiments with behavioral messages to encourage the use of digital payment channels and on-time bill payment in the water and sanitation sector. The latter showed positive results, with the potential to raise revenue at low cost: messages highlighting consequences and reciprocity, for example, increased both on-time payments and the total amount paid. For every thousand customers who received an SMS informed by behavioral insights, six to eleven additional customers paid their bills. For 2022, activities are planned, as part of a development project, that will use behavioral insights to reduce the dumping of waste into drainage systems and to encourage more conscientious use of public spaces.
The behavioral sciences are not the solution to the great global challenges. But their potential as a complement in the design of public policy deserves emphasis. It falls to policymakers to take advantage of the field's growing maturity and expand their knowledge. It is also worth riding the rising wave of complementary fields, such as design and data science, to center attention on the individual and the decision context and — transparently and on the basis of evidence — to influence choices and promote behavior change, increasing the impact of public policies so as not only to restore pre-Covid conditions but to further improve the lives and well-being of all, especially the poorest and most vulnerable.
This column was written in collaboration with my World Bank colleagues Juliana Neves Soares Brescianini, operations analyst, and Luis A. Andrés, program leader for the infrastructure sector.
Ten years ago, psychologists proposed that a wide range of people would suffer anxiety and grief over climate. Skepticism about that idea is gone.
Published Feb. 6, 2022; Updated Feb. 7, 2022
PORTLAND, Ore. — It would hit Alina Black in the snack aisle at Trader Joe’s, a wave of guilt and shame that made her skin crawl.
Something as simple as nuts. They came wrapped in plastic, often in layers of it, that she imagined leaving her house and traveling to a landfill, where it would remain through her lifetime and the lifetime of her children.
She longed, really longed, to make less of a mark on the earth. But she had also had a baby in diapers, and a full-time job, and a 5-year-old who wanted snacks. At the age of 37, these conflicting forces were slowly closing on her, like a set of jaws.
In the early-morning hours, after nursing the baby, she would slip down a rabbit hole, scrolling through news reports of droughts, fires, mass extinction. Then she would stare into the dark.
It was for this reason that, around six months ago, she searched “climate anxiety” and pulled up the name of Thomas J. Doherty, a Portland psychologist who specializes in climate.
A decade ago, Dr. Doherty and a colleague, Susan Clayton, a professor of psychology at the College of Wooster, published a paper proposing a new idea. They argued that climate change would have a powerful psychological impact — not just on the people bearing the brunt of it, but on people following it through news and research. At the time, the notion was seen as speculative.
That skepticism is fading. Eco-anxiety, a concept introduced by young activists, has entered a mainstream vocabulary. And professional organizations are hurrying to catch up, exploring approaches to treating anxiety that is both existential and, many would argue, rational.
Though there is little empirical data on effective treatments, the field is expanding swiftly. The Climate Psychology Alliance provides an online directory of climate-aware therapists; the Good Grief Network, a peer support network modeled on 12-step addiction programs, has spawned more than 50 groups; professional certification programs in climate psychology have begun to appear.
As for Dr. Doherty, so many people now come to him for this problem that he has built an entire practice around them: an 18-year-old student who sometimes experiences panic attacks so severe that she can’t get out of bed; a 69-year-old glacial geologist who is sometimes overwhelmed with sadness when he looks at his grandchildren; a man in his 50s who erupts in frustration over his friends’ consumption choices, unable to tolerate their chatter about vacations in Tuscany.
The field’s emergence has met resistance, for various reasons. Therapists have long been trained to keep their own views out of their practices. And many leaders in mental health maintain that anxiety over climate change is no different, clinically, from anxiety caused by other societal threats, like terrorism or school shootings. Some climate activists, meanwhile, are leery of viewing anxiety over climate as dysfunctional thinking — to be soothed or, worse, cured.
But Ms. Black was not interested in theoretical arguments; she needed help right away.
She was no Greta Thunberg type, but a busy, sleep-deprived working mom. Two years of wildfires and heat waves in Portland had stirred up something sleeping inside her, a compulsion to prepare for disaster. She found herself up at night, pricing out water purification systems. For her birthday, she asked for a generator.
She understands how privileged she is; she describes her anxiety as a “luxury problem.” But still: The plastic toys in the bathtub made her anxious. The disposable diapers made her anxious. She began to ask herself, what is the relationship between the diapers and the wildfires?
“I feel like I have developed a phobia to my way of life,” she said.
An Idea on the Edge Spreads Out
Last fall, Ms. Black logged on for her first meeting with Dr. Doherty, who sat, on video, in front of a large, glossy photograph of evergreens.
At 56, he is one of the most visible authorities on climate in psychotherapy, and he hosts a podcast, “Climate Change and Happiness.” In his clinical practice, he reaches beyond standard treatments for anxiety, like cognitive behavioral therapy, to more obscure ones, like existential therapy, conceived to help people fight off despair, and ecotherapy, which explores the client’s relationship to the natural world.
He did not take the usual route to psychology; after graduating from Columbia University, he hitchhiked across the country to work on fishing boats in Alaska, then as a whitewater rafting guide — “the whole Jack London thing” — and as a Greenpeace fund-raiser. Entering graduate school in his 30s, he fell in naturally with the discipline of “ecopsychology.”
At the time, ecopsychology was, as he put it, a “woo-woo area,” with colleagues delving into shamanic rituals and Jungian deep ecology. Dr. Doherty had a more conventional focus, on the physiological effects of anxiety. But he had picked up on an idea that was, at that time, novel: that people could be affected by environmental decay even if they were not physically caught in a disaster.
Recent research has left little doubt that this is happening. A 10-country survey of 10,000 people aged 16 to 25 published last month in The Lancet found startling rates of pessimism. Forty-five percent of respondents said worry about climate negatively affected their daily life. Three-quarters said they believed “the future is frightening,” and 56 percent said “humanity is doomed.”
The blow to young people’s confidence appears to be more profound than with previous threats, such as nuclear war, Dr. Clayton said. “We’ve definitely faced big problems before, but climate change is described as an existential threat,” she said. “It undermines people’s sense of security in a basic way.”
Caitlin Ecklund, 37, a Portland therapist who finished graduate school in 2016, said that nothing in her training — in subjects like buried trauma, family systems, cultural competence and attachment theory — had prepared her to help the young women who began coming to her describing hopelessness and grief over climate. She looks back on those first interactions as “misses.”
“Climate stuff is really scary, so I went more toward soothing or normalizing,” said Ms. Ecklund, who is part of a group of therapists convened by Dr. Doherty to discuss approaches to climate. It has meant, she said, “deconstructing some of that formal old-school counseling that has implicitly made things people’s individual problems.”
‘Obviously, it would be nice to be happy’
Many of Dr. Doherty’s clients sought him out after finding it difficult to discuss climate with a previous therapist.
Caroline Wiese, 18, described her previous therapist as “a typical New Yorker who likes to follow politics and would read The New York Times, but also really didn’t know what a Keeling Curve was,” referring to the daily record of carbon dioxide concentration.
Ms. Wiese had little interest in “Freudian B.S.” She sought out Dr. Doherty for help with a concrete problem: The data she was reading was sending her into “multiday panic episodes” that interfered with her schoolwork.
In their sessions, she has worked to carefully manage what she reads, something she says she needs to sustain herself for a lifetime of work on climate. “Obviously, it would be nice to be happy,” she said, “but my goal is more to just be able to function.”
Frank Granshaw, 69, a retired professor of geology, wanted help hanging on to what he calls “realistic hope.”
He recalls a morning, years ago, when his granddaughter crawled into his lap and fell asleep, and he found himself overwhelmed with emotion, considering the changes that would occur in her lifetime. These feelings, he said, are simply easier to unpack with a psychologist who is well versed on climate. “I appreciate the fact that he is dealing with emotions that are tied into physical events,” he said.
As for Ms. Black, she had never quite accepted her previous therapist’s vague reassurances. Once she made an appointment with Dr. Doherty, she counted the days. She had a wild hope that he would say something that would simply cause the weight to lift.
That didn’t happen. Much of their first session was devoted to her doomscrolling, especially during the nighttime hours. It felt like a baby step.
“Do I need to read this 10th article about the climate summit?” she practiced asking herself. “Probably not.”
A Knot Loosens: ‘There Will Be Good Days’
Several sessions came and went before something really happened.
Ms. Black remembers going into an appointment feeling distraught. She had been listening to radio coverage of the international climate summit in Glasgow last fall and heard a scientist interviewed. What she perceived in his voice was flat resignation.
That summer, Portland had been trapped under a high-pressure system known as a “heat dome,” sending temperatures to 116 degrees. Looking at her own children, terrible images flashed through her head, like a field of fire. She wondered aloud: Were they doomed?
Dr. Doherty listened quietly. Then he told her, choosing his words carefully, that the rate of climate change suggested by the data was not as swift as what she was envisioning.
“In the future, even with worst-case scenarios, there will be good days,” he told her, according to his notes. “Disasters will happen in certain places. But, around the world, there will be good days. Your children will also have good days.”
At this, Ms. Black began to cry.
She is a contained person — she tends to deflect frightening thoughts with dark humor — so this was unusual. She recalled the exchange later as a threshold moment, the point when the knot in her chest began to loosen.
“I really trust that when I hear information from him, it’s coming from a deep well of knowledge,” she said. “And that gives me a lot of peace.”
Dr. Doherty recalled the conversation as “cathartic in a basic way.” It was not unusual, in his practice; many clients harbor dark fears about the future and have no way to express them. “It is a terrible place to be,” he said.
A big part of his practice is helping people manage guilt over consumption: He takes a critical view of the notion of a climate footprint, a construct he says was created by corporations in order to shift the burden to individuals.
He uses elements of cognitive behavioral therapy, like training clients to manage their news intake and look critically at their assumptions.
He also draws on logotherapy, or existential therapy, a field founded by Viktor E. Frankl, who survived German concentration camps and then wrote “Man’s Search for Meaning,” which described how prisoners in Auschwitz were able to live fulfilling lives.
“I joke, you know it’s bad when you’ve got to bring out the Viktor Frankl,” he said. “But it’s true. It is exactly right. It is of that scale. It is that consolation: that ultimately I make meaning, even in a meaningless world.”
At times, over the last few months, Ms. Black could feel some of the stress easing.
On weekends, she practices walking in the woods with her family without allowing her mind to flicker to the future. Her conversations with Dr. Doherty, she said, had “opened up my aperture to the idea that it’s not really on us as individuals to solve.”
Sometimes, though, she’s not sure that relief is what she wants. Following the news about the climate feels like an obligation, a burden she is meant to carry, at least until she is confident that elected officials are taking action.
Her goal is not to be released from her fears about the warming planet, or paralyzed by them, but something in between: She compares it to someone with a fear of flying, who learns to manage their fear well enough to fly.
“On a very personal level,” she said, “the small victory is not thinking about this all the time.”
Stories may be the most overlooked climate solution of all.
December 23, 2021
There is a lot of shouting about climate change, especially in North America and Europe. This makes it easy for the rest of the world to fall into a kind of silence—for Westerners to assume that they have nothing to add and should let the so-called “experts” speak. But we all need to be talking about climate change and amplifying the voices of those suffering the most.
Climate science is crucial, but by contextualizing that science with the stories of people actively experiencing climate change, we can begin to think more creatively about technological solutions.
This needs to happen not only at major international gatherings like COP26, but also in an everyday way. In any powerful rooms where decisions are made, there should be people who can speak firsthand about the climate crisis. Storytelling is an intervention into climate silence, an invitation to use the ancient human technology of connecting through language and narrative to counteract inaction. It is a way to get often powerless voices into powerful rooms.
That’s what I attempted to do by documenting stories of people already experiencing the effects of a climate in crisis.
In 2013, I was living in Boston during the marathon bombing. The city was put on lockdown, and when it lifted, all I wanted was to go outside: to walk and breathe and hear the sounds of other people. I needed to connect, to remind myself that not everyone is murderous. In a fit of inspiration, I cut open a broccoli box and wrote “Open call for stories” in Sharpie.
I wore the cardboard sign around my neck. People mostly stared. But some approached me. Once I started listening to strangers, I didn’t want to stop.
That summer, I rode my bicycle down the Mississippi River on a mission to listen to any stories that people had to share. I brought the sign with me. One story was so sticky that I couldn’t stop thinking about it for months, and it ultimately set me off on a trip around the world.
I met 57-year-old Franny Connetti 80 miles south of New Orleans, when I stopped in front of her office to check the air in my tires; she invited me in to get out of the afternoon sun. Franny shared her lunch of fried shrimp with me. Between bites she told me how Hurricane Isaac had washed away her home and her neighborhood in 2012.
Despite that tragedy, she and her husband moved back to their plot of land, in a mobile home, just a few months after the storm.
“We fight for the protection of our levees. We fight for our marsh every time we have a hurricane,” she told me. “I couldn’t imagine living anywhere else.”
Twenty miles ahead, I could see where the ocean lapped over the road at high tide. “Water on Road,” an orange sign read. Locals jokingly refer to the endpoint of Louisiana State Highway 23 as “The End of the World.” Imagining the road I had been biking underwater was chilling.
Here was one front line of climate change, one story. What would it mean, I wondered, to put this in dialogue with stories from other parts of the world—from other front lines with localized impacts that were experienced through water? My goal became to listen to and amplify those stories.
Water is how most of the world will experience climate change. It’s not a human construct, like a degree Celsius. It’s something we acutely see and feel. When there’s not enough water, crops die, fires rage, and people thirst. When there’s too much, water becomes a destructive force, washing away homes and businesses and lives. It’s almost always easier to talk about water than to talk about climate change. But the two are deeply intertwined.
I also set out to address another problem: the language we use to discuss climate change is often abstract and inaccessible. We hear about feet of sea-level rise or parts per million of carbon dioxide in the atmosphere, but what does this really mean for people’s everyday lives? I thought storytelling might bridge this divide.
One of the first stops on my journey was Tuvalu, a low-lying coral atoll nation in the South Pacific, 585 miles south of the equator. Home to around 10,000 people, Tuvalu is on track to become uninhabitable in my lifetime.
In 2014 Tauala Katea, a meteorologist, opened his computer to show me an image of a recent flood on one island. Seawater had bubbled up under the ground near where we were sitting. “This is what climate change looks like,” he said.
“In 2000, Tuvaluans living in the outer islands noticed that their taro and pulaka crops were suffering,” he said. “The root crops seemed rotten, and the size was getting smaller and smaller.” Taro and pulaka, two starchy staples of Tuvaluan cuisine, are grown in pits dug underground.
Tauala and his team traveled to the outer islands to take soil samples. The culprit was saltwater intrusion linked to sea-level rise. The seas have been rising four millimeters per year since measurements began in the early 1990s. While that might sound like a small amount, this change has a dramatic impact on Tuvaluans’ access to drinking water. The highest point is only 13 feet above sea level.
A lot has changed in Tuvalu as a result. The freshwater lens, a layer of groundwater that floats above denser seawater, has become salty and contaminated. Thatched roofs and freshwater wells are now a thing of the past. Each home now has a water tank attached to a corrugated-iron roof by a gutter. All the water for washing, cooking, and drinking now comes from the rain. This rainwater is boiled for drinking and used to wash clothes and dishes, as well as for bathing. The wells have been repurposed as trash heaps.
At times, families have to make tough decisions about how to allocate water. Angelina, a mother of three, told me that during a drought a few years ago, her middle daughter, Siulai, was only a few months old. She, her husband, and their oldest daughter could swim in the sea to wash themselves and their clothes. “We only saved water to drink and cook,” she said. But her newborn’s skin was too delicate to bathe in the ocean. The salt water would give her a horrible rash. That meant Angelina had to decide between having water to drink and to bathe her child.
The stories I heard about water and climate change in Tuvalu reflected a sharp division along generational lines. Tuvaluans my age—like Angelina—don’t see their future on the islands and are applying for visas to live in New Zealand. Older Tuvaluans see climate change as an act of God and told me they couldn’t imagine living anywhere else; they didn’t want to leave the bones of their ancestors, which were buried in their front yards. Some things just cannot be moved.
Organizations like the United Nations Development Programme are working to address climate change in Tuvalu by building seawalls and community water tanks. Ultimately these adaptations seem to be prolonging the inevitable. It is likely that within my lifetime, many Tuvaluans will be forced to call somewhere else home.
Tuvalu shows how climate change exacerbates both food and water insecurity—and how that insecurity drives migration. I saw this in many other places. Mess with the amount of water available in one location, and people will move.
In Thailand I met a modern dancer named Sun who moved to Bangkok from the rural north. He relocated to the city in part to practice his art, but also to take refuge from unpredictable rain patterns. Farming in Thailand is governed by the seasonal monsoons, which dump rain, fill river basins, and irrigate crops from roughly May to September. Or at least they used to. When we spoke in late May 2016, it was dry in Thailand. The rains were delayed. Water levels in the country’s biggest dams plummeted to less than 10% of their capacity—the worst drought in two decades.
“Right now it’s supposed to be the beginning of the rainy season, but there is no rain,” Sun told me. “How can I say it? I think the balance of the weather is changing. Some parts have a lot of rain, but some parts have none.” He leaned back in his chair, moving his hands like a fulcrum scale to express the imbalance. “That is the problem. The people who used to be farmers have to come to Bangkok because they want money and they want work,” he said. “There is no more work because of the weather.”
Migration to the city, in other words, is hastened by the rain. Any tech-driven climate solutions that fail to address climate migration—so central to the personal experience of Sun and many others in his generation around the world—will be at best incomplete, and at worst potentially dangerous. Solutions that address only one region, for example, could exacerbate migration pressures in another.
I heard stories about climate-driven food and water insecurity in the Arctic, too. Igloolik, Nunavut, 1,400 miles south of the North Pole, is a community of 1,700 people. Marie Airut, a 71-year-old elder, lives by the water. We spoke in her living room over cups of black tea.
“My husband died recently,” she told me. But when he was alive, they went hunting together in every season; it was their main source of food. “I’m not going to tell you what I don’t know. I’m going to tell you only the things that I have seen,” she said. In the 1970s and ’80s, the seal holes would open in late June, an ideal time for hunting baby seals. “But now if I try to go out hunting at the end of June, the holes are very big and the ice is really thin,” Marie told me. “The ice is melting too fast. It doesn’t melt from the top; it melts from the bottom.”
When the water is warmer, animals change their movement. Igloolik has always been known for its walrus hunting. But in recent years, hunters have had trouble reaching the animals. “I don’t think I can reach them anymore, unless you have 70 gallons of gas. They are that far now, because the ice is melting so fast,” Marie said. “It used to take us half a day to find walrus in the summer, but now if I go out with my boys, it would probably take us two days to get some walrus meat for the winter.”
Marie and her family used to make fermented walrus every year, “but this year I told my sons we’re not going walrus hunting,” she said. “They are too far.”
Para especialista, fenômeno precisa ser visto com cautela para que medo não se transforme em negacionismo
Enquanto conversava com a reportagem por telefone, o advogado Leandro Luz, 29, confessa que está nervoso. A angústia em sua fala se refere ao tema da conversa que envolve um de seus maiores medos: a crise climática.
Ler, ouvir e falar sobre aumento da temperatura na Terra, queimadas na Amazônia, derretimento de geleiras e desastres ambientais cada vez mais frequentes deixam Luz nervoso. Quando se depara com o tema, ele sente taquicardia e suor frio nas palmas das mãos e costas.
Até pouco tempo, ele não entendia bem o que sentia, até que descobriu sofrer da chamada eco-ansiedade. O termo, que aparece em um relatório divulgado pela Associação Americana de Psicologia em 2017 e foi incluído no dicionário Oxford no final de outubro de 2021, é descrito como um medo crônico sobre a destruição ambiental acompanhado do sentimento de culpa por contribuições individuais e o impacto disso nas gerações futuras.
A primeira vez que Luz prestou atenção às questões climática foi após o tsunami em Fukushima, no Japão, quando ondas gigantes mataram 18 mil pessoas. Hoje, ele vive em Salvador, mas conta que pensa em se mudar para o interior. “Converso com a minha namorada de morar longe da costa, mas sei que esses locais também serão afetados”, diz ele que relata viver em um grande dilema.
“Não sei como me comportar nos próximos 30 anos, procuro evitar o consumo desenfreado e evito produzir muito lixo plástico, mas sei que são atitudes muito pontuais que, a grosso modo, não vão mudar a realidade”.
O advogado, porém, também critica o governo sobre sua postura diante da crise climática. Para ele, por exemplo, a prioridade de autoridades deveria estar na mudança da matriz energética brasileira. “Mas, estamos no caminho oposto, voltamos a discutir a implementação de usinas de carvão para produção de energia no Brasil, algo que é totalmente rudimentar”.
Assim como Leandro Luz, a aluna do ensino médio Mariana dos Santos, 16, se recorda de chorar copiosamente quando criança após assistir a reportagens sobre mudança climática. Hoje, ela diz que apesar de não desabar mais diante das notícias, a ansiedade vira e mexe ainda a abala.
Ela costuma temer, por exemplo, o aumento do nível da água dos oceanos. “Penso nas cidades que podem desaparecer e as consequências que isso pode acarretar. Isso se torna uma bola de neve. Sei que não dá para fazer muito e é isso que desencadeia o desespero”, diz.
A estudante de gestão ambiental Maria Antônia Luna, 20, também descobriu recentemente que o aperto no peito, sensação de falta de ar ao ler notícias sobre o incêndio que atingiu o Pantanal em 2020 se referem à eco-ansiedade.
“A sensação é de uma angústia de que nada vai melhorar”, define ela que agora busca uma terapia que a ajude a enfrentar aflições relacionadas às crises climáticas, tópico frequente em sua graduação.
Mariana, Maria and Leandro are not isolated cases. A study published in The Lancet Planetary Health in early September analysed climate anxiety among young people in ten countries, including Brazil, the United States, India, the Philippines, Finland and France.
The article, a preprint (not yet peer-reviewed), surveyed 10,000 young people aged 16 to 25 and found that most feel fear, anger, sadness, despair, guilt and shame in the face of ecological problems.
In all, 58% consider that their governments have betrayed young people and future generations. Only the French and Finnish respondents did not agree with the statement in the majority. When the numbers are broken down by country, the sense of betrayal by both adults and governments is strongest among Brazilians (77%), followed by Indians (66%).
For Alexandre Araújo Costa, a physicist who has researched the climate crisis for 20 years, the survey also points to an optimistic reading: the potential for greater awareness among the young.
“They feel that Brazil is doing nothing to avoid the current situation, and that can be good for mobilisation,” says Costa. According to him, it is no longer possible to keep the subject out of public debate. “The mental-health consequences are worrying, but we cannot keep our children and young people in a bubble, telling them everything is fine, when we risk losing the Amazon,” he says.
The professor also argues that the situation should not be seen only as individual suffering, since everyone will end up affected in some way by the environmental crisis. “We need to replace this government that shrugs at the problem, or that is captured by economic interests pursuing only short-term profit,” he says.
Biologist Beatriz Ramos takes the same line as Costa. For her, the danger of eco-anxiety is the urge not to know what is happening. “When we turn away from the facts, we can slip into denial.”
“We need to say what is going to happen, how we can prevent it, what the possible solutions are, and explain that extreme events will increase, but that there are ways to adapt and we still have time to mitigate it. We cannot act only on optimism or only on a sense of apocalypse,” she says.
“I breathe wildfire smoke. It’s sad, but it’s the way I found not to hide. The sense of powerlessness shrinks; I feel I’m not standing still watching the destruction,” says Ramos, who recounts witnessing harrowing scenes of animals dying in agony during the worst moments of last year.
Anguish over the climate crisis seems increasingly widespread, and it mainly affects the young. In Portugal, according to a report published by the Lusa news agency, the term poses a new challenge for psychologists. In Brazil, specialists say, use of the term is still emerging.
The anthropologist Rodrigo Toniol, for one, does not believe the diagnosis will catch on. “I don’t think we will walk into a consulting room and find it a diagnosis at hand for every psychiatrist, but I do think it is a relevant symptom that points to problems linked to the lack of a social pact,” he says.
For Christian Dunker, a psychoanalyst and professor at the University of São Paulo’s Institute of Psychology, the effects of climate-driven anxiety are collateral. What he actually notices in his practice, he reflects, is a growing sense of injustice about situations that demand action that is not being taken, such as social inequality, racism, homophobia and gender inequality.
“Within this shift in our indignation arises a situation in which we come to see the planet as a someone rather than a something,” he says.
Research shows that a positive attitude to ageing can lead to a longer, healthier life, while negative beliefs can have hugely detrimental effects
For more than a decade, Paddy Jones has been wowing audiences across the world with her salsa dancing. She came to fame on the Spanish talent show Tú Sí Que Vales (You’re Worth It) in 2009 and has since found success in the UK, through Britain’s Got Talent; in Germany, on Das Supertalent; in Argentina, on the dancing show Bailando; and in Italy, where she performed at the Sanremo music festival in 2018 alongside the band Lo Stato Sociale.
Jones also happens to be in her mid-80s, making her the world’s oldest acrobatic salsa dancer, according to Guinness World Records. Growing up in the UK, Jones had been a keen dancer and had performed professionally before she married her husband, David, at 22 and had four children. It was only in retirement that she began dancing again – to widespread acclaim. “I don’t plead my age because I don’t feel 80 or act it,” Jones told an interviewer in 2014.
According to a wealth of research that now spans five decades, we would all do well to embrace the same attitude – since it can act as a potent elixir of life. People who see the ageing process as a potential for personal growth tend to enjoy much better health into their 70s, 80s and 90s than people who associate ageing with helplessness and decline, differences that are reflected in their cells’ biological ageing and their overall life span.
Of all the claims I have investigated for my new book on the mind-body connection, the idea that our thoughts could shape our ageing and longevity was by far the most surprising. The science, however, turns out to be incredibly robust. “There’s just such a solid base of literature now,” says Prof Allyson Brothers at Colorado State University. “There are different labs in different countries using different measurements and different statistical approaches and yet the answer is always the same.”
If I could turn back time
The first hints that our thoughts and expectations could either accelerate or decelerate the ageing process came from a remarkable experiment by the psychologist Ellen Langer at Harvard University.
In 1979, she asked a group of 70- and 80-year-olds to complete various cognitive and physical tests, before taking them to a week-long retreat at a nearby monastery that had been redecorated in the style of the late 1950s. Everything at the location, from the magazines in the living room to the music playing on the radio and the films available to watch, was carefully chosen for historical accuracy.
The researchers asked the participants to live as if it were 1959. They had to write a biography of themselves for that era in the present tense and they were told to act as independently as possible. (They were discouraged from asking for help to carry their belongings to their room, for example.) The researchers also organised twice-daily discussions in which the participants had to talk about the political and sporting events of 1959 as if they were currently in progress – without talking about events since that point. The aim was to evoke their younger selves through all these associations.
To create a comparison, the researchers ran a second retreat a week later with a new set of participants. While factors such as the decor, diet and social contact remained the same, these participants were asked to reminisce about the past, without overtly acting as if they were reliving that period.
Most of the participants showed some improvements from the baseline tests to the after-retreat ones, but it was those in the first group, who had more fully immersed themselves in the world of 1959, who saw the greatest benefits. Sixty-three per cent made a significant gain on the cognitive tests, for example, compared to just 44% in the control condition. Their vision became sharper, their joints more flexible and their hands more dextrous, as some of the inflammation from their arthritis receded.
As enticing as these findings might seem, Langer’s study was based on a very small sample size. Extraordinary claims need extraordinary evidence and the idea that our mindset could somehow influence our physical ageing is about as extraordinary as scientific theories come.
Becca Levy, at the Yale School of Public Health, has been leading the way to provide that proof. In one of her earliest – and most eye-catching – papers, she examined data from the Ohio Longitudinal Study of Aging and Retirement, which had followed more than 1,000 participants since 1975.
The participants’ average age at the start of the survey was 63 years old and soon after joining they were asked to give their views on ageing. For example, they were asked to rate their agreement with the statement: “As you get older, you are less useful”. Quite astonishingly, Levy found the average person with a more positive attitude lived on for 22.6 years after the study commenced, while the average person with poorer interpretations of ageing survived for just 15 years. That link remained even after Levy had controlled for their actual health status at the start of the survey, as well as other known risk factors, such as socioeconomic status or feelings of loneliness, which could influence longevity.
The implications of the finding are as remarkable today as they were in 2002, when the study was first published. “If a previously unidentified virus was found to diminish life expectancy by over seven years, considerable effort would probably be devoted to identifying the cause and implementing a remedy,” Levy and her colleagues wrote. “In the present case, one of the likely causes is known: societally sanctioned denigration of the aged.”
Later studies have since reinforced the link between people’s expectations and their physical ageing, while dismissing some of the more obvious – and less interesting – explanations. You might expect that people’s attitudes would reflect their decline rather than contribute to the degeneration, for example. Yet many people will endorse certain ageist beliefs, such as the idea that “old people are helpless”, long before they should have started experiencing age-related disability themselves. And Levy has found that those kinds of views, expressed in people’s mid-30s, can predict their subsequent risk of cardiovascular disease up to 38 years later.
The most recent findings suggest that age beliefs may play a key role in the development of Alzheimer’s disease. Tracking 4,765 participants over four years, the researchers found that positive expectations of ageing halved the risk of developing the disease, compared to those who saw old age as an inevitable period of decline. Astonishingly, this was even true of people who carried a harmful variant of the APOE gene, which is known to render people more susceptible to the disease. The positive mindset can counteract an inherited misfortune, protecting against the build-up of the toxic plaques and neuronal loss that characterise the disease.
How could this be?
Behaviour is undoubtedly important. If you associate age with frailty and disability, you may be less likely to exercise as you get older and that lack of activity is certainly going to increase your predisposition to many illnesses, including heart disease and Alzheimer’s.
Importantly, however, our age beliefs can also have a direct effect on our physiology. Elderly people who have been primed with negative age stereotypes tend to have higher systolic blood pressure in response to challenges, while those who have seen positive stereotypes demonstrate a more muted reaction. This makes sense: if you believe that you are frail and helpless, small difficulties will start to feel more threatening. Over the long term, this heightened stress response increases levels of the hormone cortisol and bodily inflammation, which could both raise the risk of ill health.
The consequences can even be seen within the nuclei of the individual cells, where our genetic blueprint is stored. Our genes are wrapped tightly in each cell’s chromosomes, which have tiny protective caps, called telomeres, which keep the DNA stable and stop it from becoming frayed and damaged. Telomeres tend to shorten as we age and this reduces their protective abilities and can cause the cell to malfunction. In people with negative age beliefs, that process seems to be accelerated – their cells look biologically older. In those with the positive attitudes, it is much slower – their cells look younger.
For many scientists, the link between age beliefs and long-term health and longevity is practically beyond doubt. “It’s now very well established,” says Dr David Weiss, who studies the psychology of ageing at Martin-Luther University of Halle-Wittenberg in Germany. And it has critical implications for people of all generations.
Our culture is saturated with messages that reinforce the damaging age beliefs. Just consider greetings cards, which commonly play on images depicting confused and forgetful older people. “The other day, I went to buy a happy 70th birthday card for a friend and I couldn’t find a single one that wasn’t a joke,” says Martha Boudreau, the chief communications officer of AARP, a special interest group (formerly known as the American Association of Retired Persons) that focuses on the issues of over-50s.
She would like to see greater awareness – and intolerance – of age stereotypes, in much the same way that people now show greater sensitivity to sexism and racism. “Celebrities, thought leaders and influencers need to step forward,” says Boudreau.
We could all, in other words, learn to live like Paddy Jones.
When I interviewed Jones, she was careful to emphasise the potential role of luck in her good health. But she agrees that many people have needlessly pessimistic views of their capabilities, over what could be their golden years, and encourages them to question the supposed limits. “If you feel there’s something you want to do, and it inspires you, try it!” she told me. “And if you find you can’t do it, then look for something else you can achieve.”
Whatever our current age, that’s surely a winning attitude that will set us up for greater health and happiness for decades to come.
This is an edited extract from The Expectation Effect: How Your Mindset Can Transform Your Life by David Robson, published by Canongate on 6 January (£18.99).
From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer
by Stephen Law
Stephen Law is a philosopher and author. He is director of philosophy at the Department of Continuing Education at the University of Oxford, and editor of Think, the Royal Institute of Philosophy journal. He researches primarily in the fields of philosophy of religion, philosophy of mind, Ludwig Wittgenstein, and essentialism. His books for a popular audience include The Philosophy Gym (2003), The Complete Philosophy Files (2000) and Believing Bullshit (2011). He lives in Oxford.
Many people believe in extraordinary hidden beings, including demons, angels, spirits and gods. Plenty also believe in supernatural powers, including psychic abilities, faith healing and communication with the dead. Conspiracy theories are also popular, including that the Holocaust never happened and that the terrorist attacks on the United States of 11 September 2001 were an inside job. And, of course, many trust in alternative medicines such as homeopathy, the effectiveness of which seems to run contrary to our scientific understanding of how the world actually works.
Such beliefs are widely considered to be at the ‘weird’ end of the spectrum. But, of course, just because a belief involves something weird doesn’t mean it’s not true. As science keeps reminding us, reality often is weird. Quantum mechanics and black holes are very weird indeed. So, while ghosts might be weird, that’s no reason to dismiss belief in them out of hand.
I focus here on a particular kind of ‘weird’ belief: not only are these beliefs that concern the enticingly odd, they’re also beliefs that the general public finds particularly difficult to assess.
Almost everyone agrees that, when it comes to black holes, scientists are the relevant experts, and scientific investigation is the right way to go about establishing whether or not they exist. However, when it comes to ghosts, psychic powers or conspiracy theories, we often hold wildly divergent views not only about how reasonable such beliefs are, but also about what might count as strong evidence for or against them, and who the relevant authorities are.
Take homeopathy, for example. Is it reasonable to focus only on what scientists have to say? Shouldn’t we give at least as much weight to the testimony of the many people who claim to have benefitted from homeopathic treatment? While most scientists are sceptical about psychic abilities, what of the thousands of reports from people who claim to have received insights from psychics who could only have known what they did if they really do have some sort of psychic gift? To what extent can we even trust the supposed scientific ‘experts’? Might not the scientific community itself be part of a conspiracy to hide the truth about Area 51 in Nevada, Earth’s flatness or the 9/11 terrorist attacks being an inside job?
Most of us really struggle when it comes to assessing such ‘weird’ beliefs – myself included. Of course, we have our hunches about what’s most likely to be true. But when it comes to pinning down precisely why such beliefs are or aren’t reasonable, even the most intelligent and well educated of us can quickly find ourselves out of our depth. For example, while most would pooh-pooh belief in fairies, Arthur Conan Doyle, the creator of the quintessentially rational detective Sherlock Holmes, actually believed in them and wrote a book presenting what he thought was compelling evidence for their existence.
When it comes to weird beliefs, it’s important we avoid being closed-minded naysayers with our fingers in our ears, but it’s also crucial that we avoid being credulous fools. We want, as far as possible, to be reasonable.
I’m a philosopher who has spent a great deal of time thinking about the reasonableness of such ‘weird’ beliefs. Here I present five key pieces of advice that I hope will help you figure out for yourself what is and isn’t reasonable.
Let’s begin with an illustration of the kind of case that can so spectacularly divide opinion. In 1976, six workers reported a UFO over the site of a nuclear plant being constructed near the town of Apex, North Carolina. A security guard then reported a ‘strange object’. The police officer Ross Denson drove over to investigate and saw what he described as something ‘half the size of the Moon’ hanging over the plant. The police also took a call from local air traffic control about an unidentified blip on their radar.
The next night, the UFO appeared again. The deputy sheriff described ‘a large lighted object’. An auxiliary officer reported five lighted objects that appeared to be burning and about 20 times the size of a passing plane. The county magistrate described a rectangular football-field-sized object that looked like it was on fire.
Finally, the press got interested. Reporters from the Star newspaper drove over to investigate. They too saw the UFO. But when they tried to drive nearer, they discovered that, weirdly, no matter how fast they drove, they couldn’t get any closer.
This report, drawn from Philip J Klass’s book UFOs: The Public Deceived (1983), is impressive: it involves multiple eyewitnesses, including police officers, journalists and even a magistrate. Their testimony is even backed up by hard evidence – that radar blip.
Surely, many would say, given all this evidence, it’s reasonable to believe there was at least something extraordinary floating over the site. Anyone who failed to believe at least that much would be excessively sceptical – one of those perpetual naysayers whose kneejerk reaction, no matter how strong the evidence, is always to pooh-pooh.
What’s most likely to be true: that there really was something extraordinary hanging over the power plant, or that the various eyewitnesses had somehow been deceived? Before we answer, here’s my first piece of advice.
Think it through
1. Expect unexplained false sightings and huge coincidences
Our UFO story isn’t over yet. When the Star’s two-man investigative team couldn’t get any closer to the mysterious object, they eventually pulled over. The photographer took out his long lens to take a look: ‘Yep … that’s the planet Venus all right.’ It was later confirmed beyond any reasonable doubt that what all the witnesses had seen was just a planet. But what about that radar blip? It was a coincidence, perhaps caused by a flock of birds or unusual weather.
What moral should we draw from this case? Not, of course, that because this UFO report turned out to have a mundane explanation, all such reports can be similarly dismissed. But notice that, had the reporters not discovered the truth, this story would likely have gone down in the annals of ufology as one of the great unexplained cases. The moral I draw is that UFO cases that have multiple eyewitnesses and even independent hard evidence (the radar blip) may well crop up occasionally anyway, even if there are no alien craft in our skies.
We tend significantly to underestimate how prone to illusion and deception we are when it comes to the wacky and weird. In particular, we have a strong tendency to overdetect agency – to think we are witnessing a person, an alien or some other sort of creature or being – where in truth there’s none.
Psychologists have developed theories to account for this tendency to overdetect agency, including that we have evolved what’s called a hyperactive agency detecting device. Had our ancestors missed an agent – a sabre-toothed tiger or a rival, say – that might well have reduced their chances of surviving and reproducing. Believing an agent is present when it’s not, on the other hand, is likely to be far less costly. Consequently, we’ve evolved to err on the side of overdetection – often seeing agency where there is none. For example, when we observe a movement or pattern we can’t understand, such as the retrograde motion of a planet in the night sky, we’re likely to think the movement is explained by some hidden agent working behind the scenes (that Mars is actually a god, say).
One example of our tendency to overdetect agency is pareidolia: our tendency to find patterns – and, in particular, faces – in random noise. Stare at passing clouds or into the embers of a fire, and it’s easy to interpret the randomly generated shapes we see as faces, often spooky ones, staring back.
And, of course, nature is occasionally going to throw up the face-like patterns just by chance. One famous illustration was produced in 1976 by the Mars probe Viking Orbiter 1. As the probe passed over the Cydonia region, it photographed what appeared to be an enormous, reptilian-looking face 800 feet high and nearly 2 miles long. Some believe this ‘face on Mars’ was a relic of an ancient Martian civilisation, a bit like the Great Sphinx of Giza in Egypt. A book called The Monuments of Mars: A City on the Edge of Forever (1987) even speculated about this lost civilisation. However, later photos revealed the ‘face’ to be just a hill that looks face-like when lit a certain way. Take enough photos of Mars, and some will reveal face-like features just by chance.
The fact is, we should expect huge coincidences. Millions of pieces of bread are toasted each morning. One or two will exhibit face-like patterns just by chance, even without divine intervention. One such piece of toast that was said to show the face of the Virgin Mary (how do we know what she looked like?) was sold for $28,000. We think about so many people each day that eventually we’ll think about someone, the phone will ring, and it will be them. That’s to be expected, even if we’re not psychic. Yet many put down such coincidences to supernatural powers.
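The back-of-the-envelope arithmetic behind “expect huge coincidences” is easy to make explicit. The numbers below are invented purely for illustration (the article gives no per-slice probability): even if a face-like pattern were a one-in-a-million event per slice of toast, millions of slices a day would make such “miracles” a near-certainty.

```python
# Illustrative arithmetic for "expect huge coincidences".
# Assumed numbers (not from the article): a face-like pattern is a
# one-in-a-million event per slice, and 10 million slices are toasted daily.
p_face = 1e-6                # chance one slice shows a face-like pattern
slices_per_day = 10_000_000  # slices toasted per day

# Expected number of "miracle" slices per day:
expected = p_face * slices_per_day

# Probability that at least one slice shows a face on a given day:
p_at_least_one = 1 - (1 - p_face) ** slices_per_day

print(f"expected face-slices per day: {expected:.0f}")   # about 10
print(f"P(at least one today): {p_at_least_one:.4f}")    # essentially 1
```

With these assumed figures, a face in the toast somewhere on Earth is not merely possible but expected roughly ten times a day, no divine intervention required.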
2. Understand what strong evidence actually is
When is a claim strongly confirmed by a piece of evidence? The following principle appears correct (it captures part of what confirmation theorists call the Bayes factor; for more on Bayesian approaches to assessing evidence, see the link at the end):
Evidence confirms a claim to the extent that the evidence is more likely if the claim is true than if it’s false.
Here’s a simple illustration. Suppose I’m in the basement and can’t see outside. Jane walks in with a wet coat and umbrella and tells me it’s raining. That’s pretty strong evidence it’s raining. Why? Well, it is of course possible that Jane is playing a prank on me with her wet coat and brolly. But it’s far more likely she would appear with a wet coat and umbrella and tell me it’s raining if that’s true than if it’s false. In fact, given just this new evidence, it may well be reasonable for me to believe it’s raining.
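The rain example can be put into numbers. The two probabilities below are invented for illustration; the point is only the ratio between them, which confirmation theorists call the Bayes factor.

```python
# A numerical sketch of the evidence principle, with invented probabilities:
# how likely is Jane's wet coat and umbrella if it IS raining,
# versus if it is NOT (e.g. she's playing a prank)?
p_evidence_given_rain = 0.9   # wet coat very likely if raining (assumed)
p_evidence_given_dry = 0.01   # pranks are possible but rare (assumed)

# The Bayes factor: how many times more likely the evidence is
# if the claim is true than if it's false.
bayes_factor = p_evidence_given_rain / p_evidence_given_dry

print(f"Bayes factor: {bayes_factor:.0f}")  # about 90
```

A Bayes factor far above 1 means the evidence strongly favours the claim; a factor near 1 (like the Mars face, which we would expect to see whether or not Martians existed) means the evidence is weak or worthless.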
Here’s another example. Sometimes whales and dolphins are found with atavistic limbs – leg-like structures – where legs would be found on land mammals. These discoveries strongly confirm the theory that whales and dolphins evolved from earlier limbed, land-dwelling species. Why? Because, while atavistic limbs aren’t probable given the truth of that theory, they’re still far more probable than they would be if whales and dolphins weren’t the descendants of such limbed creatures.
The Mars face, on the other hand, provides an example of weak or non-existent evidence. Yes, if there was an ancient Martian civilisation, then we might discover what appeared to be a huge face built on the surface of the planet. However, given pareidolia and the likelihood of face-like features being thrown up by chance, it’s about as likely that we would find such face-like features anyway, even if there were no alien civilisation. That’s why such features fail to provide strong evidence for such a civilisation.
So now consider our report of the UFO hanging over the nuclear power construction site. Are several such cases involving multiple witnesses and backed up by some hard evidence (eg, a radar blip) good evidence that there are alien craft in our skies? No. We should expect such hard-to-explain reports anyway, whether or not we’re visited by aliens. In which case, such reports are not strong evidence of alien visitors.
Being sceptical about such reports of alien craft, ghosts or fairies is not knee-jerk, fingers-in-our-ears naysaying. It’s just recognising that, though we might not be able to explain the reports, they’re likely to crop up occasionally anyway, whether or not alien visitors, ghosts or fairies actually exist. Consequently, they fail to provide strong evidence for such beings.
3. Extraordinary claims require extraordinary evidence
It was the scientist Carl Sagan who in 1980 said: ‘Extraordinary claims require extraordinary evidence.’ By an ‘extraordinary’ claim, Sagan appears to have meant an extraordinarily improbable claim, such as that Alice can fly by flapping her arms, or that she can move objects with her mind. On Sagan’s view, such claims require extraordinarily strong evidence before we should accept them – much stronger than the evidence required to support a far less improbable claim.
Suppose for example that Fred claims Alice visited him last night, sat on his sofa and drank a cup of tea. Ordinarily, we would just take Fred’s word for that. But suppose Fred adds that, during her visit, Alice flew around the room by flapping her arms. Of course, we’re not going to just take Fred’s word for that. It’s an extraordinary claim requiring extraordinary evidence.
If we’re starting from a very low base, probability-wise, then much more heavy lifting needs to be done by the evidence to raise the probability of the claim to a point where it might be reasonable to believe it. Clearly, Fred’s testimony about Alice flying around the room is not nearly strong enough.
Similarly, given the low prior probability of the claims that someone communicated with a dead relative, or has fairies living in their local wood, or has miraculously raised someone from the dead, or can move physical objects with their mind, we should similarly set the evidential bar much higher than we would for more mundane claims.
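Sagan’s slogan falls out of the same arithmetic: posterior odds are prior odds multiplied by the Bayes factor. The sketch below uses invented numbers to show why the very same quality of testimony that settles a mundane claim barely dents an extraordinary one.

```python
# Posterior odds = prior odds * Bayes factor.
# Invented numbers: the same strength of testimony (Bayes factor 90)
# applied to a mundane claim and to an extraordinary one.

def posterior_prob(prior: float, bayes_factor: float) -> float:
    """Update a prior probability using a likelihood-ratio Bayes factor."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Mundane: Alice visited and drank tea (assumed prior: 50/50).
mundane = posterior_prob(prior=0.5, bayes_factor=90)

# Extraordinary: Alice flew by flapping her arms (assumed prior: 1 in a billion).
extraordinary = posterior_prob(prior=1e-9, bayes_factor=90)

print(f"mundane claim:       {mundane:.3f}")       # roughly 0.99
print(f"extraordinary claim: {extraordinary:.1e}") # still vanishingly small
```

Starting from a one-in-a-billion prior, a ninety-fold boost still leaves the claim at odds of about one in eleven million: the evidential bar really does have to be set far higher.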
4. Beware accumulated anecdotes
Once we’ve formed an opinion, it can be tempting to notice only evidence that supports it and to ignore the rest. Psychologists call this tendency confirmation bias.
For example, suppose Simon claims a psychic ability to know the future. He can provide 100 examples of his predictions coming true, including one or two dramatic examples. In fact, Simon once predicted that a certain celebrity would die within 12 months, and they did!
Do these 100 examples provide us with strong evidence that Simon really does have some sort of psychic ability? Not if Simon actually made many thousands of predictions and most didn’t come true. Still, if we count only Simon’s ‘hits’ and ignore his ‘misses’, it’s easy to create the impression that he has some sort of ‘gift’.
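The hits-only bookkeeping is easy to simulate. The figures below are made up: a “psychic” with no ability at all, making thousands of guesses each of which has a small chance of coming true by luck, will still rack up an impressive-sounding pile of hits.

```python
import random

# Simulate a "psychic" with no genuine ability.
# Assumed numbers (illustrative only): 5,000 predictions, each with a
# 2% chance of coming true by sheer luck.
random.seed(0)  # fixed seed so the run is reproducible
predictions = 5_000
p_lucky_hit = 0.02

hits = sum(random.random() < p_lucky_hit for _ in range(predictions))

print(f"hits:   {hits}")                         # on the order of 100
print(f"misses: {predictions - hits}")
```

Report only the roughly 100 hits and the performance looks uncanny; report the thousands of misses alongside them and the ‘gift’ evaporates.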
Confirmation bias can also create the false impression that a therapy is effective. A long list of anecdotes about patients whose condition improved after a faith healing session can seem impressive. People may say: ‘Look at all this evidence! Clearly this therapy has some benefits!’ But the truth is that such accumulated anecdotes are usually largely worthless as evidence.
It’s also worth remembering that such stories are in any case often dubious. For example, they can be generated by the power of suggestion: tell people that a treatment will improve their condition, and many will report that it has, even if the treatment actually offers no genuine medical benefit.
Impressive anecdotes can also be generated by means of a little creative interpretation. Many believe that the 16th-century seer Nostradamus predicted many important historical events, from the Great Fire of London to the assassination of John F Kennedy. However, because Nostradamus’s prophecies are so vague, nobody was able to use his writings to predict any of these events before they occurred. Rather, his texts were later creatively interpreted to fit what subsequently happened. But that sort of ‘fit’ can be achieved whether Nostradamus had extraordinary abilities or not. In which case, as we saw under point 2 above, the ‘fit’ is not strong evidence of such abilities.
5. Beware ‘But it fits!’
Often, when we’re presented with strong evidence that our belief is false, we can easily change our mind. Show me I’m mistaken in believing that the Matterhorn is near Chamonix, and I’ll just drop that belief.
However, abandoning a belief isn’t always so easy. That’s particularly the case for beliefs in which we have invested a great deal emotionally, socially and/or financially. When it comes to religious and political beliefs, for example, or beliefs about the character of our close relatives, we can find it extraordinarily difficult to change our minds. Psychologists refer to the discomfort we feel in such situations – when our beliefs or attitudes are in conflict – as cognitive dissonance.
Perhaps the most obvious strategy we can employ when a belief in which we have invested a great deal is threatened is to start explaining away the evidence.
Here’s an example. Dave believes dogs are spies from the planet Venus – that dogs are Venusian imposters on Earth sending secret reports back to Venus in preparation for their imminent invasion of our planet. Dave’s friends present him with a great deal of evidence that he’s mistaken. But, given a little ingenuity, Dave finds he can always explain away that evidence:
‘Dave, dogs can’t even speak – how can they communicate with Venus?’
‘They can speak, they just hide their linguistic ability from us.’
‘But Dave, dogs don’t have transmitters by which they could relay their messages to Venus – we’ve searched their baskets: nothing there!’
‘Their transmitters are hidden in their brain!’
‘But we’ve X-rayed this dog’s brain – no transmitter!’
‘The transmitters are made from organic material indistinguishable from ordinary brain stuff.’
‘But we can’t detect any signals coming from dogs’ heads.’
‘This is advanced alien technology – beyond our ability to detect it!’
‘Look Dave, Venus can’t support dog life – it’s incredibly hot and swathed in clouds of acid.’
‘The dogs live in deep underground bunkers to protect them. Why do you think they want to leave Venus?!’
You can see how this conversation might continue ad infinitum. No matter how much evidence is presented to Dave, it’s always possible for him to cook up another explanation. And so he can continue to insist his belief is logically consistent with the evidence.
But, of course, despite the possibility of his endlessly explaining away any and all counterevidence, Dave’s belief is absurd. It’s certainly not confirmed by the available evidence about dogs. In fact, it’s powerfully disconfirmed.
The moral is: showing that your theory can be made to ‘fit’ – be consistent with – the evidence is not the same thing as showing your theory is confirmed by the evidence. However, those who hold weird beliefs often muddle consistency and confirmation.
Take young-Earth creationists, for example. They believe in the literal truth of the Biblical account of creation: that the entire Universe is under 10,000 years old, with all species being created as described in the Book of Genesis.
Polls indicate that a third or more of US citizens believe that the Universe is less than 10,000 years old. Of course, there’s a mountain of evidence against the belief. However, its proponents are adept at explaining away that evidence.
Take the fossil record embedded in sedimentary layers revealing that today’s species evolved from earlier species over many millions of years. Many young-Earth creationists explain away this record as a result of the Biblical flood, which they suppose drowned and then buried living things in huge mud deposits. The particular ordering of the fossils is supposedly accounted for by different ecological zones being submerged one after the other, starting with simple marine life. Take a look at the Answers in Genesis website developed by the Bible literalist Ken Ham, and you’ll discover how a great deal of other evidence for evolution and a billions-of-years-old Universe is similarly explained away. Ham believes that, by explaining away the evidence against young-Earth creationism in this way, he can show that his theory ‘fits’ – and so is scientifically confirmed by – that evidence:
Increasing numbers of scientists are realising that when you take the Bible as your basis and build your models of science and history upon it, all the evidence from the living animals and plants, the fossils, and the cultures fits. This confirms that the Bible really is the Word of God and can be trusted totally. [my italics]
According to Ham, young-Earth creationists and evolutionists do the same thing: they look for ways to make the evidence fit the theory to which they have already committed themselves:
Evolutionists have their own framework … into which they try to fit the data. [my italics]
But, of course, scientists haven’t just found ways of showing how the theory of evolution can be made consistent with the evidence. As we saw above, that theory really is strongly confirmed by the evidence.
Any theory, no matter how absurd, can, with sufficient ingenuity, be made to ‘fit’ the evidence: even Dave’s theory that dogs are Venusian spies. That’s not to say it’s reasonable or well confirmed.
Of course, it’s not always unreasonable to explain away evidence. Given overwhelming evidence that water boils at 100 degrees Celsius at 1 atmosphere, a single experiment that appeared to contradict that claim might reasonably be explained away as a result of some unidentified experimental error. But as we increasingly come to rely on explaining away evidence in order to try to convince ourselves of the reasonableness of our belief, we begin to drift into delusion.
Key points – How to think about weird things
Expect unexplained false sightings and huge coincidences. Reports of mysterious and extraordinary hidden agents – such as angels, demons, spirits and gods – are to be expected, whether or not such beings exist. Huge coincidences – such as a piece of toast looking very face-like – are also more or less inevitable.
Understand what strong evidence is. If the alleged evidence for a belief is scarcely more likely if the belief is true than if it’s false, then it’s not strong evidence.
Extraordinary claims require extraordinary evidence. If a claim is extraordinarily improbable – eg, the claim that Alice flew round the room by flapping her arms – much stronger evidence is required for reasonable belief than is required for belief in a more mundane claim, such as that Alice drank a cup of tea.
Beware accumulated anecdotes. A large number of reports of, say, people recovering after taking an alternative medicine or visiting a faith healer is not strong evidence that such treatments actually work.
Beware ‘But it fits!’ Any theory, no matter how ludicrous (even the theory that dogs are spies from Venus), can, with sufficient ingenuity, always be made logically consistent with the evidence. That’s not to say it’s confirmed by the evidence.
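The ‘strong evidence’ and ‘extraordinary claims’ points above can be made precise with Bayes’ rule: evidence is strong only insofar as it is much more likely if the claim is true than if it is false, and even strong evidence barely moves a claim that starts out extraordinarily improbable. Here is a toy sketch; the function and all the numbers are illustrative assumptions, not anything from the text:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: updated probability of hypothesis H after evidence E."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Mundane claim: Alice drank a cup of tea (prior ~ 0.5), supported by an
# eyewitness report that is 10x more likely if the claim is true.
mundane = posterior(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.09)

# Extraordinary claim: Alice flew by flapping her arms (tiny prior),
# backed by exactly the same quality of eyewitness evidence.
extraordinary = posterior(prior=1e-9, p_e_given_h=0.9, p_e_given_not_h=0.09)

print(round(mundane, 3))       # ~0.909: the mundane claim becomes near-certain
print(f"{extraordinary:.0e}")  # ~1e-08: the extraordinary claim stays negligible
```

The same likelihood ratio of 10 that settles the tea question leaves flying-Alice essentially as improbable as before, which is why extraordinary claims demand evidence with a far more extreme likelihood ratio.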
Why it matters
Sometimes, belief in weird things is pretty harmless. What does it matter if Mary believes there are fairies at the bottom of her garden, or Joe thinks his dead aunty visits him occasionally? What does it matter if Sally is a closed-minded naysayer when it comes to belief in psychic powers? However, many of these beliefs have serious consequences.
Clearly, people can be exploited. Grieving parents contact spiritualists who offer to put them in contact with their dead children. Peddlers of alternative medicine and faith healing charge exorbitant fees for their ‘cures’ for terminal illnesses. If some alternative medicines really work, casually dismissing them out of hand and refusing to properly consider the evidence could also cost lives.
Lives have certainly been lost. Many have died who might have been saved because they believed they should reject conventional medicine and opted for ineffective alternatives.
Huge amounts of money are often also at stake when it comes to weird beliefs. Psychic reading and astrology are huge businesses with turnovers of billions of dollars per year. Often, it’s the most desperate who will turn to such businesses for advice. Are they, in reality, throwing their money away?
Many ‘weird’ beliefs also have huge social and political implications. The former US president Ronald Reagan and his wife Nancy were reported to have consulted an astrologer before making any major political decision. Conspiracy theories such as QAnon and the Sandy Hook hoax shape our current political landscape and feed extremist political thinking. Mainstream religions are often committed to miracles and gods.
In short, when it comes to belief in weird things, the stakes can be very high indeed. It matters that we don’t delude ourselves into thinking we’re being reasonable when we’re not.
Links & books
The Atlantic article ‘The Cognitive Biases Tricking Your Brain’ (2018) by Ben Yagoda provides a great introduction to thinking that can lead us astray, including confirmation bias.
The UK-based magazine The Skeptic provides some high-quality free articles on belief in weird things. Well worth a subscription.
The Skeptical Inquirer magazine in the US is also excellent, and provides some free content.
The RationalWiki portal provides many excellent articles on pseudoscience.
The British mathematician Norman Fenton, professor of risk information management at Queen Mary University of London, provides a brief online introduction to Bayesian approaches to assessing evidence.
My book Believing Bullshit: How Not to Get Sucked into an Intellectual Black Hole (2011) identifies eight tricks of the trade that can turn flaky ideas into psychological flytraps – and how to avoid them.
The textbook How to Think About Weird Things: Critical Thinking for a New Age (2019, 8th ed) by the philosophers Theodore Schick and Lewis Vaughn offers step-by-step advice on sorting through reasons, evaluating evidence and judging the veracity of a claim.
The book Critical Thinking (2017) by Tom Chatfield offers a toolkit for what he calls ‘being reasonable in an unreasonable world’.
In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?
In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’
Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:
Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?
Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.
Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.
Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.
Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.
A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.
This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’
An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.
The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’
Many philosophers since Ryle have considered the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:
Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.
But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Apollo at Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’
Self-awareness is something that can be cultivated
Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.
Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.
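The ventriloquism effect described above is often modelled as reliability-weighted cue combination: the brain averages the visual and auditory location estimates, weighting each by how precise it is, so the sharper visual cue ‘captures’ the sound. A toy sketch with made-up numbers (this is a standard textbook model, not an analysis from the article):

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Minimum-variance combination of two noisy location estimates.
    The weight on each cue grows as its variance (uncertainty) shrinks."""
    w = var_b / (var_a + var_b)  # weight on cue A
    return w * mu_a + (1 - w) * mu_b

# Vision locates the moving mouth precisely (low variance, at the puppet, 0);
# hearing locates the voice loosely (high variance, at the performer, 2).
voice_location = fuse(mu_a=0.0, var_a=0.1,
                      mu_b=2.0, var_b=1.0)
print(round(voice_location, 2))  # 0.18: pulled almost entirely to the puppet
```

Because the visual cue is ten times more reliable here, the fused estimate lands near the puppet’s mouth – the audience genuinely hears the voice there.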
These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.
Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.
We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.
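The confidence-tracking measure described above can be sketched as a simple confidence–accuracy correlation. This is a deliberately simplified, hypothetical illustration (real metacognition research uses more refined measures, and all the data here are invented):

```python
from statistics import mean, pstdev

def metacognitive_sensitivity(confidence, correct):
    """Crude confidence-accuracy correlation (point-biserial).
    Higher values mean confidence tracks performance better."""
    mc, ma = mean(confidence), mean(correct)
    cov = mean((c - mc) * (a - ma) for c, a in zip(confidence, correct))
    return cov / (pstdev(confidence) * pstdev(correct))

# Two hypothetical observers with identical task accuracy (50% correct):
correct = [1, 1, 1, 0, 0, 0]

insightful = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # confident when right, doubtful when wrong
oblivious  = [0.7, 0.2, 0.6, 0.7, 0.2, 0.6]  # confidence unrelated to accuracy

print(round(metacognitive_sensitivity(insightful, correct), 2))  # 0.96: near 1
print(round(metacognitive_sensitivity(oblivious, correct), 2))   # 0.0: no insight
```

The point is that two people can perform identically on the task itself yet differ sharply in how well their confidence tracks that performance – which is exactly the kind of individual difference the research finds.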
This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.
This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.
Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it raises the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?
Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.
A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.
There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.
Clear overlap between brain activations involved in metacognition and mindreading was observed
So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.
Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.
At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.
Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.
Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.
Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.
A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.
A simple route for improving self-awareness is to take a third-person perspective on ourselves
Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.
There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:
O wad some Power the giftie gie us To see oursels as ithers see us! It wad frae mony a blunder free us…
(Oh, would some Power give us the gift To see ourselves as others see us! It would from many a blunder free us… )
Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.
As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific.
IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.
A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states.
IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole.
Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)
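The real φ calculation searches over every possible partition of a system and asks how much the whole’s cause-and-effect structure exceeds that of its separated parts, which is what makes it so hard to compute for anything brain-sized. As a loose illustration of the underlying intuition only – this toy measure is the mutual information between two parts of a system, not IIT’s actual φ – a few lines of Python can show what “not reducible to its constituent components” means for a two-element system:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution {state: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def toy_integration(joint):
    """Crude irreducibility proxy: mutual information between two parts.

    `joint` maps (a, b) state pairs to probabilities. If the parts are
    statistically independent, the whole is fully reducible and the
    measure is 0; the more the parts constrain each other, the higher it is.
    """
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    # I(A;B) = H(A) + H(B) - H(A,B)
    return entropy(pa) + entropy(pb) - entropy(joint)

# Two independent fair coins: fully reducible to its parts.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# Two perfectly correlated coins: the whole carries structure the parts lack.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(toy_integration(independent))  # 0.0
print(toy_integration(correlated))   # 1.0
```

A partitioned description loses nothing when the parts are independent (0 bits), but discards a full bit when the parts are perfectly correlated – that is the sense, in miniature, in which a system can be more than the sum of its pieces.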
Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems.
The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres.
Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious.
Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning to its hive in the sun, laden with pollen. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of interacting proteins, that cell may feel a teeny-tiny bit like something.
Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?
Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).
In healthy, conscious individuals – or in people who have brain damage but are clearly conscious – the PCI is always above a particular threshold (0.31). Conversely, when healthy people are asleep, their PCI always falls below that threshold. So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is consistently measured below this threshold, we can say with confidence that this person is not covertly conscious.
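Under the hood, PCI is based on the Lempel-Ziv compressibility of the binarized spatiotemporal pattern of those echoes: stereotyped, repetitive responses compress well and score low, while differentiated, integrated responses need many distinct building blocks and score high. As a minimal sketch of that idea only – a toy parse of a short binary string, not the clinical pipeline, whose normalization is where the 0.31 threshold comes from – consider:

```python
def lz_complexity(bits: str) -> int:
    """Count the phrases in a simple Lempel-Ziv parse of a binary string.

    We grow a phrase one symbol at a time; whenever the phrase is one we
    have not seen before, we record it and start a new phrase. The more
    distinct phrases required, the less compressible the string.
    """
    phrases = set()
    phrase = ""
    count = 0
    for ch in bits:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    if phrase:  # trailing partial phrase counts as one more
        count += 1
    return count

# A flat, stereotyped "response" compresses into few phrases...
print(lz_complexity("0000000000"))  # 4
# ...while a more differentiated response needs more of them.
print(lz_complexity("0110100110"))  # 6
```

The real PCI normalizes such a complexity count against the response’s length and entropy, so the absolute numbers here mean nothing clinically; only the ordering carries the intuition that diverse-yet-integrated brain responses mark the presence of consciousness.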
This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice.
Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.
Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.
Negotiations, and the conflicts that arise from them, are part of life in society. Although the practice of negotiation is most strongly associated with the business world, it is also present in delicate geopolitical questions, in budget impasses at companies, and even in domestic matters and in family and corporate relationships.
For the psychologist Daniel Shapiro, founder of the International Negotiation Program at Harvard University in the United States, what these situations have in common is a high emotional charge, which triggers unconscious reactions that make progress impossible.
His experience mediating conflicts (Shapiro has worked on negotiations between mainland China and Taiwan, in different parts of Africa, on the Israeli-Palestinian conflict, and in Central Europe during the transition from communism to capitalism) led him to refine the methodology he has named “relational identity theory.”
Shapiro says he identified the five main temptations (vertigo, repetition compulsion, taboos, assault on the sacred, and identity politics) by observing an exercise he has repeated in many settings and with different audiences.
In the simulation, the group is prompted to organize into tribes with shared values; an identity is forged among them. Once the groups are formed, an alien invades the room and makes a single demand: “choose a single leader to represent you, or I will destroy the world.” Invariably, Shapiro says, the world explodes.
The first of these experiments took place in Macedonia, then under tension and on the brink of ethnic conflict, back in the 1990s. In perhaps the most curious instance, “the world exploded in Davos”: 45 world leaders, politicians and top executives among them, were invited to take part in the exercise.
Rationality, or assuming that the sides are being rational in the dialogue, is not enough, he says. “We suppose that each side really has a rational concern and uses rational processes to satisfy those concerns. When it comes to these emotionally charged conflicts, that is not sufficient.”
According to Shapiro, identity is the invisible element that, in emotionally charged conflicts, ends up activating what he calls the tribes effect, a mental state of alertness and polarization.
With the pandemic, the psychologist says, we are all under intense stress, which makes the chances of negotiations ending in agreement even smaller. “It is like bringing groups with distinct identities to negotiate in a dark, cramped room.”
Below are the main excerpts from the interview, given to Folha by videoconference.
New book The work I do is in negotiation and conflict resolution. I have been in this field for thirty-odd years, and my baldness is proof of it. I have observed how people negotiate, what works and what does not in the most significant conflicts of our lives, whether deeply political or emotionally charged everyday challenges.
What I discovered is that people tend to negotiate at an irrational level. We suppose that each side really has a rational concern and uses rational processes to satisfy those concerns. When it comes to these emotionally charged conflicts, rationality is not enough.
How can we understand and deal with the deeper dimensions of these conflicts? So “Negotiating the Nonnegotiable” is a way of exploring the dimensions of conflict that lead to polarization, and of understanding these conflicts more deeply, dimensions that I place in the category of identity.
There are so many good books out there offering quick fixes, easy answers to hard problems. But to really get to the root of the problems we face in our societies and in our lives, we need to understand these dimensions deeply: the role of emotion, the role of identity, how they affect us, and how we can shape them for constructive change.
The five temptations Why are societies and families polarized? Even when it comes at an enormous emotional cost to parents, children, and grandparents, why do we do it? That is the question I have been thinking about for many years.
In “Negotiating the Nonnegotiable,” I discuss the five main instigators of the tribal mindset, which I call temptations. They are the instigators of conflict. The moment our identity feels threatened, these temptations begin to pull us toward the tribal mind.
Developing the method Watching that tribes exercise, and the world exploding again and again, I ask myself what is going on. How can these people, the most rational leaders in the world, caring and loving people, keep blowing up the world? Why? Five temptations, five lures.
Vertigo First, we quickly enter what I call vertigo. The idea here is that if I get into a conflict with my spouse, we can very quickly be consumed by it, and a two-minute argument about who should have washed the dishes turns into two hours of horrible fighting. We are in vertigo.
The polarization in the United States is arguably in a place of vertigo: consumed by the conflict, thinking narrowly about it, we cannot see the bigger picture; we are stuck in our own little corner. We are trapped in vertigo.
The word means dizziness, and that is the experience here. Suddenly I am consumed by vertigo, I lose my sense of time and space, and my understanding of everything vanishes. To get out of it, one path is to ask, “What is my purpose in this conflict?”
Repetition compulsion As human beings, we tend to repeat the same dysfunctional patterns of behavior over and over again. In the family system, we can all predict the conflicts we will have on Sunday night with a spouse or children. We know what each person will say, we know what they will do, and how we will feel afterward. It is a repetition compulsion.
We are also very good at predicting, through intuition and hunch, at the national or local level: “Uh-oh, here we go again.” This is one of those moments when one group will say this, the other will say that, they will start pointing fingers at each other, and it will lead to violence.
The problem with this repetition is that the pattern becomes part of our identity; it becomes a tattoo. It is very hard to get rid of, more than a simple habit, something much deeper. The way out of the repetition compulsion is to become aware of it and really look at the pattern I tend to fall into with that person: why, and how I can free myself from it.
Taboos What, in Brazilian society, is taboo to talk about? If you speak about a certain subject, you will have a lot of problems, you will be punished, socially rejected, and you run the risk of being physically attacked.
One thing that is taboo in the United States is a Trump supporter and a Biden supporter sitting down side by side for a meal. As if one side would contaminate the other. The fear is not unfounded.
I think the moment someone on one side sees a member of their tribe associating with someone from the opposing tribe, well, a taboo of association has just been broken. “Are you betraying our tribe? Are you betraying our people? Don’t talk to them!” But how can you resolve a conflict, how do you reduce polarization, if the sides will not talk?
Assault on the sacred Every political tribe, every family has beliefs and values it considers sacred. If you feel that I have offended or threatened what holds great importance for you, religious or secular, boom, we are back in the tribal mindset.
The image in my head is of a snake that strikes at you when you offend something deeply sacred. I also think that in the modern context, people can politically turn certain subjects sacred, and these are what build the internal tribes and make reconciliation with other groups harder.
Identity politics For me, identity politics is the use, or misuse, of identity to achieve certain political goals. When a leader says, “we have to unite to improve our country,” we have to unite, but who is this “we” he refers to? Most of the time it does not apply to everyone, only to certain groups, and it often explicitly creates an us-versus-them dynamic. We have to unite to fight them. Identity politics can be used to divide.
My advice: in a functioning democracy, use identity politics to unite. Focus on a broader “we.” All of us together. Yes, we have smaller tribes with political interests. Great. And we are all part of a larger project. You can do what Mandela did: you can unite, you can use identity politics to bring people together.
The tribes effect The pandemic has put enormous stress on everyone. Anxiety, the pain of losing a relative, of losing someone close – and my heart goes out to Brazil, I know it has suffered greatly. I think this is a burden, an emotional weight resting on everyone’s shoulders.
Meanwhile, we are expected to do everything we did before, now over Zoom. All these emotional burdens make us, in part, seek some kind of emotional security, and it can come in the form of drawing closer to a group I feel I belong to. To a tribe.
The problem is that tensions also begin to surface between the different groups, which are much more compressed now, squeezed tighter, because of the pandemic.
Under pressure I think the pandemic is having a major impact on the search for security. The tribe is a form of security, but it can easily turn into a tribal us-versus-them mindset.
The pandemic has compressed all of us. It is much harder to get through what is unpleasant. Being emotionally uncomfortable is like standing in an open field with your archenemies. We are in a compressed environment, and all these tribal instincts are triggered far more easily.
When there is hierarchy Power is very malleable. I think there are dozens of sources of power. Some people have more hierarchical power. If I have money, people may pick up the phone more often.
But in a negotiation, it is useful to try to figure out what my sources of power are. If we do not reach an agreement, what can I do? There is power, for example, in understanding the other side’s interests. If I am negotiating a new company policy with you, we need the president to say yes and the vice president to agree. The more I understand the interests of others, the more powerful I am.
Mental health at work One of the most fundamental concepts, in my view, is the power of appreciation. Appreciation, as I use the word, does not just mean saying thank you; it is not gratitude. What I mean is a deep understanding of what the other person is living through.
As we go through the pandemic, I believe that, as human beings, we need appreciation more than ever. So, in the workplace, perhaps executive leadership can find ways to be a little kinder to the workforce.
It does not mean that expectations are lower, but that support is greater. It is saying, “we are here for you emotionally, we care about you. You are not just an object producing money for us; you are a human being we value.”
I see executives acting as if “I am in charge, I am going to talk and show how smart I am.” The pandemic demands much more listening.
Listen more, talk less We commonly equate negotiations with talking. We call them negotiation talks. A much better term would be “negotiation listens.”
Because you will lead the negotiation far more effectively if you listen. If you listen 80% of the time and talk 20%, rather than the other way around.
I think about this in my own life too. You know, find the time and space to take a walk, to sit in silence for ten minutes, take in what is happening, acknowledge your emotions.
And the reason is that I will be far more effective in my personal and professional life if I understand my grief, my feelings, my resentments, or my desire for more love and connection. The more aware I am of my own inner experience, the more I can truly engage with others. Being attentive and listening really does matter.
Negotiating the Nonnegotiable: How to Resolve Conflicts That Seem Impossible
Price From R$ 44.92 in print | R$ 39.90 as an e-book
Social and psychological forces are combining to make the sharing and believing of misinformation an endemic problem with no easy solution.
Published May 7, 2021; Updated May 13, 2021
There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.
All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed by someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.
We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.
“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.
It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”
Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.
As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.
This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.
Framing everything as a grand conflict against scheming enemies can feel enormously reassuring. And that’s why perhaps the greatest culprit of our era of misinformation may be, more than any one particular misinformer, the era-defining rise in social polarization.
“At the mass level, greater partisan divisions in social identity are generating intense hostility toward opposition partisans,” which has “seemingly increased the political system’s vulnerability to partisan misinformation,” Dr. Nyhan wrote in an earlier paper.
Growing hostility between the two halves of America feeds social distrust, which makes people more prone to rumor and falsehood. It also makes people cling much more tightly to their partisan identities. And once our brains switch into “identity-based conflict” mode, we become desperately hungry for information that will affirm that sense of us versus them, and much less concerned about things like truth or accuracy.
In an email, Dr. Nyhan said it could be methodologically difficult to nail down the precise relationship between overall polarization in society and overall misinformation, but there is abundant evidence that an individual with more polarized views becomes more prone to believing falsehoods.
The second driver of the misinformation era is the emergence of high-profile political figures who encourage their followers to indulge their desire for identity-affirming misinformation. After all, an atmosphere of all-out political conflict often benefits those leaders, at least in the short term, by rallying people behind them.
Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.
“Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.
“When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.
“Depending on the platform, especially, humans are very sensitive to social reward,” he said. Research demonstrates that people who get positive feedback for posting inflammatory or false statements become much more likely to do so again in the future. “You are affected by that.”
In 2016, the media scholars Jieun Shin and Kjerstin Thorson analyzed a data set of 300 million tweets from the 2012 election. Twitter users, they found, “selectively share fact-checking messages that cheerlead their own candidate and denigrate the opposing party’s candidate.” And when users encountered a fact-check that revealed their candidate had gotten something wrong, their response wasn’t to get mad at the politician for lying. It was to attack the fact checkers.
“We have found that Twitter users tend to retweet to show approval, argue, gain attention and entertain,” researcher Jon-Patrick Allem wrote last year, summarizing a study he had co-authored. “Truthfulness of a post or accuracy of a claim was not an identified motivation for retweeting.”
In another study, published last month in Nature, a team of psychologists tracked thousands of users interacting with false information. Republican test subjects who were shown a false headline about migrants trying to enter the United States (“Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”) mostly identified it as false; only 16 percent called it accurate. But if the experimenters instead asked the subjects to decide whether to share the headline, 51 percent said they would.
“Most people do not want to spread misinformation,” the study’s authors wrote. “But the social media context focuses their attention on factors other than truth and accuracy.”
In a highly polarized society like today’s United States — or, for that matter, India or parts of Europe — those incentives pull heavily toward ingroup solidarity and outgroup derogation. They do not much favor consensus reality or abstract ideals of accuracy.
As people become more prone to misinformation, opportunists and charlatans are also getting better at exploiting this. That can mean tear-it-all-down populists who rise on promises to smash the establishment and control minorities. It can also mean government agencies or freelance hacker groups stirring up social divisions abroad for their benefit. But the roots of the crisis go deeper.
“The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”
In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”
Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.
It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.
As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”
The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.
In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.
Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.
Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.
Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”
In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.
He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.
I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?
But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”
In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.
Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.
His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.
Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
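The yoga-leggings example above can be sketched in a few lines. This is a deliberately minimal illustration of the idea, not Facebook's actual system, and the data is hypothetical: the "model" here is just per-group click rates learned from a log of past impressions, which then drive the decision of who sees the ad.

```python
# Minimal sketch of the idea in the text: a model "trains" on past
# ad-click records, learns a correlation between a user trait and
# click probability, and uses it to decide who sees the ad.
# All data and thresholds here are hypothetical.
from collections import defaultdict

def train(click_log):
    """Learn per-group click rates from (group, clicked) records."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for group, did_click in click_log:
        shown[group] += 1
        clicked[group] += did_click
    return {g: clicked[g] / shown[g] for g in shown}

def should_serve(model, group, threshold=0.5):
    """Serve the ad to groups whose learned click rate beats a threshold."""
    return model.get(group, 0.0) >= threshold

# Hypothetical training data: women clicked 3 of 4 leggings ads, men 1 of 4.
log = [("women", 1), ("women", 1), ("women", 1), ("women", 0),
       ("men", 1), ("men", 0), ("men", 0), ("men", 0)]
model = train(log)
# The learned rates (0.75 vs. 0.25) mean the ad is now served mostly to women.
```

In a real system the model would be a trained classifier over thousands of features rather than a lookup table, but the logic is the same: past behavior in, future targeting decisions out.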
Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.
Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.
News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.
They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)
“That’s how you know what’s on his mind. I was always, for a couple of years, a few steps from Mark’s desk.”
Joaquin Quiñonero Candela
In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.
Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
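A metric like L6/7 is simple to compute from login records. The sketch below is an assumption about the metric's shape based only on the definition in the text (the fraction of people who logged in on at least six of the previous seven days); the data is invented.

```python
# Sketch of an L6/7-style metric: the fraction of users who logged in
# on at least six of the previous seven days. Data is hypothetical.
def l6_of_7(login_days_by_user):
    """login_days_by_user: {user: set of day indices (0..6) with a login}."""
    qualifying = sum(1 for days in login_days_by_user.values() if len(days) >= 6)
    return qualifying / len(login_days_by_user)

users = {
    "a": {0, 1, 2, 3, 4, 5},     # 6 of 7 days -> counts
    "b": {0, 1, 2, 3, 4, 5, 6},  # 7 of 7 days -> counts
    "c": {0, 2, 4},              # 3 of 7 days -> does not
    "d": {1},                    # 1 of 7 days -> does not
}
result = l6_of_7(users)  # 2 of 4 users qualify -> 0.5
```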
Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”
With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.
If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
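The gate Gade describes amounts to comparing a candidate model's engagement numbers on the test subset against the baseline and rejecting it if they fall too far. The sketch below is a hypothetical rendering of that decision rule; the metric names, numbers, and the 1% tolerance are all assumptions for illustration.

```python
# Sketch of the deployment gate described above: a candidate model is
# tested on a small user subset and kept only if engagement does not
# drop too far versus the baseline. The 1% tolerance is hypothetical.
def engagement(metrics):
    """Collapse likes, comments, and shares into one engagement score."""
    return metrics["likes"] + metrics["comments"] + metrics["shares"]

def should_deploy(baseline, candidate, max_drop=0.01):
    """Reject the candidate if engagement falls by more than max_drop."""
    base, cand = engagement(baseline), engagement(candidate)
    return (base - cand) / base <= max_drop

baseline   = {"likes": 1000, "comments": 300, "shares": 200}  # total 1500
small_dip  = {"likes": 995,  "comments": 298, "shares": 199}  # ~0.5% drop
large_dip  = {"likes": 950,  "comments": 280, "shares": 190}  # ~5.3% drop

should_deploy(baseline, small_dip)  # passes the gate
should_deploy(baseline, large_dip)  # discarded
```

Note what this gate never asks: whether the content driving the engagement is true or harmful. That omission is the crux of the paragraphs that follow.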
But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”
While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.
“The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?”
A former AI researcher who joined in 2018
In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.
Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)
But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.
One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”
Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.
That former employee, meanwhile, no longer lets his daughter use Facebook.
Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.
It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.
At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.
Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.
Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.
“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”
At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.
The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.
Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.
On August 29, 2018, that suddenly changed. In the run-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.
For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.
Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.
On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.
It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.
This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.
(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)
But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.
Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.
But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.
The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.
Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.
Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”
But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.
A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.
“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”
When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.
“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”
Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.
Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
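Two of the definitions the text names can be made concrete with a small sketch. This is a hypothetical illustration, not Fairness Flow's actual code or metric names; the accuracy numbers and tolerances are invented. The point it demonstrates is the one above: the definitions can disagree on the same model.

```python
# Two of the kinds of fairness definitions the text describes, sketched
# for a speech-recognition model's per-accent accuracy.
# All numbers, names, and tolerances are hypothetical.
def equal_accuracy(acc_by_group, tolerance=0.02):
    """Fair if all groups' accuracies fall within a narrow band of each other."""
    return max(acc_by_group.values()) - min(acc_by_group.values()) <= tolerance

def minimum_threshold(acc_by_group, floor=0.90):
    """Fair if every group clears a minimum accuracy floor."""
    return all(acc >= floor for acc in acc_by_group.values())

accuracies = {"accent_a": 0.97, "accent_b": 0.92, "accent_c": 0.91}

# minimum_threshold passes (every accent is at or above 0.90), but
# equal_accuracy fails (0.97 - 0.91 = 0.06 > 0.02): the same model is
# "fair" under one definition and "unfair" under the other.
```

Which definition an engineer picks is therefore a judgment call, which is exactly why the lack of enforcement described next matters.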
But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.
This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.
In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.
All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.
Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.
Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.
Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”
The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.
In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.
I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”
Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.
I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”
Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.
I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.
“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”
Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.
When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.
One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.
The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.
This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.
Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.
The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.
Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.
The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.
Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.
One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.
The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.
But time and again, the numbers tell a different story: Even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures—people interested in staying safe are interested in staying safe—and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation—and its implicit assumptions—continues to haunt our approach, in part because there hasn’t been a reckoning with the initial missteps.
Rules in Place of Mechanisms and Intuitions
Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.
In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.
All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.
Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.
Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.
After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.
Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.
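Overdispersion has a striking arithmetic consequence. In the standard epidemiological framing, secondary infections per case follow a negative binomial distribution with mean R0 and a dispersion parameter k; when k is small, the share of cases that infect nobody at all is large. A back-of-the-envelope sketch, using illustrative parameter values rather than fitted estimates:

```python
# Hedged back-of-the-envelope: if secondary infections follow a negative
# binomial with mean r0 and dispersion k, the probability that a given
# case infects NOBODY is (k / (k + r0)) ** k.
# r0=3.0 and k=0.1 are illustrative values, not fitted estimates.

def share_with_zero_transmission(r0: float, k: float) -> float:
    """P(X = 0) for a negative binomial with mean r0 and dispersion k."""
    return (k / (k + r0)) ** k

p0 = share_with_zero_transmission(r0=3.0, k=0.1)
# With these values, roughly 70% of cases transmit to no one at all,
# even though the average case produces three secondary infections --
# the epidemic is driven by the remaining minority.
```

That is the mechanism in miniature: an average of three secondary infections per case is compatible with most infected people transmitting to nobody, provided a few seed very large clusters.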
Outdoors? It’s the opposite.
There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: The open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading—the biggest driver of the pandemic—appears to be an exclusively indoor phenomenon. I’ve been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.
And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.
We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.
Scolding and Shaming
Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.
Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?
On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.
While visible but low-risk activities attract the scolds, other actual risks—in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave—are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it’s almost as if we’ve “designed a public-health response most suitable for higher-income” groups and the “Twitter generation”—stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online—rather than provide the support and conditions necessary for more people to keep themselves safe.
And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.
Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex with no protections.
As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, “When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want.”
Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor at the Boston University School of Medicine and Infectious Diseases, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.
Socializing is not a luxury—kids need to play with one another, and adults need to interact. “Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors” is not just a sensible message; it’s a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they’ll do it indoors, out of sight of the scolding.
And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.
The Balance Between Knowledge and Action
Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.
Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.
Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.
And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
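The null-hypothesis convention described above is easier to see with a worked example. The sketch below runs a two-proportion z-test on entirely made-up counts; the point is what the test does and does not say, not any real study's numbers.

```python
# Minimal illustration of null-hypothesis testing: a paper typically
# reports not "masks work" but "we reject, at the 5% level, the
# hypothesis that masks have no effect." Two-proportion z-test on
# hypothetical counts -- not data from any real study.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: both groups share the same underlying rate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical data: 30/1000 infections with masks vs. 60/1000 without.
z = two_proportion_z(30, 1000, 60, 1000)
# |z| > 1.96 rejects the null at the 5% level. Note what was shown:
# "no effect" is implausible -- not how large the effect is, which is
# exactly the translation gap between papers and headlines.
```

The convoluted double negative in the prose above is literal: the paper's claim is a rejection of "no relationship beyond chance," and translating that into "X works" or "X doesn't work" is where public communication repeatedly went wrong.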
At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.
Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.
Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”
Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.
As late as April 2020, Anthony Fauci was slammed as too optimistic for suggesting we might plausibly have vaccines in a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.
Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percent higher.
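The efficacy figures above follow a simple convention: efficacy is one minus the ratio of the infection risk in the vaccinated arm to the risk in the placebo arm. A quick sketch with illustrative round numbers (not the actual Pfizer or Moderna trial counts):

```python
# Vaccine efficacy as conventionally reported from a trial:
# 1 - (risk in vaccinated arm / risk in placebo arm).
# The counts below are illustrative round numbers, not actual trial data.

def efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Relative risk reduction for the vaccinated arm of a trial."""
    risk_ratio = (cases_vax / n_vax) / (cases_placebo / n_placebo)
    return 1 - risk_ratio

e = efficacy(cases_vax=8, n_vax=15_000, cases_placebo=160, n_placebo=15_000)
# 1 - 8/160 = 0.95, i.e. 95% efficacy against symptomatic disease.
```

With equal-sized arms the arithmetic collapses to the case counts alone, which is why a handful of cases among the vaccinated against scores among the placebo group translates into a headline figure like 95 percent.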
These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.
And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.
Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.
Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.
So why isn’t this story more widely appreciated?
Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.
Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”
So we didn’t get our initial vaccine jubilation.
But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.
People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than about providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.
Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.
As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.
What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.
A year into the pandemic, we’re still repeating the same mistakes.
The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only find an enthusiastic audience; portions of the public engage in these behaviors themselves. Maybe that's partly because proclaiming the importance of individual actions makes us feel as if we are in the driver's seat, despite all the uncertainty.
Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to us. This focus on individual control goes along with something called the “fundamental attribution error”—when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.
An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.
You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.
There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.
And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.
The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.
Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.
The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.
Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.
Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.
For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One study of California found that line cooks are the highest-risk occupation for dying of COVID-19. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can safely work, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they do pose a real danger. Bill Hanage, associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—which oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.
New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”
Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.
Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.
But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.
Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.
Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.
And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.
But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.
Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.
Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect easing in rules for public behavior as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why, and also when and how increased vaccination will change more than their individual risks and opportunities, and see us out of this pandemic.
We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.
Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”
Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.
Cambridge University team say their findings could be used to spot people at risk from radicalisation
Our brains hold clues to the ideologies we choose to live by, according to research suggesting that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.
Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpt ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.
The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.
The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participant’s perception and learning, and their ability to engage in complex and strategic mental processing.
A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.
“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.
She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”
Participants prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually had a problem with processing evidence even at a perceptual level, the authors found.
“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.
In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.
“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”
The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.
The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.
“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”
Hurts – your own, or those that others cause you – keep you stuck. Forgiveness therapy can help you change your perspective and move on with your life
Nathaniel Wade – 14 August 2020
When I was 26, my world fell apart. I had just started graduate school and was constantly traveling between Richmond, Virginia, and Washington, DC, because my wife was finishing her graduate degree in a different city from where I studied. On one of those trips, I was doing laundry and found a crumpled note at the bottom of the dryer. It was addressed to my wife by one of her classmates: "We should leave at different times. I'll meet you at my place later."
My wife was having an affair, though it wasn't confirmed until months later. For me, it was a blow of monumental proportions. I felt betrayed, deceived, even ridiculed. Anger exploded in me, and over days and weeks that anger turned into a seething muddle of bitterness, confusion, and disbelief. We separated without a clear plan for the future.
Although that pain stabbed me with an intensity I had never felt before, I wasn't the only one going through such a thing. Many people experience similar hurts, and far worse, in their lives. Being in a relationship often means being mistreated, hurt, or betrayed. As people, we frequently suffer injustices and relationship difficulties. One of the ways humans have developed to deal with this pain is forgiveness. But what is forgiveness, and how does it work?
Those were the questions I was working on at the same time I was going through my separation. I was in graduate school at Virginia Commonwealth University, and the psychologist Everett Worthington was my adviser. Ev is one of the two pioneers in the psychology of forgiveness, and from day one he had me explore forgiveness from an academic perspective (I left his office after our first meeting with a half-meter-tall stack of scientific articles to review). I have since become a psychologist and professor of counseling psychology at Iowa State University, specializing in forgiveness as part of the psychotherapy process.
The early work produced by Worthington and me, and by other researchers, identified what forgiveness was not. Robert Enright of the University of Wisconsin-Madison, another pioneer in the psychology of forgiveness, was instrumental in this work. For example, he and his colleagues distinguished between forgiving and condoning, excusing, or ignoring an offense. For true forgiveness to occur, they argued, there must be a real offense or hurt, with real consequences. A good illustration might be the clients whom Enright and one of his students, Suzanne Freedman (now a professor at the University of Northern Iowa), described in an article: women survivors of childhood incest. For true forgiveness to occur in that context, they argued, the women first needed to acknowledge that a real harm had been inflicted on them as children. Denying their own pain or ignoring the atrocity would not be forgiveness. And forgiveness, if it came, would come only after working through the difficult reality of what had happened. Over many months and through challenging personal work, the women in the study resolved much of the fear, bitterness, anger, confusion, and hurt, and reached a remarkable level of peace and resolution regarding the earlier abuse.
Another major question that quickly became apparent in the research was whether reconciliation needed to be part of forgiveness. For academics and therapists like me, interested in helping people achieve forgiveness for often serious offenses, such as marital infidelity or past violence, forgiveness is restricted to an internal process. Forgiveness thus does not necessarily include reconciliation; it is the internal process by which someone resolves the bitterness and hurt and moves toward something more positive regarding the person who offended them, such as empathy or love. Reconciliation, by contrast, is a process by which people re-establish a trusting relationship with someone who hurt them. This distinction became fundamental to my own healing.
Although this distinction is important, it does not mean that reconciliation is not a valuable option for those of us who see forgiveness this way. Rather, reconciliation becomes a separate process, independent of forgiveness but important and valuable in its own right. That was a considerable balm for me in the months after my separation. Despite the pain, anger, and confusion I still felt months later, I knew I would want to pursue forgiveness at some point in the future. I didn't want my bitterness over the past to contaminate my future happiness in loving relationships. I didn't want to carry that burden for the rest of my life. Instead, I imagined a time when I would want to set it aside and move on. My real fear, though, was that by forgiving I would necessarily have to reconcile with my wife or, alternatively, that if I didn't want to reconcile, I would never be rid of the anger. By seeing forgiveness as a process separate from reconciliation, new options opened up. I understood then that I could forgive or not, and I could reconcile or not.
A similar process has played out for many clients I've worked with. For example, I remember the palpable relief I sensed in a group I was treating when I brought up the difference between forgiveness and reconciliation. The members of that group were struggling with a range of harms, from being financially robbed by an ex to betrayals and other painful experiences. When I presented the possible distinction between forgiveness and reconciliation and we discussed how it might apply to their own experiences, I sensed a collective sigh. A weight was lifted from the participants' shoulders simply by the understanding that forgiving does not necessarily mean reconciling. The group members felt freer, and it helped their forgiveness processes in new and rich ways.
For example, Jo (not her real name) was suffering over a fiancé who had stolen ten thousand dollars from her and disappeared. Obviously, there was no way for Jo to work on reconciliation even if she wanted to, and yet, with this distinction, she could see how she could still move forward with forgiveness.
On the other hand, Maria, who was working to forgive her adult daughter for the things that had hurt her, wanted to keep the relationship; she was very interested in reconciliation. Understanding the difference helped her see that she could work on both forgiveness and reconciliation in different ways to help heal her relationship with her daughter.
In short, a proper understanding seems to help people embrace forgiveness, and it opens new possibilities for healing and growth. But how does forgiveness work, and how can people use it for their own benefit?
I have spent most of my academic career trying to answer that question. Specifically, I have studied ways to help people forgive others when they find it difficult to do so. The science here is still very young, but there appears to be a common core of interventions that help people move toward resolving their hurts.
The first is a strategy tried and tested in nearly every form of psychotherapy: sharing one's personal story in a safe, nonjudgmental environment. Almost all established forgiveness interventions prescribe a time for sharing the hurt or offense. This is particularly powerful in a group setting, in which participants share their different experiences with one another, witness one another's pain, and support one another. Telling one's story individually is also effective, however, in a context where no one tries to give advice, minimize negative feelings, or stoke anger (avoiding reactions like "yes, he is the worst person in the world!"). Often, in our forgiveness programs, participants tell us that one of the most important and effective parts is the opportunity to share with others what happened to them. They say the most helpful part is often "knowing that others have had similar difficulties," "being able to vent, saying things there that couldn't be said anywhere else," and "feeling heard, really understood, and able to get it off my chest."
This reaction is understandable, given how difficult it can be to talk about times when we were hurt or attacked. For some, sharing is hard because victims of harm often feel shame and humiliation about their situation. Few people want to openly share the moments when they were weak or mistreated, betrayed or rejected. These are stories of vulnerability. Beyond the shame people feel, there is often a desire to avoid the pain attached to the hurt: if I share, I will have to relive the pain, and maybe I won't be able to handle it. Interventions that help people overcome these obstacles, share their pain, and receive support can go a long way toward helping them heal.
After a full retelling of the story, most interventions offer time for people to consider the offender's point of view. The goal is usually to help people develop understanding, or even empathy, for the person who hurt them. There is great power in empathy, though there are dangers involved as well.
Three years after finding that crumpled note, I filed for divorce and moved forward with a new spirit of forgiveness
Done well, this part of the intervention helps people broaden their perspectives and gain new awareness of the complexities of the events surrounding their hurts. It can lead them to a wider view of events, making the offense look less like malice or sadism and more like a complex situation in which someone made harmful or bad decisions. That shift in perspective and understanding can open the door to forgiveness. An excellent example is the work of Frederic Luskin, director of the Stanford Forgiveness Project, and the Reverend Byron Bland, a chaplain at Palo Alto University. In 2000, they brought together Protestants and Catholics from Northern Ireland who had lost relatives to the sectarian violence there, and created a week-long forgiveness workshop at Stanford University in California. Much of that experience involved helping each group see the other in a more human light, let go of the bitterness attached to the other group, and harness empathy to move toward forgiveness. As one participant who had lost his father put it: "For years I resented Catholics, until I came to Stanford."
Of course, if done improperly or without precautions, trying to develop empathy can devolve into blaming the victim and can encourage those who were hurt to question or minimize their feelings, allowing others to hurt them again in the future. The important and difficult part of this process is helping people hold on to the legitimacy of their pain while exploring other points of view. The goal is to help people accept their feelings as understandable and their reactions as justified, even as they develop a more nuanced appreciation of the offender's perspective. That takes time, and often it should not be attempted until a considerable period has passed since the offense. How much time depends on many factors, such as the severity of the hurt and one's relationship with the person who caused it.
In my own forgiveness journey, sharing the experience and developing empathy were of great value. I received considerable help from several relatives and friends, and from a caring therapist who listened to my story without judging what I should or shouldn't do. Instead, they all listened to me, supported me in my pain, and let me express myself freely. My best friend bore the brunt of it. We had planned a beach trip for the same summer I found that note to my wife. I confronted her shortly before the trip, and she admitted the affair for the first time just before my friend and I set off. I spent two days at the beach in North Carolina spewing out my anger and confusion, sharing story after story of all the little deceptions and misunderstandings I was only now piecing together. How he put up with it all, I don't know. But for me it was an initial unburdening that helped me move toward eventual forgiveness.
The next important part of my forgiveness journey was building empathy for my ex-wife. That didn't happen right away. In fact, it took many years before I was able to develop a new perspective on the matter. It took that kind of distance before I became humble enough to see how I myself had contributed to the end of the relationship. I saw my part. I saw how she may have felt pressured, by me, by family, and by friends, into a marriage that looked enviable to outsiders but very likely was never fully comfortable for her. I began to see how those forces may have influenced the choices she made. Now I can feel for her, and for how difficult and confusing all of it must have been, and I can see that she probably had no intention or desire to hurt me. She felt trapped and reacted to that experience. Removed from it all, and distant from the pain I felt then, I can say that I genuinely wanted what was best for her. I hoped she would have a full life. In the end, I chose to forgive my wife, and I chose not to reconcile. Three years after finding that crumpled note in the dryer, I decided to file for divorce, and I moved forward with a new spirit of forgiveness and peace.
Beyond helping people forgive others, researchers have also begun exploring ways to help people forgive themselves. Marilyn Cornish, a counseling psychologist at Auburn University in Alabama, and I developed one such intervention, based on a broad four-step model. The steps are responsibility, remorse, restoration, and renewal. We focused the intervention on helping people who carried heavy guilt for having hurt others.
The general approach of our intervention is to help people take appropriate responsibility for the offense or hurt they caused, identifying the ways in which they are to blame for the other person's pain. Out of that responsibility, they are encouraged to identify and express the remorse they feel. We believe it is healthy to embrace our guilt and to place that feeling in a realistic context. From this point, it becomes possible to move toward restoration. In this step, the person is encouraged to make amends, to repair the damage done to others and to their relationships, and to recommit to the values or standards they may have violated in hurting others. Finally, the person is able to move on to renewal, which we understand as replacing guilt and self-condemnation with renewed self-respect and self-compassion. This renewal is appropriate only after a genuine accounting of the offense. Once that has been done, it is beneficial for the person to move toward a renewed sense of self-acceptance and forgiveness.
Self-forgiveness helped her face her children more honestly and restore her relationship with them.
We tested this intervention in a clinical study. We invited people who had hurt others and wanted to forgive themselves to take part in an eight-week individual counseling program. Of the 21 people who completed the study, 12 received the treatment immediately and nine received it after a period on a waitlist. Those who received the treatment immediately reported significantly greater self-forgiveness and significantly less self-condemnation and psychological distress than those on the waitlist. In fact, after controlling for baseline self-condemnation and self-forgiveness, the average person who received the treatment was more self-forgiving than roughly 90 percent of those on the waitlist. Moreover, once those on the waitlist received the treatment, their changes in self-condemnation, self-forgiveness, and psychological distress matched those of the treatment group.
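The "better than roughly 90 percent" comparison is a common-language way of expressing a standardized effect size. As a rough sketch (the study's exact effect size is not quoted here; d ≈ 1.28 is simply the value that reproduces that percentile under the usual assumption of normal distributions with equal spread):

```python
from statistics import NormalDist

def percentile_of_average_treated(d: float) -> float:
    """Fraction of a control group that the average treated
    person exceeds, given a standardized mean difference d
    (Cohen's d), assuming both groups are normally distributed
    with equal variance."""
    return NormalDist().cdf(d)

# A hypothetical effect size of about d = 1.28 corresponds to
# the "more self-forgiving than ~90%" framing in the text.
print(round(percentile_of_average_treated(1.28), 2))  # 0.9
```

Reading effect sizes this way (what fraction of the comparison group the average treated person surpasses) is often easier to interpret than the raw d value itself.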
Several months after the study ended, I received an email from one of the clients. I will call her Izzie. She wrote to thank us for the counseling; she said it had changed her life. Izzie entered the study because she was struggling with the implications of a past extramarital affair. Besides feeling alone and disconnected from her family as a result of the divorce that followed, Izzie still wrestled with the shame and guilt of her actions. That shame led her to withdraw from her children, and then to feel even more guilt and shame over her inability to care for them and be the mother she wanted to be. In her email, she described how the self-forgiveness process helped her take appropriate responsibility for what had happened and work through her remorse to renew her relationships. She told us how she was able to face her children more honestly and have a restored relationship with them. Having invested so much time in her own self-condemnation, she was now free to relate to them in a new way and to be more of the mother she wanted to be, and that they needed her to be.
Forgiveness, of others and of ourselves, can be a powerful, life-changing process. It can alter the trajectory of a relationship, or even of a life. It is not the only response a person can make to being hurt or to hurting others, but it is an effective way of managing the inevitable moments of conflict, disappointment, and pain in our lives. Forgiveness encompasses both the reality of the offense and the empathy and compassion needed to move forward. True forgiveness does not run from responsibility, restitution, or justice. By definition, it acknowledges that something painful, even wrong, was done. At the same time, forgiveness helps us embrace something beyond the immediate reaction of anger and hurt and the lingering bitterness that can follow. Forgiveness encourages a deeper, more compassionate understanding that we are all flawed in our different ways, and that all of us need to be forgiven sometimes.
Children who experienced compassionate parenting were more generous than peers
Date: December 1, 2020
Source: University of California – Davis
Summary: Young children who have experienced compassionate love and empathy from their mothers may be more willing to turn thoughts into action by being generous to others, a University of California, Davis, study suggests. The children were tested in lab studies at ages 4 and 6.
Young children who have experienced compassionate love and empathy from their mothers may be more willing to turn thoughts into action by being generous to others, a University of California, Davis, study suggests.
In lab studies, children tested at ages 4 and 6 showed more willingness to give up the tokens they had earned to fictional children in need when two conditions were present — if they showed bodily changes when given the opportunity to share and had experienced positive parenting that modeled such kindness. The study initially included 74 preschool-age children and their mothers. They were invited back two years later, resulting in 54 mother-child pairs whose behaviors and reactions were analyzed when the children were 6.
“At both ages, children with better physiological regulation and with mothers who expressed stronger compassionate love were likely to donate more of their earnings,” said Paul Hastings, UC Davis professor of psychology and the mentor of the doctoral student who led the study. “Compassionate mothers likely develop emotionally close relationships with their children while also providing an early example of prosocial orientation toward the needs of others,” researchers said in the study.
The study was published in November in Frontiers in Psychology: Emotion Science. Co-authors were Jonas G. Miller, Department of Psychiatry and Behavioral Sciences, Stanford University (who was a UC Davis doctoral student when the study was written); Sarah Kahle of the Department of Psychiatry and Behavioral Sciences, UC Davis; and Natalie R. Troxel, now at Facebook.
In each lab exercise, after attaching a monitor to record children’s heart-rate activity, the examiner told the children they would be earning tokens for a variety of activities, and that the tokens could be turned in for a prize. The tokens were put into a box, and each child eventually earned 20 prize tokens. Then, before the session ended, children were told they could donate all or part of their tokens to other children (in the first instance, they were told these were for sick children who couldn’t come and play the game, and in the second instance, they were told the children were experiencing a hardship).
At the same time, mothers answered questions about their compassionate love for their children and for others in general. The mothers selected phrases in a survey such as:
“I would rather engage in actions that help my child than engage in actions that would help me.”
“Those whom I encounter through my work and public life can assume that I will be there if they need me.”
“I would rather suffer myself than see someone else (a stranger) suffer.”
Taken together, the findings showed that children’s generosity is supported by the combination of their socialization experiences — their mothers’ compassionate love — and their physiological regulation, and that these work like “internal and external supports for the capacity to act prosocially that build on each other.”
The results were similar at ages 4 and 6.
In addition to observing the children’s propensity to donate their game earnings, the researchers observed that being more generous also seemed to benefit the children. At both ages 4 and 6, the physiological recording showed that children who donated more tokens were calmer after the activity, compared to the children who donated no or few tokens. They wrote that “prosocial behaviors may be intrinsically effective for soothing one’s own arousal.” Hastings suggested that “being in a calmer state after sharing could reinforce the generous behavior that produced that good feeling.”
This work was supported by the Fetzer Institute, Mindfulness Connections, and the National Institute of Mental Health.
Jonas G. Miller, Sarah Kahle, Natalie R. Troxel, Paul D. Hastings. The Development of Generosity From 4 to 6 Years: Examining Stability and the Biopsychosocial Contributions of Children’s Vagal Flexibility and Mothers’ Compassion. Frontiers in Psychology, 2020; 11 DOI: 10.3389/fpsyg.2020.590384
The coronavirus pandemic has triggered some interesting and unusual changes in our buying behavior
Date: September 10, 2020
Source: University of Technology Sydney
Summary: Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper.
Rushing to stock up on toilet paper before it vanished from the supermarket aisle, stashing cash under the mattress, purchasing a puppy or perhaps planting a vegetable patch — the COVID-19 pandemic has triggered some interesting and unusual changes in our behavior.
Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper published in the Journal of Behavioral Economics for Policy.
‘Hoarding in the age of COVID-19’ by behavioral economist Professor Michelle Baddeley, Deputy Dean of Research at the University of Technology Sydney (UTS) Business School, examines a range of cross-disciplinary explanations for hoarding and other behavior changes observed during the pandemic.
“Understanding these economic, social and psychological responses to COVID-19 can help governments and policymakers adapt their policies to limit negative impacts, and nudge us towards better health and economic outcomes,” says Professor Baddeley.
Governments around the world have implemented behavioral insights units to help guide public policy, and influence public decision-making and compliance.
Hoarding behavior, where people collect or accumulate things such as money or food in excess of their immediate needs, can lead to shortages, or in the case of hoarding cash, have negative impacts on the economy.
“In economics, hoarding is often explored in the context of savings. When consumer confidence is down, spending drops and households increase their savings if they can, because they expect bad times ahead,” explains Professor Baddeley.
“Fear and anxiety also have an impact on financial markets. The VIX ‘fear’ index of financial market volatility saw a dramatic 564% increase between November 2019 and March 2020, as investors rushed to move their money into ‘safe haven’ investments such as bonds.”
While shifts in savings and investments in the face of a pandemic might make economic sense, the hoarding of toilet paper, which also occurred across the globe, is more difficult to explain in traditional economic terms, says Professor Baddeley.
Behavioural economics reveals that our decisions are not always rational or in our long term interest, and can be influenced by a wide range of psychological factors and unconscious biases, particularly in times of uncertainty.
“Evolved instincts dominate in stressful situations, as a response to panic and anxiety. During times of stress and deprivation, not only people but also many animals show a propensity to hoard.”
Another instinct that can come to the fore, particularly in times of stress, is the desire to follow the herd, says Professor Baddeley, whose book ‘Copycats and Contrarians’ explores the concept of herding in greater detail.
“Our propensity to follow others is complex. Some of our reasons for herding are well-reasoned. Herding can be a type of heuristic: a decision-making short-cut that saves us time and cognitive effort,” she says.
“When other people’s choices might be a useful source of information, we use a herding heuristic and follow them because we believe they have good reasons for their actions. We might choose to eat at a busy restaurant because we assume the other diners know it is a good place to eat.
“However, numerous experiments from social psychology also show that we can be blindly susceptible to the influence of others. So when we see others rushing to the shops to buy toilet paper, we fear missing out and follow the herd. It then becomes a self-fulfilling prophecy.”
Behavioral economics also highlights the importance of social conventions and norms in our decision-making processes, and this is where rules can serve an important purpose, says Professor Baddeley.
“Most people are generally law abiding but they might not wear a mask if they think it makes them look like a bit of a nerd, or overanxious. If there is a rule saying you have to wear a mask, this gives people guidance and clarity, and it stops them worrying about what others think.
“So the normative power of rules is very important. Behavioral insights and nudges can then support these rules and policies, to help governments and business prepare for second waves, future pandemics or other global crises.”
When historian Frederick Jackson Turner presented his famous thesis on the US frontier in 1893, he described the “coarseness and strength combined with acuteness and acquisitiveness” it had forged in the American character.
Now, well into the 21st century, researchers led by the University of Cambridge have detected remnants of the pioneer personality in US populations of once inhospitable mountainous territory, particularly in the Midwest.
A team of scientists algorithmically investigated how landscape shapes psychology. They analyzed links between the anonymised results of an online personality test completed by over 3.3 million Americans, and the “topography” of 37,227 US postal—or ZIP—codes.
The researchers found that living at both a higher altitude and an elevation relative to the surrounding region—indicating “hilliness”—is associated with a distinct blend of personality traits that fits with “frontier settlement theory”.
“The harsh and remote environment of mountainous frontier regions historically attracted nonconformist settlers strongly motivated by a sense of freedom,” said researcher Friedrich Götz, from Cambridge’s Department of Psychology.
“Such rugged terrain likely favored those who closely guarded their resources and distrusted strangers, as well as those who engaged in risky explorations to secure food and territory.”
“These traits may have distilled over time into an individualism characterized by toughness and self-reliance that lies at the heart of the American frontier ethos,” said Götz, lead author of the study.
“When we look at personality across the whole United States, we find that mountainous residents are more likely to have psychological characteristics indicative of this frontier mentality.”
Götz worked with colleagues from the Karl Landsteiner University of Health Sciences, Austria, the University of Texas, US, the University of Melbourne in Australia, and his Cambridge supervisor Dr. Jason Rentfrow. The findings are published in the journal Nature Human Behaviour.
The research uses the “Big Five” personality model, standard in social psychology, with simple online tests providing high-to-low scores for five fundamental personality traits of millions of Americans.
The mix of characteristics uncovered by the study’s authors consists of low levels of “agreeableness”, suggesting mountainous residents are less trusting and forgiving—traits that benefit “territorial, self-focused survival strategies”.
Low levels of “extraversion” reflect the introverted self-reliance required to thrive in secluded areas, and a low level of “conscientiousness” lends itself to rebelliousness and indifference to rules, say researchers.
“Neuroticism” is also lower, suggesting an emotional stability and assertiveness suited to frontier living. However, “openness to experience” is much higher, and the most pronounced personality trait in mountain dwellers.
“Openness is a strong predictor of residential mobility,” said Götz. “A willingness to move your life in pursuit of goals such as economic affluence and personal freedom drove many original North American frontier settlers.”
“Taken together, this psychological fingerprint for mountainous areas may be an echo of the personality types that sought new lives in unknown territories.”
The researchers wanted to distinguish between the direct effects of physical environment and the “sociocultural influence” of growing up where frontier values and identities still hold sway.
To do this, they looked at whether mountainous personality patterns applied to people born and raised in these regions that had since moved away.
The findings suggest some “initial enculturation”, say researchers, as those who left their early mountain home were still consistently less agreeable, conscientious and extraverted, although no such effects were observed for neuroticism and openness.
The scientists also divided the country at the edge of St. Louis—the “gateway to the West”—to see if there is a personality difference between those in mountains that made up the historic frontier, such as the Rockies, and eastern ranges such as the Appalachians.
While mountains continue to be a “meaningful predictor” of personality type on both sides of this divide, key differences emerged. Those in the east are more agreeable and outgoing, while western ranges are a closer fit for frontier settlement theory.
In fact, the mountainous effect on high levels of “openness to experience” is ten times as strong in residents of the old western frontier as in those of the eastern ranges.
The findings suggest that, while ecological effects are important, it is the lingering sociocultural effects—the stories, attitudes and education—in the former “Wild West” that are most powerful in shaping mountainous personality, according to scientists.
They describe the effect of mountain areas on personality as “small but robust”, but argue that complex psychological phenomena are influenced by many hundreds of factors, so small effects are to be expected.
“Small effects can make a big difference at scale,” said Götz. “An increase of one standard deviation in mountainousness is associated with a change of around 1% in personality.”
“Over hundreds of thousands of people, such an increase would translate into highly consequential political, economic, social and health outcomes.”
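Götz’s point about small effects compounding at scale can be sketched numerically. The figures below (the size of the shift, the cutoff, and the population) are hypothetical illustrations, not values from the study: even a tiny upward shift in a trait’s distribution noticeably changes how many people in a large population sit above a fixed threshold.

```python
from statistics import NormalDist

# Hypothetical illustration (numbers are not from the study):
# shift a normally distributed trait by a small amount, d SDs,
# and count how many extra people in a large population now
# fall above a fixed "high trait" cutoff of +1 SD.
population = 500_000
threshold = 1.0   # z-score cutoff for "high" on the trait
d = 0.05          # small shift of the mean, in SD units

base_share = 1 - NormalDist().cdf(threshold)
shifted_share = 1 - NormalDist(mu=d).cdf(threshold)
extra_people = (shifted_share - base_share) * population
print(round(extra_people))  # several thousand more above the cutoff
```

The absolute shift is tiny for any one individual, but aggregated over hundreds of thousands of residents it moves thousands of people across the threshold — the sense in which “small effects can make a big difference at scale.”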
New research provides evidence that people from higher social classes are worse at understanding the minds of others compared to those from lower social classes. The study has been published in the Personality and Social Psychology Bulletin.
“My co-author and I set out to examine a question that we deemed important given the trend of rising economic inequality in American society today: How does access to resources (e.g., money, education) influence the way we process information about other human beings?” said study author Pia Dietze, a postdoctoral scholar at the University of California, Irvine.
“We tried to answer this question by examining two essential components within the human repertoire to understand each other’s minds: the way in which we read emotional states from other people’s faces and how inclined we are to take the visual perspective of another person.”
For their study, the researchers recruited 300 U.S. individuals from Amazon’s Mechanical Turk platform and another 452 U.S. individuals from the Prolific Academic platform. The participants completed a test of cognitive empathy called the Reading the Mind in the Eyes Test, which assesses the ability to recognize or infer someone else’s state of mind from looking only at their eyes and surrounding areas.
The researchers also had 138 undergraduates at New York University complete a test of visual perspective-taking known as the Director Task, in which they were required to move objects on a computer screen based on the perspective of a virtual avatar.
The researchers found that lower-class people tended to perform better on the Reading the Mind in the Eyes Test and Director Task than their higher-class counterparts.
“We find that individuals from lower social class backgrounds are better at identifying emotions from other people’s faces and are more likely to spontaneously take another person’s visual perspective. This is in line with a large body of work documenting a tendency for lower-class people to be more socially attuned to others. In addition, our research shows that this can happen at a very basic level; within seconds or milliseconds of encountering a new face or person,” Dietze told PsyPost.
But like all research, the new study includes some limitations.
“This research is based on correlational data. As such, we need to see this research as part of a larger body of work to answer the question of causality. However, the insights gained from our study allow us to speculate about how and why we think these tendencies develop,” Dietze explained.
“We theorize that social class can influence social information processing (i.e., the processing of information about other people) at such a basic level because social classes can be conceptualized as a form of culture. As such, social class cultures (like other forms of culture, for example, national cultures), have a pervasive psychological influence that impact many aspects of life, at times even at spontaneous levels.”
Is it safe to go to the grocery store? Can my kids have a play date? Will the other child wear a mask? Can I send them back to school? When my boss asks me to come back to the office, should I?
Shayla Bell lies awake at night racking her brain for answers and preparing for another day of unprecedented choices.
“There’s all these little, small decisions all the time,” said Bell, a suburban Chicago retail professional with two kids. “I find myself being my own devil’s advocate so often to try to reach the best conclusion. And I’m tired.”
Experts have a name for that exhaustion: decision fatigue. “It’s a state of low willpower that results from having invested effort into making choices,” said Roy Baumeister, a psychology professor at Florida State University who coined the term in 2010. “It leads to putting less effort into making further choices, so either choices are avoided or they are made in a very superficial way.”
Like a mental gas tank, the human brain has a limited capacity of energy, and as you make decisions throughout the day, you deplete that resource. As you become fatigued, you may be inclined to avoid additional decisions, stick to the status quo or base a decision on a single criterion, Baumeister said.
When we’re able to maintain daily routines, the brain can automate decisions and rely on heuristics – or mental shortcuts – to avoid fatigue. But the pandemic has disrupted many of our routines, forcing us to allocate more mental energy to decision-making.
The effects of decision fatigue have serious implications for people in positions of authority. Jonathan Levav, who studies behavioral decision theory at Stanford University, found that judges serving on parole boards in Israel were more likely to give favorable rulings at the very beginning of the workday or after a food break than later in a sequence of cases, after the judges had made more decisions.
“If you make a lot of decisions repeatedly, that has an effect on subsequent decisions,” Levav said. “As people make more decisions, they’re more likely to simplify whatever subsequent decisions they’re dealing with.”
We’re not just making a greater number of daily decisions. We’re also making high-stakes, moral decisions, said Elizabeth Yuko, a writer and staff member at the Fordham University Center for Ethics Education.
“It’s fatigue with making decisions that have consequences we’ve never had to deal with before,” Yuko said. “These things come with such a moral weight on them, it comes with even more stress.”
For parents and guardians, in particular, the stakes are high. Erin Scarpa, a mother of two who works at a bank in New Jersey, said she temporarily relocated her family to North Carolina specifically to avoid making decisions about socializing with neighbors. Scarpa said she’s particularly concerned about reports of patients suffering lasting damage from COVID-19.
“You’re talking about decisions that could limit your child’s life forever,” Scarpa said. “That’s a whole other concept.”
Sneha Dave, a recent college graduate living with an inflammatory bowel disease and unidentified respiratory condition, said she struggled with crippling decision fatigue at the beginning of the pandemic.
“There’s been so many times where I go to the grocery store where I turn around because there are too many cars there. I spend a lot of time deciding what the right time to go to the grocery store is or whether I should go in,” she said.
Dave said she’s still grappling with a big decision – whether or not to pursue a round of treatment for her bowel disease, which would severely weaken her immune system – but she’s slowly learned how to cope with her decision fatigue.
“The chronic illness community has been able to adapt significantly better and make these decisions a little easier because these are decisions we’ve made our whole lives,” Dave said.
How statewide COVID-19 policies affect decision fatigue
Streamlined state and nationwide policies on COVID-19 have the potential to alleviate decision fatigue, some researchers said, but the notion of greater regulation carries contentious political implications.
“The more that requirements are in place, such as mask mandates, the less it’s a personal choice about what to do. And it makes it easier to make other, related decisions,” said Kathleen Vohs, a professor at the University of Minnesota who studies self-control. “You don’t have to agonize about whether it’s safe to go to the grocery store when you know that others will have masks on.”
Mandates may also cause people to feel depleted if they find it difficult to comply with a policy, researchers said. Others may be making such specific, preferential decisions that statewide policies wouldn’t be enough to alleviate decision fatigue.
Sheena Iyengar, a Columbia Business School professor and author studying the psychology and economics of choice, is gathering data on how Americans feel about statewide COVID-19 policies.
Contrary to classical economic theory, Iyengar’s work has found that, in some contexts, people may prefer to have their choices limited or entirely removed. For example, people are more likely to purchase jams or chocolates – or to undertake optional class essay assignments – when offered a limited rather than extensive array of choices. Study participants reported greater satisfaction with their selections when their options had been limited.
A similar trend may be playing out when it comes to COVID-19 policies, Iyengar said. Her preliminary findings suggest that people living in states with face mask policies reported being “happier” than those in states without mask mandates. The findings may simply be driven by political preferences, Iyengar said.
“There’s a naturally occurring experiment, although that experiment falls along political lines,” she said.
Tips for avoiding decision fatigue
There are some simple strategies for avoiding decision fatigue, researchers said. Many center on general health and well-being, such as maintaining a nutritious diet, getting a full night’s sleep and exercising regularly. Others focus on timing your decisions and developing routines to cut out unnecessary choices.
“Willpower diminishes and decision fatigue increases over the course of the day, so if you have important decisions to make, make them in the morning after a full night’s sleep and a good breakfast,” Baumeister said. “Be aware this is affecting you.”
Plan out tomorrow’s schedule the day before, said Dovid Spinka, a staff clinician at the Center for Anxiety in New York City. Prep or plan your meals for the week. Lay out your clothes in the evening, or – like Steve Jobs – develop a uniform.
If you begin to fade during the day, take a short break, go for a walk or practice mindfulness or breathing exercises, Spinka said. Prioritize your decisions, and try to focus on one at a time. If you’re facing a big decision but feel drained, take a nap or grab a snack. Write down your initial thoughts, but don’t make the decision yet. Come back to it when you’re feeling refreshed, or proactively delay the decision to a set date.
Especially in highly emotional times, people who tend to suppress their emotions may be more prone to experience decision fatigue, said Grant Pignatiello, a researcher at Case Western Reserve University. It’s important to be aware of how you’re feeling and talk to others about it.
“We are all going through a collective trauma of this pandemic, so it’s important that we cut ourselves a little slack. If we need to take a nap at the end of the day, watch Netflix or go for a walk, it’s OK,” Pignatiello said.
For Bell, that means granting herself some grace.
“I feel like we’re all – even the coolest cucumbers – we’re all at a higher stress level now,” she said. “So try to have some grace for yourself and others, and understand that we’re all doing the best we think we can.”
Having strong, biased opinions may say more about your own individual way of behaving in group situations than it does about your level of identification with the values or ideals of any particular group, new research suggests.
This behavioural trait – which researchers call ‘groupiness’ – could mean that individuals will consistently demonstrate ‘groupy’ behaviour across different kinds of social situations, with their thoughts and actions influenced by simply being in a group setting, whereas ‘non-groupy’ people aren’t affected in the same way.
“It’s not the political group that matters, it’s whether an individual just generally seems to like being in a group,” says economist and lead researcher Rachel Kranton from Duke University.
“Some people are ‘groupy’ – they join a political party, for example. And if you put those people in any arbitrary setting, they’ll act in a more biased way than somebody who has the same political opinions, but doesn’t join a political party.”
In an experiment with 141 people, participants were surveyed on their political affiliations, which identified them as self-declared Democrats or Republicans, or as subjects who leaned more Democrat or Republican in terms of their political beliefs (called Independents, for the purposes of the study).
They also took part in a survey that asked them a number of seemingly neutral questions about their aesthetic preferences in relation to a series of artworks, choosing favourites among similar-looking paintings or different lines of poetry.
After these exercises, the participants took part in tests where they were placed in groups – either based around political affiliations (Democrats or Republicans), or more neutral categorisations reflecting their answers about which artworks they preferred. In a third test, the groups were random.
While in these groups, the participants ran through an income allocation exercise, in which they could choose to allocate various amounts of money to themselves, to fellow group members, or to members of the other group.
The researchers expected to find bias in terms of these income allocations based around political mindsets, with people giving themselves more money, along with people who shared their political persuasion. But they also found something else.
“We compare Democrats with D-Independents and find that party members do show more in-group bias; on average, their choices led to higher income for in-group participants,” the authors explain in their study.
“Yet, these party-member participants also show more in-group bias in a second nonpolitical setting. Hence, identification with the group is not necessarily the driver of in-group bias, and the analysis reveals a set of subjects who consistently shows in-group bias, while another does not.”
According to the data, there is a subpopulation of 'groupy' people and a subpopulation of 'non-groupy' people. The actions of the former are shaped by simply being in a group setting, making them more likely to demonstrate bias against those outside their group.
By contrast, non-groupy individuals don't display this tendency, acting much the same way whether or not they're in a group setting. These non-groupy individuals also seem to make faster decisions than groupy people, the team found.
“We don’t know if non-groupy people are faster generally,” Kranton says.
“It could be they’re making decisions faster because they’re not paying attention to whether somebody is in their group or not each time they have to make a decision.”
Of course, as illuminating as the discovery of this apparent trait is, a lot more research is needed before we can be sure something distinct has been identified here.
After all, this is a fairly small study. The researchers acknowledge that the same kinds of experiments need to be run with participants in several different settings, both to shore up the foundations of the groupiness concept and to identify what predisposes people to a groupy or non-groupy mindset.
“There’s some feature of a person that causes them to be sensitive to these group divisions and use them in their behaviour across at least two very different contexts,” one of the team, Duke University psychologist Scott Huettel, explains.
“We didn’t test every possible way in which people differentiate themselves; we can’t show you that all group-minded identities behave this way. But this is a compelling first step.”
From Sinatra to Katy Perry, celebrities have long sung about the power of a smile — how it picks you up, changes your outlook, and generally makes you feel better. But is it all smoke and mirrors, or is there a scientific backing to the claim?
Groundbreaking research from the University of South Australia confirms that the act of smiling can trick your mind into being more positive, simply by moving your facial muscles.
With the world in crisis amid COVID-19, and alarming rises in anxiety and depression in Australia and around the world, the findings could not be more timely.
The study, published in Experimental Psychology, evaluated the impact of a covert smile on perception of face and body expressions. In both scenarios, a smile was induced by participants holding a pen between their teeth, forcing their facial muscles to replicate the movement of a smile.
The research found that facial muscular activity not only alters the recognition of facial expressions but also body expressions, with both generating more positive emotions.
Lead researcher and human and artificial cognition expert, UniSA’s Dr Fernando Marmolejo-Ramos says the finding has important insights for mental health.
“When your muscles say you’re happy, you’re more likely to see the world around you in a positive way,” Dr Marmolejo-Ramos says.
“In our research we found that when you forcefully practise smiling, it stimulates the amygdala — the emotional centre of the brain — which releases neurotransmitters to encourage an emotionally positive state.
“For mental health, this has interesting implications. If we can trick the brain into perceiving stimuli as ‘happy’, then we can potentially use this mechanism to help boost mental health.”
The study replicated findings from the ‘covert’ smile experiment by evaluating how people interpret a range of facial expressions (spanning frowns to smiles) using the pen-in-teeth mechanism; it then extended this using point-light motion images (spanning sad walking videos to happy walking videos) as the visual stimuli.
Dr Marmolejo-Ramos says there is a strong link between action and perception.
“In a nutshell, perceptual and motor systems are intertwined when we emotionally process stimuli,” Dr Marmolejo-Ramos says.
“A ‘fake it ‘til you make it’ approach could have more credit than we expect.”
Fernando Marmolejo-Ramos, Aiko Murata, Kyoshiro Sasaki, Yuki Yamada, Ayumi Ikeda, José A. Hinojosa, Katsumi Watanabe, Michal Parzuchowski, Carlos Tirado, Raydonal Ospina. Your Face and Moves Seem Happier When I Smile. Experimental Psychology, 2020; 67 (1): 14 DOI: 10.1027/1618-3169/a000470