Tag archive: Psychology

The Terrible Costs of a Phone-Based Childhood (The Atlantic)

theatlantic.com

The environment in which kids grow up today is hostile to human development.

By Jonathan Haidt

Photographs by Maggie Shannon

MARCH 13, 2024


[Photo: two teens sit on a bed looking at their phones]


Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

[Chart: Number of emergency-department visits for nonfatal self-harm per 100,000 children. Source: Centers for Disease Control and Prevention]

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.


As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had enjoyed joyous and unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.

[Triptych: teens on their phones at the mall, in a park, and in a bedroom]

2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.

[Photo: overhead view of teens’ hands holding phones]

In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one‐to‐one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

[Chart: Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders. Source: Higher Education Research Institute]

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents in front of their laptop trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnosis manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

[Chart: Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” Source: Monitoring the Future]

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often feels meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.

[Diptych: teens on their phones, on a couch and on a swing]

7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
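
To make the incentive structure concrete, here is a minimal Python sketch of the trap described above. The payoff numbers are hypothetical, invented for illustration rather than taken from the Bursztyn experiment or any survey; what matters is the shape of the incentives: once most peers are on a platform, staying on beats opting out alone, even though everyone would be better off in a world where no one had joined.

# A toy model of the social-media collective-action trap described above.
# All payoff values are hypothetical illustrations, not figures from the
# Bursztyn study or any survey; only the shape of the incentives matters.

def utility(uses_platform: bool, peer_share: float) -> float:
    """Toy utility for one adolescent.

    uses_platform: whether this person keeps an account
    peer_share:    fraction of peers who have accounts (0.0 to 1.0)
    """
    if uses_platform:
        cost_of_use = -3.0                      # time lost, anxiety, social comparison
        connection_benefit = 2.0 * peer_share   # staying in the loop with peers
        return cost_of_use + connection_benefit
    # Opting out costs little when few peers are on the platform, but a lot
    # (exclusion from gossip, plans, and photos) when most of them are.
    return -4.0 * peer_share

for peer_share in (0.0, 0.6, 1.0):
    on, off = utility(True, peer_share), utility(False, peer_share)
    best = "use" if on > off else "abstain"
    print(f"peers on platform: {peer_share:4.0%}   "
          f"use: {on:+.1f}   abstain: {off:+.1f}   best individual move: {best}")

# When everyone else is on (peer_share = 1.0), using (-1.0) beats abstaining
# alone (-4.0), so no one quits unilaterally; you would need to be paid to
# deactivate. Yet the all-on outcome (-1.0 each) is worse than the all-off
# outcome (0.0 each), which is why most students in the study preferred a
# world without the platform and would deactivate for free if others did too.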

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The main reason the phone-based childhood is so harmful is that it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.

[Photo: a teen on her phone in her room]

Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

The Causes of Climate Change (Psychology Today)

Human-caused climate change is not our main challenge: It is certain values.

Ilan Kelman Ph.D.

Posted February 21, 2021 

We are told that 2030 is a significant year for global sustainability targets. What could we really achieve comprehensively from now until then, especially with climate change dominating so many discussions and proposals?

[Photo by Ilan Kelman: more sustainable transport on water and land, with many advantages beyond tackling climate change (Leeuwarden, the Netherlands)]

Several United Nations agreements use 2030 for their timeframe, including the Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction, the Paris Agreement for tackling human-caused climate change, and the Addis Ababa Action Agenda on Financing for Development. Aside from the oddity of having separate agreements with separate approaches from separate agencies to achieve similar goals, climate change is often explicitly separated as a topic. Yet it brings little that is new to the overall and fundamental challenges causing our sustainability troubles.

Consider what would happen if tomorrow we magically reached exactly zero greenhouse gas emissions. Overfishing would continue unabated through what is termed illegal, unreported, and unregulated fishing, often in protected areas such as Antarctic waters. Demands from faraway markets would still devastate nearshore marine habitats and undermine local practices serving local needs.

Deforestation would also continue. Examples are illegal logging in protected areas of Borneo and slash-and-burn clearing of the Amazon rainforest, often to plant crops destined for supermarket shelves appealing to affluent populations. Environmental exploitation and ruination did not begin with, and are not confined to, climate change.

A similar ethos persists for human exploitation. No matter how awful the harm, human trafficking, organ harvesting, child marriage, child labour, female genital mutilation, and arms deals would not end just because greenhouse gas emissions did.

If we solved human-caused climate change, then humanity—or, more to the point, certain sectors of humanity—would nonetheless display horrible results in wrecking people and ecosystems. It comes from a value favouring immediate exploitation of any resource without worrying about long-term costs. It sits alongside the value of choosing to live out of balance with the natural environment from local to global scales.

These are exactly the same values causing the climate to change quickly and substantially due to human activity. In effect, it is about using fossil fuels as a resource as rapidly as possible, irrespective of the negative social and environmental consequences.

Changing these values represents the fundamental challenge. Doing so ties together all the international efforts and agreements.

The natural environment, though, does not exist in isolation from us. Human beings have never been separate from nature, even when we try our best to divorce society from the natural environments around us. Our problematic values are epitomised by seeing nature as being at our service, different or apart from humanity.

Human-caused climate change is one symptom among many of such unsustainable and destructive values. Referring to the “climate crisis” or “climate emergency” is misguided since similar crises and emergencies manifest for similar reasons, including overfishing, deforestation, human exploitation, and an industry selling killing devices.

The real crisis and the real emergency are certain values. These values lead to behaviour and actions which are the antithesis of what the entire 2030 agenda aims to achieve. We do a disservice to ourselves and our place in the environment by focusing on a single symptom, such as human-caused climate change.

Revisiting our values entails identifying the fundamentals of what we want for 2030—and, more importantly, beyond. One of our biggest losses is in caring: caring for ourselves and for people and environments. Dominant values promote inward-looking, short-term thinking for action yielding immediate, superficial, and short-lived gains.

We ought to pivot sectors with these values toward caring about the long-term future, caring for people, caring for nature, and especially caring for ourselves—all of us—within and connected to nature. A caring pathway to 2030 is helpful, although we also need an agenda mapping out a millennium (and more) beyond this arbitrary year. Rather than using “social capital” and “natural capital” to define people and the environment, and rather than treating our skills and efforts as commodities, our values must reflect humanity, caring, integration with nature, and many other underpinning aspects.

When we fail to do so, human-caused climate change demonstrates what manifests, but it is only a single example from many. Placing climate change on a pedestal as the dominant or most important topic distracts from the depth and breadth required to identify problematic values and then morph them into constructive ones.

Focusing on the values that cause climate change and all the other ills is a baseline for reaching and maintaining sustainability. Then, we would not only solve human-caused climate change and achieve the 2030 agenda, but we would also address so much more for so much longer.

The way out of burnout (The Economist)

economist.com


A psychoanalyst explains why, for people feeling “burnt out”, simply trying to relax doesn’t always work

July 28, 2016


By Josh Cohen

A patient of mine named Elliot recently took a week off from his demanding job as a GP. He felt burnt out and badly needed to rest. The plan was to sleep late, read a novel, take the odd leisurely walk, maybe catch up on “Game of Thrones”. But somehow he found himself instead packing his schedule with art museums, concerts, theatre, meetings with friends in hot new bars and restaurants. Then there were the visits to the gym, Spanish lessons, some flat-pack furniture assembly.

During the first of his twice-weekly evening sessions, he wondered if he shouldn’t slow down. He felt as exhausted as ever. Facebook and Twitter friends were joking about how it all sounded like harder work than work. “I’m trying to figure out how I’ve managed to be doing so much when I didn’t want to be doing anything. Somehow not doing anything seems impossible. I mean, how can you just…do nothing?!”

When Elliot protests that he can’t just do nothing, he is seeing and judging himself from the perspective of a culture that looks with disdain at anything that smacks of inactivity. Under constant self-scrutiny as to whether he is being sufficiently productive, he feels ashamed when he judges himself to have come up short in this regard. But this leaves him at once too drained to work and unable to rest.

As I describe in my feature for the August/September issue of “1843”, this is the basic predicament of the burnout sufferer: a feeling of exhaustion accompanied by a nervy compulsion to go on regardless is a double bind that makes it very difficult to know how to cope. Burnout involves the loss of the capacity to relax, to “just do nothing”. It prevents an individual from embracing the ordinary pleasures – sleep, long baths, strolling, long lunches, meandering conversation – that induce calm and contentment. It can be counterproductive to recommend relaxing activities to someone who complains that the one thing they cannot do is relax.

So what does it take to recover the capacity to do nothing, or very little? I might be expected at this point to leap to psychoanalysis as an instant remedy. But psychoanalysis is emotionally demanding, time-consuming and often expensive. Nor does it work for everyone (a basic truth of all therapies, physical or mental).

In less severe cases of burnout, it is often the case that difficulties inducing nervous exhaustion are more external than internal. Time and energy may be drained by life events (bereavement, divorce, changes in financial status and so on) as well as the demands of work.

In such cases, it is worth turning in the first instance to more external solutions – cutting working hours as much as possible, carving out more time to relax or for contemplative practices such as yoga and meditation. The search for such a remedy can matter as much as the remedy itself: merely listening and attending to the needs of the inner self, as opposed to the demands of the outside world, can have a transformative effect.

But such solutions will seem unrealistic to some sufferers both practically and psychologically. Practically in the sense that many of us are employed in sectors that demand punishing hours and unstinting commitment; psychologically in the sense that reducing working hours, and so taking oneself out of the highest levels of the game, is likely to induce more rather than less anxiety in someone driven relentlessly to achieve more.

So while there are many means by which we can be helped to relax, the predicament of severe burnout is precisely that you cannot be helped to relax. Where burnout has psychological roots, psychoanalysis may be able to help.

One way is its “form”. The nervous exhaustion of burnout results from the sufferer’s enslavement to an endless to-do list packed with short- and long-range tasks. In a psychotherapy session, you sit or lie down and begin to talk with no particular agenda, letting yourself go wherever your mind takes you. For portions of a session you might be silent, discovering the value of simply being with someone, without having to justify or account for yourself, and coming to appreciate what the American psychoanalyst Jonathan Lear calls “mental activity without a purpose.”

Another way is the “content” of psychoanalysis. Talking to a therapist can help us discover those elements in our own history and character that make us particularly vulnerable to specific difficulties such as burnout. In my feature for “1843”, I discuss how two patients came from early childhood to associate their worth and value with their levels of achievement. Under constant pressure from within to “be their best”, they were liable to feel empty and exhausted when, inevitably, they felt they’d failed to live up to this ideal self-image.

This was very much the case for Elliot, and goes some way to explaining why the idea of “just doing nothing” so scandalised him. Even today, as they approach old age, Elliot cannot imagine his parents putting their feet up to talk, read or watch TV. He remembers family meals taken quickly, with one or both parents in a hurry to rush off to one commitment or another. His own life was heavily scheduled with homework and extra-curricular lessons, and he was never more forcefully admonished by either parent than when he was being “lazy”. “They were kind of compulsively active”, he said, “and made me feel it was shameful to waste time. You could imagine the seats of their chairs were rigged to administer a jolt of current if they sat on them for more than ten minutes.” Only now is he beginning to ask why they, and he in turn, are like this, and why being at rest for any length of time is equivalent in their minds to “wasting” it.

Insight like this can be helpful to challenge our unthinkingly internalised habits of working and our dogmas as to what constitutes a “productive” use of our time. It encourages us to think about what kind of life would be worth living, rather than simply living the life we assume we’re stuck with.


Is there more to burnout than working too hard?

The Economist

Josh Cohen argues that the root of the problem lies deeper than that


June 29, 2016

By Josh Cohen

When Steve first came to my consulting room, it was hard to square the shambling figure slumped low in the chair opposite with the young dynamo who, so he told me, had only recently been putting in 90-hour weeks at an investment bank. Clad in baggy sportswear that had not graced the inside of a washing machine for a while, he listlessly tugged his matted hair, while I tried, without much success, to picture him gliding imperiously down the corridors of some glassy corporate palace.

Steve had grown up as an only child in an affluent suburb. He recalls his parents, now divorced, channelling the frustrations of their loveless, quarrelsome marriage into the ferocious cultivation of their son. The straight-A grades, baseball-team captaincy and Ivy League scholarship he eventually won had, he felt, been destined pretty much from the moment he was born. “It wasn’t so much like I was doing all this great stuff, more like I was slotting into the role they’d already scripted for me.” It seemed as though he’d lived the entirety of his childhood and adolescence on autopilot, so busy living out the life expected of him that he never questioned whether he actually wanted it.

Summoned by the bank from an elite graduate finance programme in Paris, he plunged straight into its turbocharged working culture. For the next two years, he worked on the acquisition of companies with the same breezy mastery he’d once brought to the acquisition of his academic and sporting achievements. Then he realised he was spending a lot of time sunk in strange reveries at his workstation, yearning to go home and sleep. When the phone or the call of his name woke him from his trance, he would be gripped by a terrible panic. “One time this guy asked me if I was OK, like he was really weirded out. So I looked down and my shirt was drenched in sweat.”

One day a few weeks later, when his 5.30am alarm went off, instead of leaping out of bed he switched it off and lay there, staring at the wall, certain only that he wouldn’t be going to work. After six hours of drifting between dreamless sleep and blank wakefulness, he pulled on a tracksuit and set off for the local Tesco Metro, piling his basket with ready meals and doughnuts, the diet that fuelled his box-set binges.

Three months later, he was transformed into the inertial heap now slouched before me. He did nothing; he saw no one. The concerned inquiries of colleagues quickly tailed off. He was intrigued to find the termination of his employment didn’t bother him. He spoke to his parents in Chicago only as often as was needed to throw them off the scent. They knew the hours he’d been working, so didn’t expect to hear from him all that much, and he never told them anything important anyway.

Can anyone say they’ve never felt some small intimation of Steve’s urge to shut down? I certainly have, sitting glassy-eyed on the sofa at the end of a long working day. My listlessness is tugged by the awareness, somewhere at the edge of my consciousness, of an expanding to-do list, and of unread messages and missed calls vibrating unforgivingly a few feet away. But my sullen inertia plateaus when I drop my eyes to the floor and see a glass or a newspaper that needs picking up. The object in question seems suddenly to radiate a repulsive force that prevents me from so much as extending my forearm. My mind and body scream in protest against its outrageous demand that I bend and retrieve it. Why, I plead silently, should I have to do this? Why should I have to do anything ever again?

We commonly use the term “burnout” to describe the state of exhaustion suffered by the likes of Steve. It occurs when we find ourselves taken over by this internal protest against all the demands assailing us from within and without, when the momentary resistance to picking up a glass becomes an ongoing state of mind.

Burnout didn’t become a recognised diagnosis until 1974, when the German-American psychologist Herbert Freudenberger applied the term to the increasing number of cases he encountered of “physical or mental collapse caused by overwork or stress”. The relationship to stress and anxiety is crucial, for it distinguishes burnout from simple exhaustion. Run a marathon, paint your living room, catalogue your collection of tea caddies, and the tiredness you experience will be infused with a deep satisfaction and faintly haloed in smugness – feelings that confirm you’ve discharged your duty to the world for at least the remainder of the day.

The exhaustion experienced in burnout combines an intense yearning for this state of completion with the tormenting sense that it cannot be attained, that there is always some demand or anxiety or distraction which can’t be silenced. In his 1960 novel “A Burnt-Out Case” (the title may have helped bring the term into general circulation), Graham Greene parallels the mental and spiritual burnout of Querry, the protagonist, with the “burnt-out cases” of leprosy he witnesses in the Congo. Querry believes he’s “come to the end of desire”, his emotions amputated like the limbs of the lepers he encounters, and the rest of his life will be endured in a state of weary indifference.

But Querry’s predicament is that, as long as he’s alive, he can’t reach a state of impassivity; there will always be something or someone to disturb him. I frequently hear the same yearning expressed in my consulting room – the wish for the world to disappear, for a cessation of any feelings, whether positive or negative, that intrude on the patient’s peace, alongside the painful awareness that the world’s demands are waiting on the way out.

You feel burnout when you’ve exhausted all your internal resources, yet cannot free yourself of the nervous compulsion to go on regardless. Life becomes something that won’t stop bothering you. Among its most frequent and oppressive symptoms is chronic indecision, as though all the possibilities and choices life confronts you with cancel each other out, leaving only an irritable stasis.

Anxieties about burnout seem to be everywhere these days. A quick glance through the papers yields stories of young children burnt out by exams, teenagers by the never-ending cacophony of social media, women by the competing demands of work and motherhood, couples by a lack of time for each other and their family life.

But while it may seem to be a problem rooted in our cultural circumstances, burnout has a history stretching back many centuries. The condition of melancholic world-weariness was recognised across the ancient world – it is the voice that speaks out in the biblical book of Ecclesiastes (“All is vanity! What does a man gain by all the toil at which he toils under the sun?”), and it was diagnosed by the earliest Western medical authorities, Hippocrates and Galen. It appears in medieval theology as acedia, a listless indifference to worldly life brought about by spiritual exhaustion. During the Renaissance, a period of relentless change, Albrecht Dürer’s 1514 engraving “Melencolia I” was the most celebrated of many images depicting man despondent at the transience of life.

But it was not until the second half of the 19th century that writers began to link this condition to the specific stresses of modern life. In 1879, the American neurologist George Beard published “Neurasthenia (nervous exhaustion) with remarks on treatment”, identifying neurasthenia as an illness endemic to the pace and strain of modern industrial life. The fin-de-siècle neurasthenic, in whom exhaustion and innervation converge, uncannily anticipates the burnout of today. They have in common an overloaded and overstimulated nervous system.

A culture of chronic overwork is prevalent within many professions, from banking and law to media and advertising, health, education and other public services. A 2012 study by the University of Southern California found that every one of the 24 entry-level bankers it followed developed a stress-related illness (such as insomnia, alcoholism or an eating disorder) within a decade on the job. A much larger 2014 survey by eFinancialCareers of 9,000 financial workers in cities across the globe (including Hong Kong, London, New York and Frankfurt) showed bankers typically working between 80 and 120 hours a week, the majority feeling at least “partially” burnt out, with somewhere between 10% and 20% (depending on the country) describing themselves as “totally” burnt out.

A young banker who sees me in the early morning, the only available slot in her working day, often leaves a message at 3am to let me know she won’t make it as she’s only just leaving the office – a predicament especially bitter because her psychoanalytic session is the one hour in the day in which she can switch off her phone and find some respite from her job. Increasing numbers of my patients say they value a session simply because it provides a rare chance for a moment of stillness freed from the obligation to talk.

A walk in the country or a week on the beach should, theoretically, provide a similar sense of relief. But such attempts at recuperation are too often foiled by the nagging sense of being, as one patient put it, “stalked” by the job. A tormenting dilemma arises: keep your phone in your pocket and be flooded by work-related emails and texts; or switch it off and be beset by unshakeable anxiety over missing vital business. Even those who succeed in losing the albatross of work often quickly fall prey to the virus they’ve spent the previous weeks fending off.

Burnout increases as work insinuates itself more and more into every corner of life – if a spare hour can be snatched to read a novel, walk the dog or eat with one’s family, it quickly becomes contaminated by stray thoughts of looming deadlines. Even during sleep, flickering images of spreadsheets and snatches of management speak invade the mind, while slumbering fingers hover over the duvet, tapping away at a phantom keyboard.

Some companies have sought to alleviate the strain by offering sessions in mindfulness. But the problem with scheduling meditation as part of the working day is that it becomes yet another task at which you can succeed or fail. Those who can’t clear their minds must simply try harder – and the very exercises intended to ease anxiety can end up exacerbating it. Schemes cooked up by management theorists since the 1970s to alleviate the tedium and tension of the office through what might be called the David Brent effect – the chummy, backslapping banter, the paintballing away-days, the breakout rooms in bouncy castles – have simply blurred the lines between work and leisure, and so ended up screwing the physical and mental confines of the workplace even tighter.

But it is not just our jobs that overwork our minds. Electronic communication and social media have come to dominate our daily lives, in a transformation that is unprecedented and whose consequences we can therefore only guess at. My consulting room hums daily with the tense expectation induced by unanswered texts and ignored status updates. Our relationships seem to require a perpetual drip-feed of electronic reassurances, and our very sense of self is defined increasingly by an unending wait for the verdicts of an innumerable and invisible crowd of virtual judges.

And, while we wait for reactions to the messages we send out, we are bombarded by alerts on our phones and tablets, dogged by apps that measure and share our personal data, and subjected to an inundation of demands to like, retweet, upload, subscribe or buy. The burnt-out case of today belongs to a culture without an off switch.

In previous generations, depression was likely to result from internal conflicts between what we want to do and what authority figures – parents, teachers, institutions – wish to prevent us from doing. But in our high-performance society, it’s feelings of inadequacy, not conflict, that bring on depression. The pressure to be the best workers, lovers, parents and consumers possible leaves us vulnerable to feeling empty and exhausted when we fail to live up to these ideals. In “The Weariness of the Self” (1998), an influential study of modern depression, the French sociologist Alain Ehrenberg argues that in the liberated society which emerged during the 1960s, guilt and obedience play less of a role in the formation of the self than the drive to achieve. The slogan of the “attainment society” is “I can” rather than “I must”.

A more prohibitive society, which tells us we can’t have everything, sets limits on our sense of self. Choose to be a bus conductor and you can’t be a concert pianist; a full-time parent will never be chairman of the board. In our attainment society, we are constantly told that we can be, do and have anyone or anything we want. But, as anyone who’s tried to decide between 22 nearly identical brands of yoghurt in an American organic hypermarket can confirm, limitless choice debilitates far more than it liberates.

The depressive burnout, Ehrenberg suggests, feels incapable of making meaningful choices. This, as we discovered in the course of analysis, is Steve’s predicament. In his emotionally chilly childhood home, the only attention he received from his parents was their rigorous monitoring of his schoolwork and extra-curricular activities. In his own mind, he was worth caring about only because of his achievements. So while he accrued awards and knowledge and skills, he never learned to be curious about who he might be or what he might want in life. Having unthinkingly acquiesced in his parents’ prescription of what was best for him, he simply didn’t know how to deal with, or even make sense of, the sudden, unexpected feeling that the life he was living wasn’t the one for him.

Steve presents an intriguing paradox: what appears from the outside to have been a life driven by the active pursuit of goals feels to him to be oddly inert, a lifeless slotting-in, as he puts it, to a script he didn’t write. “Genuine force of habit”, suggested the great philosophical misanthrope Arthur Schopenhauer in 1851, might appear to be an expression of our innate character, but “really derives from the inertia which wants to spare the intellect the will, the labour, difficulty and sometimes the danger involved in making a fresh choice.” Schopenhauer has a point. Steve is coming to understand that his life followed the shape it did not from the blooming of his deepest desires but because he never bothered to question what he had been told.

“You know”, he said to me one day, “it’s not like I want to be this pathetic loser. I want to get up tomorrow, get back in the gym, find a new job, see people again. But it’s like even as I say I’m gonna do all this, some voice in me says, ‘no I’m not, no way am I doing that.’ And then I can’t work out if I feel depressed or relieved, and the confusion sends me crazy.”

I suggested to him that he was in this position because he had realised that he had almost no hand in choosing his life. His own desire was like a chronically neglected muscle; perhaps our job was to nurture it for the first time, to train it for the task of making basic life choices.

The same predicament arose in a different, perhaps subtler way in Susan, a successful music producer who first came to see me in the thick of an overwhelming depressive episode. She had come from Berlin to London six months previously to take up a new and prestigious job, the latest move in an impressive career that had seen her work in glamorous locations across the world.

She had grown up in a prosperous and loving family in a green English suburb. Unlike Steve’s parents, hers had been – and continued to be – supportive of the unexpected professional and personal path their daughter had carved for herself. But they resembled Steve’s parents in one respect: the unvarying message, communicated through the course of her childhood, that she had the potential to be and do anything. The emotional and financial investment they made in her musical and academic activities showed their willingness to back up their enthusiasm with actions. While Susan appeared to follow her own chosen path, there came a point where her parents’ unstinting support and encouragement made it difficult to identify where their wishes stopped and hers began.

For all their differences, Steve’s and Susan’s parents were alike in protecting the child from awareness of the limits imposed by both themselves and the world. Susan would complain that the present, the life she was living moment to moment, felt unreal to her. Only the future really mattered, for that was where her ideal life resided. “If I just wait a little longer”, she would remark in a tone of wry despondency, “there’ll be this magically transformative event and everything will come right.”

This belief, she had come to realise, had taken a suffocating hold on her life: “the longer I live in wait for this magical event, the more I’m not living this life, which is sad, given it’s the only one I’ve got.” Forever anticipating the arrival of the day that would change her life for ever, Susan had come to view her current existence with a certain contempt, as a travesty of the perfect one she might have. Her house, her job, the man she was seeing – all of these were thin shadows of the ideal she was pursuing. But the problem with an ideal is that nothing in reality can ever be remotely comparable to it; it tantalises with a future that can never be attained.

Feeling exhausted and emptied by this chase, she would retreat into two contradictory impulses: the first was a compulsion to work, asking the hydra-headed beast of the office to eat up all her time and mental energy. But alongside this, frequently accompanied by chronic insomnia, was a yearning for the opposite. She would fantasise in our sessions about going home and sleeping, waking only for stretches of blissfully catatonic inactivity over uninterrupted, featureless weeks. Occasionally she managed to steal the odd day to veg out, only for a rising panic to jolt her back into work. In frenzied activity and depressive inertia, she found a double strategy for escaping the inadequacies of the present.

Susan’s depressive exhaustion arose from the disparity between the enormous effort she dedicated to contemplating her future and the much smaller one she devoted to discovering and realising her desires in the present. In this regard, she is the uncanny mirror image of Steve: Susan was frozen by the suspicion there was always something else to choose; Steve was shackled by the incapacity to choose at all.

Psychoanalysis is often criticised for being expensive, demanding and overlong, so it might seem surprising that Susan and Steve chose it over more time-limited, evidence-based and results-oriented behavioural therapies. But results-oriented efficiency may have been precisely the malaise they were trying to escape. Burnout is not simply a symptom of working too hard. It is also the body and mind crying out for an essential human need: a space free from the incessant demands and expectations of the world. In the consulting room, there are no targets to be hit, no achievements to be crossed off. The amelioration of burnout begins in finding your own pool of tranquillity where you can cool off.■

In this article, the clinical cases have been disguised, and the names changed, to protect confidentiality.


ILLUSTRATIONS IZHAR COHEN

Socially constructed silence? Protecting policymakers from the unthinkable. (Open Democracy)

The scientific community is profoundly uncomfortable with the storm of political controversy that climate research is attracting. What’s going on?

Paul Hoggett and Rosemary Randall

6 June 2016


    Credit: By NASA Scientific Visualization Studio/Goddard Space Flight Center. Public Domain, Wikimedia.org.

    Some things can’t be said easily in polite company. They cause offence or stir up intense anxiety. Where one might expect a conversation, what actually occurs is what the sociologist Eviatar Zerubavel calls a ‘socially constructed silence.’

    In his book Don’t Even Think About It, George Marshall argues that after the fiasco of COP 15 at Copenhagen and ‘Climategate’—when certain sections of the press claimed (wrongly as it turned out) that leaked emails of researchers at the University of East Anglia showed that data had been manipulated—climate change became a taboo subject among most politicians, another socially constructed silence with disastrous implications for the future of climate action.

    In 2013-14 we carried out interviews with leading UK climate scientists and communicators to explore how they managed the ethical and emotional challenges of their work. While the shadow of Climategate still hung over the scientific community, our analysis drew us to the conclusion that the silence Marshall spoke about went deeper than a reaction to these specific events.

    Instead, a picture emerged of a community which still identified strongly with an idealised picture of scientific rationality, in which the job of scientists is to get on with their research quietly and dispassionately. As a consequence, this community is profoundly uncomfortable with the storm of political controversy that climate research is now attracting.

    The scientists we spoke to were among a minority who had become engaged with policy makers, the media and the general public about their work. A number of them described how other colleagues would bury themselves in the excitement and rewards of research, denying that they had any responsibility beyond developing models or crunching the numbers. As one researcher put it, “so many scientists just want to do their research and as soon as it has some relevance, or policy implications, or a journalist is interested in their research, they are uncomfortable.”

    We began to see how, for many researchers, this idealised picture of scientific practice might also offer protection at an unconscious level from the emotional turbulence aroused by the politicisation of climate change.

    In her classic study of the ‘stiff upper lip’ culture of nursing in the UK in the 1950s, the psychoanalyst and social researcher Isobel Menzies Lyth developed the idea of ‘social defences against anxiety,’ and it seems very relevant here. A social defence is an organised but unconscious way of managing the anxieties that are inherent in certain occupational roles. For example, the practice of what was then called the ‘task list’ system fragmented nursing into a number of routines, each one executed by a different person—hence the ‘bed pan nurse’, the ‘catheter nurse’ and so on.

    Ostensibly, this was done to generate maximum efficiency, but it also protected nurses from the emotions that were aroused by any real human involvement with patients, including anxiety, something that was deemed unprofessional by the nursing culture of the time. Like climate scientists, nurses were meant to be objective and dispassionate. But this idealised notion of the professional nurse led to the impoverishment of patient care, and meant that the most emotionally mature nurses were the least likely to complete their training.

    While it’s clear that social defences such as hyper-rationality and specialisation enable climate scientists to get on with their work relatively undisturbed by public anxieties, this approach also generates important problems. There’s a danger that these defences eventually break down and anxiety re-emerges, leaving individuals not only defenceless but with the additional burden of shame and personal inadequacy for not maintaining that stiff upper lip. Stress and burnout may then follow. 

    Although no systematic research has been undertaken in this area, there is anecdotal evidence of such burnout in a number of magazine articles like those by Madeleine Thomas and Faith Kearns, in which climate scientists speak out about the distress that they or others have experienced, their depression at their findings, and their dismay at the lack of public and policy response.

    Even if social defences are successful and anxiety is mitigated, this very success can have unintended consequences. By treating scientific findings as abstracted knowledge without any personal meaning, climate researchers have been slow to take responsibility for their own carbon footprints, thus running the risk of being exposed for hypocrisy by the denialist lobby. One research leader candidly reflected on this failure: “Oh yeah and the other thing [that’s] very, very important I think is that we ought to change the way we do research so we’re sustainable in the research environment, which we’re not now because we fly everywhere for conferences and things.”

    The same defences also contribute to the resistance of most climate scientists to participation in public engagement or intervention in the policy arena, leaving these tasks to a minority who are attacked by the media and even by their own colleagues. One of our interviewees who has played a major role in such engagement recalled being criticised by colleagues for “prostituting science” by exaggerating results in order to make them “look sexy.” “You know we’re all on the same side,” she continued, “why are we shooting arrows at each other, it is ridiculous.”

    The social defences of logic, reason and careful debate were of little use to the scientific community in these cases, and their failure probably contributed to internal conflicts and disagreements when anxiety could no longer be contained—so they found expression in bitter arguments instead. This in turn makes those that do engage with the public sphere excessively cautious, which encourages collusion with policy makers who are reluctant to embrace the radical changes that are needed.

    As one scientist put it when discussing the goal agreed at the Paris climate conference of limiting global warming to no more than 2°C: “There is a mentality in [the] group that speaks to policy makers that there are some taboo topics that you cannot talk about. For instance the two degree target on climate change…Well the emissions are going up like this (the scientist points upwards at a 45 degree angle), so two degrees at the moment seems completely unrealistic. But you’re not allowed to say this.”

    Worse still, the minority of scientists who are tempted to break the silence on climate change run the risk of being seen as whistleblowers by their colleagues. Another research leader suggested that—in private—some of the most senior figures in the field believe that the world is heading for a rise in temperature closer to six degrees than two. 

    “So repeatedly I’ve heard from researchers, academics, senior policy makers, government chief scientists, [that] they can’t say these things publicly,” he told us, “I’m sort of deafened, deafened by the silence of most people who work in the area that we work in, in that they will not criticise when there are often evidently very political assumptions that underpin some of the analysis that comes out.”

    It seems that the idea of a ‘socially constructed silence’ may well apply to crucial aspects of the interface between climate scientists and policy makers. If this is the case then the implications are very serious. Despite the hope that COP 21 has generated, many people are still sceptical about whether the rhetoric of Paris will be translated into effective action.

    If climate change work is stuck at the level of  ‘symbolic policy making’—a set of practices designed to make it look as though political elites are doing something while actually doing nothing—then it becomes all the more important for the scientific community to find ways of abandoning the social defences we’ve described and speak out as a whole, rather than leaving the task to a beleaguered and much-criticised minority.

    The rise and fall of peer review (Experimental History)

    experimentalhistory.substack.com

    Adam Mastroianni

    Dec 13, 2022


    Photo cred: my dad

    For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.

    Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”

    This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

    (Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)

    That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.

    Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

    The results are in. It failed. 

    Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.

    Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.

    It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning physics discoveries from the 1990s and 2000s because there aren’t enough of them.

    Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.

    What went wrong?

    Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

    It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In this study reviewers caught 30% of the major flaws, in this study they caught 25%, and in this study they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

    In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with this paper about dishonesty that clearly has fake data (ironic), these guys who have published dozens or even hundreds of fraudulent papers, and this debacle.

    Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.

    (When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)

    The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶

    Here’s another way that we can test whether peer review worked: did it actually earn scientists’ trust? 

    Scientists often say they take peer review very seriously. But people say lots of things they don’t mean, like “It’s great to e-meet you” and “I’ll never leave you, Adam.” If you look at what scientists actually do, it’s clear they don’t think peer review really matters.

    First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.

    Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place. 

    And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”

    Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner:

    I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.

    I used to think about all the ways we could improve peer review. Reviewers should look at the data! Journals should make sure that papers aren’t fraudulent! 

    It’s easy to imagine how things could be better—my friend Ethan and I wrote a whole paper on it—but that doesn’t mean it’s easy to make things better. My complaints about peer review were a bit like looking at the ~35,000 Americans who die in car crashes every year and saying “people shouldn’t crash their cars so much.” Okay, but how? 

    Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them. Maybe we can fix some things on the margins, but remember that right now we’re publishing papers that use capital T’s instead of error bars, so we’ve got a long, long way to go.

    What if we made peer review way stricter? That might sound great, but it would make lots of other problems with peer review way worse. 

    For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, “Help! A mysterious number is persecuting me,” and now they begin like, “Humans have been said, at various times and places, to exist, and even to have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smergdorf & Blugensnout, 1978; Stikkiwikket, 2002; von Fraud et al., 2018b)”. 

    This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he. “Whenever someone tells me they loved my paper,” he said, “I say thank you, even though I know they didn’t read it.” Stricter peer review would mean even more boring papers, which means even fewer people would read them.

    Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won’t count for anything unless peer reviewers like them makes you worse at thinking. It’s like being a teenager again: before you do anything, you ask yourself, “BUT WILL PEOPLE THINK I’M COOL?” When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything’s pretty much perfect right now, we need revolutionary ideas real bad.

    On the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the ~4.7 million articles they publish every year. Good luck!

    Peer review doesn’t work and there’s probably no way to fix it. But a little bit of vetting is better than none at all, right?

    I say: no way. 

    Imagine you discover that the Food and Drug Administration’s method of “inspecting” beef is just sending some guy (“Gary”) around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says “INSPECTED BY THE FDA.” You’d be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he’s going to miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.

    That’s what our current system of peer review does, and it’s dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?

    If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: “NOBODY HAS REALLY CHECKED WHETHER THIS PAPER IS TRUE OR NOT. IT MIGHT BE MADE UP, FOR ALL WE KNOW.” That would at least give people the appropriate level of confidence.

    Why did peer review seem so reasonable in the first place?

    I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.

    But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.

    If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.

    Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.

    Nobody was in charge of our peer review experiment, which means nobody has the responsibility of saying when it’s over. Seeing no one else, I guess I’ll do it: 

    We’re done, everybody! Champagne all around! Great work, and congratulations. We tried peer review and it didn’t work.

    Honestly, I’m so relieved. That system sucked! Waiting months just to hear that an editor didn’t think your paper deserved to be reviewed? Reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe? Spending a whole day emailing a journal begging them to let you use the word “years” instead of always abbreviating it to “y” for no reason (this literally happened to me)? We never have to do any of that ever again.

    I know we all might be a little disappointed we wasted so much time, but there’s no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that’s okay—it seemed like a good idea at the time, and now we know it wasn’t. That’s science! It will always be important for scientists to comment on each other’s ideas, of course. It’s just this particular way of doing it that didn’t work.

    What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

    Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it. 

    Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?

    I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.

    Psi and Science (Psychology Today)

    Why do some scientists refuse to consider the evidence for psi phenomena?

    Original article

    Posted June 17, 2022 | Reviewed by Ekua Hagan

    Key points

    • In a 2018 survey, over half of a sample of Americans reported a psi experience; a 2022 Brazilian survey revealed 70% had a precognitive dream.
    • Some scientists will not engage with the evidence for psi due to scientism.
    • The ideology of “scientism” is often associated with science, but leads to a lack of open-mindedness, which is contrary to true science.

    Psi phenomena, like telepathy and precognition, are controversial in academia. While a minority of academics (such as me) are open-minded about them, others believe that they are pseudo-scientific and that they can’t possibly exist because they contravene the laws of science.

    However, the phenomena are much less controversial to the general public. Surveys show significant levels of belief in psi. A survey of 1200 Americans in 2003 found that over 60% believed in extrasensory perception.1

    This high level of belief appears to stem largely from experience. In a 2018 survey, half of a sample of Americans reported they had an experience of feeling “as though you were in touch with someone when they were far away.” Slightly less than half reported an experience of knowing “something about the future that you had no normal way to know” (in other words, precognition). Just over 40% reported that they had received important information through their dreams.2

    Interestingly, a 2022 survey of over 1000 Brazilian people found higher levels of such anomalous experiences, with 70% reporting they had a precognitive dream at least once.3 This may imply that such experiences are more likely to be reported in Brazil, perhaps due to a cultural climate of greater openness.

    How can we account for the disconnect between the dismissal of psi phenomena by some scientists, and the openness of the general population? Is it that scientists are more educated and rational than other sections of the population, many of whom are gullible to superstition and irrational thinking?

    I don’t think it’s as simple as this.

    Evidence for Psi

    You might be surprised to learn that the evidence for phenomena such as telepathy and precognition is strong. As I point out in my book, Spiritual Science, this evidence has remained significant and robust over a massive range of studies over decades.

    In 2018, American Psychologist published an article by Professor Etzel Cardeña which carefully and systematically reviewed the evidence for psi phenomena, examining over 750 discrete studies. Cardeña concluded that there was a very strong case for the existence of psi, writing that the evidence was “comparable to that for established phenomena in psychology and other disciplines.”4

    For example, from 1974 to 2018, 117 experiments were reported using the “Ganzfeld” procedure, in which one participant attempts to “send” information about images to another, distant person. An overall analysis of the results showed a combined “hit rate” well above chance expectation, with odds against this being a chance result of many millions to one. Factors such as selective reporting bias (the so-called “file drawer effect”) and variations in experimental quality could not account for the results. Moreover, independent researchers reported statistically identical results.5
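
    To make the statistical reasoning concrete: the sketch below is not taken from Cardeña’s analysis and uses purely hypothetical tallies. It only illustrates how a modest hit rate, sustained over enough sessions, translates into very long odds against chance when each Ganzfeld trial is assumed to have a one-in-four probability of a hit under the null hypothesis; a simple normal approximation stands in for the more careful methods a real meta-analysis would use.

        # Illustrative sketch only: hypothetical Ganzfeld-style tallies, not data
        # from the studies cited in the article.
        from math import sqrt, erfc

        def odds_against_chance(hits, trials, p_chance=0.25):
            """Normal approximation to P(at least `hits` successes by pure chance)."""
            mean = trials * p_chance
            sd = sqrt(trials * p_chance * (1 - p_chance))
            z = (hits - mean) / sd               # standard deviations above chance
            p_value = 0.5 * erfc(z / sqrt(2))    # one-sided tail of the normal curve
            return z, p_value

        # Hypothetical figures: a 32% hit rate over 3,000 sessions vs. 25% expected.
        z, p = odds_against_chance(hits=960, trials=3000)
        print(f"z = {z:.1f}, p = {p:.1e} (odds against chance of roughly 1 in {1/p:,.0f})")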

    So why do some scientists continue to believe that there is no evidence for psi? In my view, the explanation lies in an ideology that could be called “scientism.”

    Scientism

    Scientism is an ideology that is often associated with science. It consists of a number of basic ideas, which are often stated as facts, even though they are just assumptions—e.g., that the world is purely physical in nature, that human consciousness is a product of brain activity, that human beings are biological machines whose behaviour is determined by genes, that anomalous phenomena such as near-death experiences and psi are unreal, and so on.

    Adherents to scientism see themselves as defenders of reason. They see themselves as part of a historical “enlightenment project” whose aim is to overcome superstition and irrationality. In particular, they see themselves as opponents of religion.

It’s therefore ironic that scientism has become a quasi-religion in itself. In their desire to spread their ideology, adherents to scientism often behave like religious zealots, demonising unwelcome ideas and disregarding any evidence that doesn’t fit with their worldview. They apply their notion of rationality in an extremist way, dismissing any phenomena outside their belief system as “woo.” Phenomena for which there is substantial scientific evidence, such as telepathy and precognition, are placed in the same category as creationism and conspiracy theories.

One example was a response to Etzel Cardeña’s American Psychologist article (cited above) by the longstanding skeptics Arthur Reber and James Alcock. Aiming to rebut Cardeña’s claims about the strength of the evidence for psi, they decided that their best approach was not to engage with the evidence at all, but simply to insist that it couldn’t possibly be valid because psi itself was theoretically impossible. As they wrote, “Claims made by parapsychologists cannot be true … Hence, data that suggest that they can are necessarily flawed and result from weak methodology or improper data analyses.”6

    A similar strategy was used by the psychologist Marija Branković in a recent paper in The European Journal of Psychology. After discussing a series of highly successful precognition studies by the researcher Daryl Bem, she dismisses them because three investigators were unable to replicate the findings.7 Branković neglects to mention that there have been 90 other replication attempts with a massively significant overall success rate, exceeding the standard of “decisive evidence” by a factor of 10 million.8
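For readers unfamiliar with the Bayesian vocabulary, the “decisive evidence” standard refers to a Bayes factor: the ratio of how well the data are predicted by the hypothesis that an effect exists versus the hypothesis that it does not. On one common rendering of Jeffreys’ scale, a Bayes factor above 100 counts as decisive. The sketch below simply classifies an assumed, illustrative Bayes factor against that scale; the value is not taken from the cited meta-analysis.

```python
def jeffreys_label(bf: float) -> str:
    """Rough labels for a Bayes factor (evidence for H1 over H0) on Jeffreys' scale."""
    if bf < 1:
        return "evidence favours the null"
    if bf < 3:
        return "barely worth mentioning"
    if bf < 10:
        return "substantial"
    if bf < 30:
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

# Assumed value for illustration: a Bayes factor of 1e9 would exceed the
# "decisive" threshold of 100 by a factor of ten million, the kind of
# multiple described in the text.
assumed_bf = 1e9
print(jeffreys_label(assumed_bf), f"(exceeds 100 by a factor of {assumed_bf / 100:,.0f})")
```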

    Beyond Scientism

    It’s worth considering for a moment whether psi really does contravene the laws of physics (or science), as many adherents to scientism suggest. For me, this is one of the most puzzling claims made by skeptics. Tellingly, the claim is often made by psychologists, whose knowledge of modern science may not be deep.

Anyone with a passing knowledge of some of the theories of modern physics—particularly quantum physics—is aware that reality is much stranger than it appears to common sense. Many theories suggest that our common-sense view of linear time may be false, and that our world is essentially “non-local,” allowing phenomena such as “entanglement” and “action at a distance.” I think it would be too much of a stretch to suggest that such theories explain precognition and telepathy, but they certainly allow for their possibility.

    A lot of people assume that if you’re a scientist, then you must automatically subscribe to scientism. But in fact, scientism is the opposite of true science. The academics who dismiss psi on the grounds that it “can’t possibly be true” are behaving in the same way as the fundamentalist Christians who refuse to consider the evidence for evolution. Skeptics who refuse to engage with the evidence for telepathy or precognition are acting in the same way as the contemporaries of Galileo who refused to look through his telescope, unwilling to face the possibility that their beliefs may need to be revised.

    References

1. Rice TW. Believe It Or Not: Religious and Other Paranormal Beliefs in the United States. J Sci Study Relig. 2003;42(1):95-106. doi:10.1111/1468-5906.00163

2. Wahbeh H, Radin D, Mossbridge J, Vieten C, Delorme A. Exceptional experiences reported by scientists and engineers. Explore (NY). 2018 Sep;14(5):329-341. doi: 10.1016/j.explore.2018.05.002. Epub 2018 Aug 2. PMID: 30415782.

    3. Monteiro de Barros MC, Leão FC, Vallada Filho H, Lucchetti G, Moreira-Almeida A, Prieto Peres MF. Prevalence of spiritual and religious experiences in the general population: A Brazilian nationwide study. Transcultural Psychiatry. April 2022. doi:10.1177/13634615221088701

4. Cardeña E. The experimental evidence for parapsychological phenomena: A review. American Psychologist. 2018;73(5):663-677. https://doi.org/10.1037/amp0000236

    5. Storm L, Tressoldi P. Meta-analysis of free-response studies 2009-2018: Assessing the noise-reduction model ten years on. J Soc Psych Res. 2020;(84):193-219.

6. Reber AS, Alcock JE. Searching for the impossible: Parapsychology’s elusive quest. American Psychologist. 2020;75(3):391-399. https://doi.org/10.1037/amp0000486

    7. Branković M. Who Believes in ESP: Cognitive and Motivational Determinants of the Belief in Extra-Sensory Perception. Eur J Psychol. 2019;15(1):120-139. doi:10.5964/ejop.v15i1.1689

    8. Bem D, Tressoldi P, Rabeyron T, Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Research. 2015;4:1188. doi:10.12688/f1000research.7177.2

    Why is climate ‘doomism’ going viral – and who’s fighting it? (BBC)

    bbc.com

    23 May 2022


    By Marco Silva
    BBC climate disinformation specialist

    Illustration of two hands holding electronic devices showing melting planets.

    Climate “doomers” believe the world has already lost the battle against global warming. That’s wrong – and while that view is spreading online, there are others who are fighting the viral tide.

    As he walked down the street wearing a Jurassic Park cap, Charles McBryde raised his smartphone, stared at the camera, and hit the record button.

    “Ok, TikTok, I need your help.”

    Charles is 27 and lives in California. His quirky TikTok videos about news, history, and politics have earned him more than 150,000 followers.

    In the video in question, recorded in October 2021, he decided it was time for a confession.

    “I am a climate doomer,” he said. “Since about 2019, I have believed that there’s little to nothing that we can do to actually reverse climate change on a global scale.”

    Climate doomism is the idea that we are past the point of being able to do anything at all about global warming – and that mankind is highly likely to become extinct.

    That’s wrong, scientists say, but the argument is picking up steam online.

Still from one of Charles McBryde’s videos on TikTok. “I am a climate doomer,” he told his followers last October.
    ‘Give me hope’

    Charles admitted to feeling overwhelmed, anxious and depressed about global warming, but he followed up with a plea.

    “I’m calling on the activists and the scientists of TikTok to give me hope,” he said. “Convince me that there’s something out there that’s worth fighting for, that in the end we can achieve victory over this, even if it’s only temporary.”

    And it wasn’t long before someone answered.

    Facing up to the ‘doomers’

    Alaina Wood is a sustainability scientist based in Tennessee. On TikTok she’s known as thegarbagequeen.

    After watching Charles’ video, she posted a reply, explaining in simple terms why he was wrong.

    Alaina makes a habit of challenging climate doomism – a mission she has embraced with a sense of urgency.

    “People are giving up on activism because they’re like, ‘I can’t handle it any more… This is too much…’ and ‘If it really is too late, why am I even trying?'” she says. “Doomism ultimately leads to climate inaction, which is the opposite of what we want.”

Sustainability scientist and TikToker Alaina Wood is on a mission to reassure people it is not too late for the climate.
    Why it’s not too late

    Climate scientist Dr Friederike Otto, who has been working with the UN’s Intergovernmental Panel on Climate Change, says: “I don’t think it’s helpful to pretend that climate change will lead to humanity’s extinction.”

    In its most recent report, the IPCC laid out a detailed plan that it believes could help the world avoid the worst impacts of rising temperatures.

    It involves “rapid, deep and immediate” cuts in emissions of greenhouse gases – which trap the sun’s heat and make the planet hotter.

    “There is no denying that there are large changes across the globe, and that some of them are irreversible,” says Dr Otto, a senior lecturer in climate science at the Grantham Institute for Climate Change and the Environment.

    “It doesn’t mean the world is going to end – but we have to adapt, and we have to stop emitting.”

    People carry a sign as they attend a protest during the UN Climate Change Conference COP26 in Glasgow.
    Fertile ground

    Last year, the Pew Research Center in the US ran a poll covering 17 countries, focusing on attitudes towards climate change.

    An overwhelming majority of the respondents said they were willing to change the way they lived to tackle the problem.

    But when asked how confident they were that climate action would significantly reduce the effects of global warming, more than half said they had little to no confidence.

    Doomism taps into, and exaggerates, that sense of hopelessness. In Charles’s case, it all began with a community on Reddit devoted to the potential collapse of civilisation.

    “The most apocalyptic language that I would find was actually coming from former climate scientists,” Charles says.

    It’s impossible to know whether the people posting the messages Charles read were genuine scientists.

    But the posts had a profound effect on him. He admits: “I do think I fell down the rabbit hole.”

    Alaina Wood, the sustainability scientist, says Charles’s story is not unusual.

    “I rarely at this point encounter climate denial or any other form of misinformation [on social media],” she says. “It’s not people saying, ‘Fossil fuels don’t cause climate change’ … It’s people saying, ‘It’s too late’.”

    TikTok’s rules forbid misinformation that causes harm. We sent the company some videos that Alaina has debunked in the past. None was found to have violated the rules.

    TikTok says it works with accredited fact-checkers to “limit the spread of false or misleading climate information”.

    Young and pessimistic

    Although it can take many forms (and is thus difficult to accurately measure), Alaina says doomism is particularly popular among young people.

    “There’s people who are climate activists and they’re so scared. They want to make change, but they feel they need to spread fear-based content to do so,” she says.

    “Then there are people who know that fear in general goes viral, and they’re just following trends, even if they don’t necessarily understand the science.”

    I’ve watched several of the videos that she debunked. Invariably, they feature young users voicing despair about the future.

    “Let me tell you why I don’t know what I want to do with my life and why I’m not planning,” says one young woman. “By the year 2050, most of us should be underwater from global warming.” But that’s a gross exaggeration of what climate scientists are actually telling us.

    “A lot of that is often fatalistic humour, but people on TikTok are interpreting that as fact,” Alaina says.

    But is Charles still among them, after watching Alaina’s debunks? Is he still a climate doomer?

    “I would say no,” he tells me. “I have convinced myself that we can get out of this.”

Opinion – Pablo Acosta: Behavioral sciences can complement the traditional way of making policy (Folha de S.Paulo)

www1.folha.uol.com.br

Feb 22, 2022, 4:00 a.m.


Traditionally, policymakers design public policies around a rational economic agent: a person capable of weighing every decision and maximizing utility in their own interest. In doing so, they ignore the powerful psychological and social influences that shape human behavior and disregard the fact that people are fallible, inconsistent, and emotional: they struggle with self-control, procrastinate, prefer the status quo, and are social beings. It is on the basis of this “not so rational” agent that the behavioral sciences offer themselves as a complement to the traditional way of making policy.

For example: we are approaching two years since the World Health Organization declared the Covid-19 pandemic on March 11, 2020. These have been challenging years for governments, companies, and individuals. Although 2021 showed signs of recovery, there is still a long, hard road back to even pre-pandemic conditions. Not only in health, but also in rebalancing economies, raising productivity, restoring jobs, closing learning gaps, improving the business environment, fighting climate change, and so on. Obviously, this is no simple task for governments and organizations. Could we face these challenges differently and adapt the way we make public policy so that it becomes more efficient and cost-effective, increasing its impact and reach?

The answer is yes. The success of public policy depends, in part, on decision-making and on behavior change. Focusing more on people and on the context in which they decide therefore becomes ever more imperative. It is important to consider how people relate to one another and to institutions, how they respond to policies, and to know well the environment in which they live.

The behavioral approach is scientific and combines concepts from psychology, economics, anthropology, sociology, and neuroscience. Context-driven and evidence-based, it reconciles theory and practice across many sectors. Its application can range from a simple change in the decision-making environment (choice architecture), to a “nudge” that steers people toward the decision that is best for them while preserving freedom of choice, to broader efforts aimed at changing habits. Beyond that, it can be key to tackling policy challenges such as school dropout, domestic and gender-based violence, tax compliance, corruption, natural disasters, and climate change, among others.

The use of behavioral insights in public policy is no longer new. More than a decade has passed since the 2008 publication of the book Nudge (“Nudge: como tomar melhores decisões sobre saúde, dinheiro e felicidade,” in the Portuguese edition), which propelled the field spectacularly. Concepts from psychology, already widely discussed and accepted for decades, were applied to the context of economic decisions, and behavioral economics/science became established.

Following the field’s expansion and growing relevance, the World Bank launched the 2015 World Development Report: Mind, Society, and Behavior. In 2016 it created its own behavioral unit, eMBeD (the Mind, Behavior, and Development Unit), and it has since promoted the systematic use of behavioral insights in development policies and projects, supporting many countries in solving problems quickly and at scale.

In Brazil, we have worked on training public managers to use behavioral insights, contributed to research such as the Survey on Ethics and Corruption in the Federal Public Service (World Bank and CGU), and provided technical support in identifying evidence, for example to inform solutions for increasing saving among low-income households. Our specialists have also prepared behavioral diagnostics to understand why customers do not pay their bills on time or fail to connect to the sewage system. We have run experiments with behaviorally informed messages to encourage the use of digital payment methods and on-time bill payment in the water and sanitation sector. The latter showed positive results, with the potential to raise revenue at low cost: messages highlighting consequences and reciprocity, for example, increased both on-time payments and the total amount paid. For every thousand customers who received the SMS with behavioral insights, six to eleven more paid their bills. For 2022, activities are planned, as part of a development project, to use behavioral insights to reduce the dumping of waste into drainage systems and to encourage more conscientious use of public spaces.

The behavioral sciences are not the solution to the great global challenges. But their potential as a complement in building public policy deserves emphasis. It is up to policymakers to take advantage of this moment of greater maturity in the field to expand their knowledge. It is also worth riding the rising wave of complementary areas, such as design and data science, to center attention on the individual and on the decision context and, transparently and on the basis of evidence, influence choices and promote behavior change, increasing the impact of public policies so as not only to restore pre-Covid conditions but to further improve the lives and well-being of everyone, especially the poorest and most vulnerable.

This column was written in collaboration with my World Bank colleagues Juliana Neves Soares Brescianini, operations analyst, and Luis A. Andrés, program leader for the Infrastructure sector.

    Climate Change Enters the Therapy Room (New York Times)

    nytimes.com

    Ellen Barry


Alina Black, a mother of two in Portland, Ore., sought a therapist who specialized in climate anxiety to address her mounting panics. “I feel like I have developed a phobia to my way of life,” she said. Credit: Mason Trinca for The New York Times
    Ten years ago, psychologists proposed that a wide range of people would suffer anxiety and grief over climate. Skepticism about that idea is gone.

    Published Feb. 6, 2022; Updated Feb. 7, 2022

    PORTLAND, Ore. — It would hit Alina Black in the snack aisle at Trader Joe’s, a wave of guilt and shame that made her skin crawl.

    Something as simple as nuts. They came wrapped in plastic, often in layers of it, that she imagined leaving her house and traveling to a landfill, where it would remain through her lifetime and the lifetime of her children.

She longed, really longed, to make less of a mark on the earth. But she also had a baby in diapers, and a full-time job, and a 5-year-old who wanted snacks. At the age of 37, she felt these conflicting forces slowly closing on her, like a set of jaws.

    In the early-morning hours, after nursing the baby, she would slip down a rabbit hole, scrolling through news reports of droughts, fires, mass extinction. Then she would stare into the dark.

    It was for this reason that, around six months ago, she searched “climate anxiety” and pulled up the name of Thomas J. Doherty, a Portland psychologist who specializes in climate.

    A decade ago, Dr. Doherty and a colleague, Susan Clayton, a professor of psychology at the College of Wooster, published a paper proposing a new idea. They argued that climate change would have a powerful psychological impact — not just on the people bearing the brunt of it, but on people following it through news and research. At the time, the notion was seen as speculative.

    That skepticism is fading. Eco-anxiety, a concept introduced by young activists, has entered a mainstream vocabulary. And professional organizations are hurrying to catch up, exploring approaches to treating anxiety that is both existential and, many would argue, rational.

    Though there is little empirical data on effective treatments, the field is expanding swiftly. The Climate Psychology Alliance provides an online directory of climate-aware therapists; the Good Grief Network, a peer support network modeled on 12-step addiction programs, has spawned more than 50 groups; professional certification programs in climate psychology have begun to appear.

    As for Dr. Doherty, so many people now come to him for this problem that he has built an entire practice around them: an 18-year-old student who sometimes experiences panic attacks so severe that she can’t get out of bed; a 69-year-old glacial geologist who is sometimes overwhelmed with sadness when he looks at his grandchildren; a man in his 50s who erupts in frustration over his friends’ consumption choices, unable to tolerate their chatter about vacations in Tuscany.

    The field’s emergence has met resistance, for various reasons. Therapists have long been trained to keep their own views out of their practices. And many leaders in mental health maintain that anxiety over climate change is no different, clinically, from anxiety caused by other societal threats, like terrorism or school shootings. Some climate activists, meanwhile, are leery of viewing anxiety over climate as dysfunctional thinking — to be soothed or, worse, cured.

    But Ms. Black was not interested in theoretical arguments; she needed help right away.

    She was no Greta Thunberg type, but a busy, sleep-deprived working mom. Two years of wildfires and heat waves in Portland had stirred up something sleeping inside her, a compulsion to prepare for disaster. She found herself up at night, pricing out water purification systems. For her birthday, she asked for a generator.

    She understands how privileged she is; she describes her anxiety as a “luxury problem.” But still: The plastic toys in the bathtub made her anxious. The disposable diapers made her anxious. She began to ask herself, what is the relationship between the diapers and the wildfires?

    “I feel like I have developed a phobia to my way of life,” she said.

Thomas Doherty in Portland, Ore. He specializes in climate-related distress, part of the field of ecopsychology, which was, as he put it, a “woo-woo area” until recently.
Credit: Mason Trinca for The New York Times

    Last fall, Ms. Black logged on for her first meeting with Dr. Doherty, who sat, on video, in front of a large, glossy photograph of evergreens.

    At 56, he is one of the most visible authorities on climate in psychotherapy, and he hosts a podcast, “Climate Change and Happiness.” In his clinical practice, he reaches beyond standard treatments for anxiety, like cognitive behavioral therapy, to more obscure ones, like existential therapy, conceived to help people fight off despair, and ecotherapy, which explores the client’s relationship to the natural world.

    He did not take the usual route to psychology; after graduating from Columbia University, he hitchhiked across the country to work on fishing boats in Alaska, then as a whitewater rafting guide — “the whole Jack London thing” — and as a Greenpeace fund-raiser. Entering graduate school in his 30s, he fell in naturally with the discipline of “ecopsychology.”

    At the time, ecopsychology was, as he put it, a “woo-woo area,” with colleagues delving into shamanic rituals and Jungian deep ecology. Dr. Doherty had a more conventional focus, on the physiological effects of anxiety. But he had picked up on an idea that was, at that time, novel: that people could be affected by environmental decay even if they were not physically caught in a disaster.

    Recent research has left little doubt that this is happening. A 10-country survey of 10,000 people aged 16 to 25 published last month in The Lancet found startling rates of pessimism. Forty-five percent of respondents said worry about climate negatively affected their daily life. Three-quarters said they believed “the future is frightening,” and 56 percent said “humanity is doomed.”

    The blow to young people’s confidence appears to be more profound than with previous threats, such as nuclear war, Dr. Clayton said. “We’ve definitely faced big problems before, but climate change is described as an existential threat,” she said. “It undermines people’s sense of security in a basic way.”

    Caitlin Ecklund, 37, a Portland therapist who finished graduate school in 2016, said that nothing in her training — in subjects like buried trauma, family systems, cultural competence and attachment theory — had prepared her to help the young women who began coming to her describing hopelessness and grief over climate. She looks back on those first interactions as “misses.”

    “Climate stuff is really scary, so I went more toward soothing or normalizing,” said Ms. Ecklund, who is part of a group of therapists convened by Dr. Doherty to discuss approaches to climate. It has meant, she said, “deconstructing some of that formal old-school counseling that has implicitly made things people’s individual problems.”

    Many of Dr. Doherty’s clients sought him out after finding it difficult to discuss climate with a previous therapist.

    Caroline Wiese, 18, described her previous therapist as “a typical New Yorker who likes to follow politics and would read The New York Times, but also really didn’t know what a Keeling Curve was,” referring to the daily record of carbon dioxide concentration.

    Ms. Wiese had little interest in “Freudian B.S.” She sought out Dr. Doherty for help with a concrete problem: The data she was reading was sending her into “multiday panic episodes” that interfered with her schoolwork.

    In their sessions, she has worked to carefully manage what she reads, something she says she needs to sustain herself for a lifetime of work on climate. “Obviously, it would be nice to be happy,” she said, “but my goal is more to just be able to function.”

    Frank Granshaw, 69, a retired professor of geology, wanted help hanging on to what he calls “realistic hope.”

    He recalls a morning, years ago, when his granddaughter crawled into his lap and fell asleep, and he found himself overwhelmed with emotion, considering the changes that would occur in her lifetime. These feelings, he said, are simply easier to unpack with a psychologist who is well versed on climate. “I appreciate the fact that he is dealing with emotions that are tied into physical events,” he said.

    As for Ms. Black, she had never quite accepted her previous therapist’s vague reassurances. Once she made an appointment with Dr. Doherty, she counted the days. She had a wild hope that he would say something that would simply cause the weight to lift.

    That didn’t happen. Much of their first session was devoted to her doomscrolling, especially during the nighttime hours. It felt like a baby step.

    “Do I need to read this 10th article about the climate summit?” she practiced asking herself. “Probably not.”

    Several sessions came and went before something really happened.

    Ms. Black remembers going into an appointment feeling distraught. She had been listening to radio coverage of the international climate summit in Glasgow last fall and heard a scientist interviewed. What she perceived in his voice was flat resignation.

    That summer, Portland had been trapped under a high-pressure system known as a “heat dome,” sending temperatures to 116 degrees. Looking at her own children, terrible images flashed through her head, like a field of fire. She wondered aloud: Were they doomed?

    Dr. Doherty listened quietly. Then he told her, choosing his words carefully, that the rate of climate change suggested by the data was not as swift as what she was envisioning.

    “In the future, even with worst-case scenarios, there will be good days,” he told her, according to his notes. “Disasters will happen in certain places. But, around the world, there will be good days. Your children will also have good days.”

    At this, Ms. Black began to cry.

    She is a contained person — she tends to deflect frightening thoughts with dark humor — so this was unusual. She recalled the exchange later as a threshold moment, the point when the knot in her chest began to loosen.

    “I really trust that when I hear information from him, it’s coming from a deep well of knowledge,” she said. “And that gives me a lot of peace.”

    Dr. Doherty recalled the conversation as “cathartic in a basic way.” It was not unusual, in his practice; many clients harbor dark fears about the future and have no way to express them. “It is a terrible place to be,” he said.

    A big part of his practice is helping people manage guilt over consumption: He takes a critical view of the notion of a climate footprint, a construct he says was created by corporations in order to shift the burden to individuals.

    He uses elements of cognitive behavioral therapy, like training clients to manage their news intake and look critically at their assumptions.

    He also draws on logotherapy, or existential therapy, a field founded by Viktor E. Frankl, who survived German concentration camps and then wrote “Man’s Search for Meaning,” which described how prisoners in Auschwitz were able to live fulfilling lives.

    “I joke, you know it’s bad when you’ve got to bring out the Viktor Frankl,” he said. “But it’s true. It is exactly right. It is of that scale. It is that consolation: that ultimately I make meaning, even in a meaningless world.”

    At times, over the last few months, Ms. Black could feel some of the stress easing.

    On weekends, she practices walking in the woods with her family without allowing her mind to flicker to the future. Her conversations with Dr. Doherty, she said, had “opened up my aperture to the idea that it’s not really on us as individuals to solve.”

    Sometimes, though, she’s not sure that relief is what she wants. Following the news about the climate feels like an obligation, a burden she is meant to carry, at least until she is confident that elected officials are taking action.

    Her goal is not to be released from her fears about the warming planet, or paralyzed by them, but something in between: She compares it to someone with a fear of flying, who learns to manage their fear well enough to fly.

    “On a very personal level,” she said, “the small victory is not thinking about this all the time.”

    Another tool in the fight against climate change: storytelling (MIT Technology Review)

    technologyreview.com

Stories may be the most overlooked climate solution of all.

By Devi Lockwood

December 23, 2021

There is a lot of shouting about climate change, especially in North America and Europe. This makes it easy for the rest of the world to fall into a kind of silence—for people outside the West to assume that they have nothing to add and should let the so-called “experts” speak. But we all need to be talking about climate change and amplifying the voices of those suffering the most.

    Climate science is crucial, but by contextualizing that science with the stories of people actively experiencing climate change, we can begin to think more creatively about technological solutions.

    This needs to happen not only at major international gatherings like COP26, but also in an everyday way. In any powerful rooms where decisions are made, there should be people who can speak firsthand about the climate crisis. Storytelling is an intervention into climate silence, an invitation to use the ancient human technology of connecting through language and narrative to counteract inaction. It is a way to get often powerless voices into powerful rooms. 

    That’s what I attempted to do by documenting stories of people already experiencing the effects of a climate in crisis. 

    In 2013, I was living in Boston during the marathon bombing. The city was put on lockdown, and when it lifted, all I wanted was to go outside: to walk and breathe and hear the sounds of other people. I needed to connect, to remind myself that not everyone is murderous. In a fit of inspiration, I cut open a broccoli box and wrote “Open call for stories” in Sharpie. 

    I wore the cardboard sign around my neck. People mostly stared. But some approached me. Once I started listening to strangers, I didn’t want to stop. 

    That summer, I rode my bicycle down the Mississippi River on a mission to listen to any stories that people had to share. I brought the sign with me. One story was so sticky that I couldn’t stop thinking about it for months, and it ultimately set me off on a trip around the world.

    “We fight for the protection of our levees. We fight for our marsh every time we have a hurricane. I couldn’t imagine living anywhere else.” 

    I met 57-year-old Franny Connetti 80 miles south of New Orleans, when I stopped in front of her office to check the air in my tires; she invited me in to get out of the afternoon sun. Franny shared her lunch of fried shrimp with me. Between bites she told me how Hurricane Isaac had washed away her home and her neighborhood in 2012. 

    Despite that tragedy, she and her husband moved back to their plot of land, in a mobile home, just a few months after the storm.

    “We fight for the protection of our levees. We fight for our marsh every time we have a hurricane,” she told me. “I couldn’t imagine living anywhere else.” 

    Twenty miles ahead, I could see where the ocean lapped over the road at high tide. “Water on Road,” an orange sign read. Locals jokingly refer to the endpoint of Louisiana State Highway 23 as “The End of the World.” Imagining the road I had been biking underwater was chilling.

    Devi with sign
    The author at Monasavu Dam in Fiji in 2014.

    Here was one front line of climate change, one story. What would it mean, I wondered, to put this in dialogue with stories from other parts of the world—from other front lines with localized impacts that were experienced through water? My goal became to listen to and amplify those stories.

    Water is how most of the world will experience climate change. It’s not a human construct, like a degree Celsius. It’s something we acutely see and feel. When there’s not enough water, crops die, fires rage, and people thirst. When there’s too much, water becomes a destructive force, washing away homes and businesses and lives. It’s almost always easier to talk about water than to talk about climate change. But the two are deeply intertwined.

    I also set out to address another problem: the language we use to discuss climate change is often abstract and inaccessible. We hear about feet of sea-level rise or parts per million of carbon dioxide in the atmosphere, but what does this really mean for people’s everyday lives? I thought storytelling might bridge this divide. 

    One of the first stops on my journey was Tuvalu, a low-lying coral atoll nation in the South Pacific, 585 miles south of the equator. Home to around 10,000 people, Tuvalu is on track to become uninhabitable in my lifetime. 

    In 2014 Tauala Katea, a meteorologist, opened his computer to show me an image of a recent flood on one island. Seawater had bubbled up under the ground near where we were sitting. “This is what climate change looks like,” he said. 

    “In 2000, Tuvaluans living in the outer islands noticed that their taro and pulaka crops were suffering,” he said. “The root crops seemed rotten, and the size was getting smaller and smaller.” Taro and pulaka, two starchy staples of Tuvaluan cuisine, are grown in pits dug underground. 

    Tauala and his team traveled to the outer islands to take soil samples. The culprit was saltwater intrusion linked to sea-level rise. The seas have been rising four millimeters per year since measurements began in the early 1990s. While that might sound like a small amount, this change has a dramatic impact on Tuvaluans’ access to drinking water. The highest point is only 13 feet above sea level.
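A quick back-of-the-envelope check helps explain why a rate measured in millimetres matters. The rate and elevation below are the figures quoted above; the time span is an assumption chosen only for illustration. The cumulative rise is small next to the island’s highest point, which is why the damage shows up first underground, as salt contaminating the groundwater, rather than as permanent inundation.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the text.
rate_mm_per_year = 4            # observed sea-level rise around Tuvalu
years = 2022 - 1993             # assumed span for "since the early 1990s"
highest_point_ft = 13           # highest point of the atoll

cumulative_rise_cm = rate_mm_per_year * years / 10
highest_point_m = highest_point_ft * 0.3048

print(f"Cumulative rise: about {cumulative_rise_cm:.0f} cm over {years} years")
print(f"Highest point:   about {highest_point_m:.1f} m above sea level")
# Roughly 12 cm of rise against roughly 4 m of elevation: the sea does not
# need to cover the island to make it uninhabitable; it only needs to reach
# the freshwater lens from below.
```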

A lot has changed in Tuvalu as a result. The freshwater lens, a layer of groundwater that floats above denser seawater, has become salty and contaminated. Thatched roofs and freshwater wells are now a thing of the past. Each home now has a water tank attached to a corrugated-iron roof by a gutter. All the water for washing, cooking, and drinking now comes from the rain. This rainwater is boiled for drinking and used to wash clothes and dishes, as well as for bathing. The wells have been repurposed as trash heaps.

At times, families have to make tough decisions about how to allocate water. Angelina, a mother of three, told me that during a drought a few years ago, her middle daughter, Siulai, was only a few months old. She, her husband, and their oldest daughter could swim in the sea to wash themselves and their clothes. “We only saved water to drink and cook,” she said. But her newborn’s skin was too delicate to bathe in the ocean. The salt water would give her a horrible rash. That meant Angelina had to decide between having water to drink and having water to bathe her child.

    The stories I heard about water and climate change in Tuvalu reflected a sharp division along generational lines. Tuvaluans my age—like Angelina—don’t see their future on the islands and are applying for visas to live in New Zealand. Older Tuvaluans see climate change as an act of God and told me they couldn’t imagine living anywhere else; they didn’t want to leave the bones of their ancestors, which were buried in their front yards. Some things just cannot be moved. 

    Organizations like the United Nations Development Programme are working to address climate change in Tuvalu by building seawalls and community water tanks. Ultimately these adaptations seem to be prolonging the inevitable. It is likely that within my lifetime, many Tuvaluans will be forced to call somewhere else home. 

    Tuvalu shows how climate change exacerbates both food and water insecurity—and how that insecurity drives migration. I saw this in many other places. Mess with the amount of water available in one location, and people will move.

    In Thailand I met a modern dancer named Sun who moved to Bangkok from the rural north. He relocated to the city in part to practice his art, but also to take refuge from unpredictable rain patterns. Farming in Thailand is governed by the seasonal monsoons, which dump rain, fill river basins, and irrigate crops from roughly May to September. Or at least they used to. When we spoke in late May 2016, it was dry in Thailand. The rains were delayed. Water levels in the country’s biggest dams plummeted to less than 10% of their capacity—the worst drought in two decades.

    “Right now it’s supposed to be the beginning of the rainy season, but there is no rain,” Sun told me. “How can I say it? I think the balance of the weather is changing. Some parts have a lot of rain, but some parts have none.” He leaned back in his chair, moving his hands like a fulcrum scale to express the imbalance. “That is the problem. The people who used to be farmers have to come to Bangkok because they want money and they want work,” he said. “There is no more work because of the weather.” 

    family under sign in Nunavut
    A family celebrates Nunavut Day near the waterfront in Igloolik, Nunavut, in 2018.

    Migration to the city, in other words, is hastened by the rain. Any tech-driven climate solutions that fail to address climate migration—so central to the personal experience of Sun and many others in his generation around the world—will be at best incomplete, and at worst potentially dangerous. Solutions that address only one region, for example, could exacerbate migration pressures in another. 

    I heard stories about climate-­driven food and water insecurity in the Arctic, too. Igloolik, Nunavut, 1,400 miles south of the North Pole, is a community of 1,700 people. Marie Airut, a 71-year-old elder, lives by the water. We spoke in her living room over cups of black tea.

    “My husband died recently,” she told me. But when he was alive, they went hunting together in every season; it was their main source of food. “I’m not going to tell you what I don’t know. I’m going to tell you only the things that I have seen,” she said. In the 1970s and ’80s, the seal holes would open in late June, an ideal time for hunting baby seals. “But now if I try to go out hunting at the end of June, the holes are very big and the ice is really thin,” Marie told me. “The ice is melting too fast. It doesn’t melt from the top; it melts from the bottom.”

    When the water is warmer, animals change their movement. Igloolik has always been known for its walrus hunting. But in recent years, hunters have had trouble reaching the animals. “I don’t think I can reach them anymore, unless you have 70 gallons of gas. They are that far now, because the ice is melting so fast,” Marie said. “It used to take us half a day to find walrus in the summer, but now if I go out with my boys, it would probably take us two days to get some walrus meat for the winter.” 

    Marie and her family used to make fermented walrus every year, “but this year I told my sons we’re not going walrus hunting,” she said. “They are too far.”

    Devi Lockwood is the Ideas editor at Rest of World and the author of 1,001 Voices on Climate Change.

    The Water issue

    This story was part of our January 2022 issue

Climate crisis generates eco-anxiety in young people fearful for the planet’s future (Folha de S.Paulo)

www1.folha.uol.com.br

Isabella Menon – January 9, 2022

According to one expert, the phenomenon needs to be watched carefully so that fear does not turn into denialism

Talking to this reporter by phone, lawyer Leandro Luz, 29, confesses that he is nervous. The anguish in his voice has to do with the topic of the conversation, which touches on one of his greatest fears: the climate crisis.

Reading, hearing, and talking about rising global temperatures, fires in the Amazon, melting glaciers, and ever more frequent environmental disasters make Luz anxious. When confronted with the subject, he feels his heart race and breaks into a cold sweat on his palms and back.

Until recently he did not quite understand what he was feeling, until he discovered that he suffers from so-called eco-anxiety. The term, which appears in a report released by the American Psychological Association in 2017 and was added to the Oxford dictionary at the end of October 2021, is described as a chronic fear of environmental destruction, accompanied by guilt over one’s individual contribution and its impact on future generations.

The first time Luz paid attention to climate issues was after the tsunami in Fukushima, Japan, when giant waves killed 18,000 people. Today he lives in Salvador, but he says he is considering moving inland. “I talk with my girlfriend about living far from the coast, but I know those places will be affected too,” says Luz, who describes living in a great dilemma.

“I don’t know how to behave over the next 30 years. I try to avoid unbridled consumption and producing a lot of plastic waste, but I know these are very small actions that, broadly speaking, will not change reality.”

The lawyer also criticizes the government’s stance on the climate crisis. In his view, for example, the authorities’ priority should be changing Brazil’s energy mix. “But we are going in the opposite direction: we are back to discussing coal-fired plants for power generation in Brazil, which is completely rudimentary.”

Like Leandro Luz, high school student Mariana dos Santos, 16, remembers crying copiously as a child after watching news reports on climate change. Today, she says that although she no longer falls apart at the news, anxiety still shakes her from time to time.

She often fears, for example, rising sea levels. “I think about the cities that could disappear and the consequences that could follow. It becomes a snowball. I know there isn’t much that can be done, and that is what triggers the despair,” she says.

Environmental management student Maria Antônia Luna, 20, also discovered recently that the tightness in her chest and the shortness of breath she felt when reading news about the fire that swept the Pantanal in 2020 were eco-anxiety.

“The feeling is an anguish that nothing will get better,” she says. She is now looking for therapy to help her face the distress related to the climate crisis, a frequent topic in her degree course.

Mariana, Maria, and Leandro are not isolated cases. A study published in The Lancet Planetary Health in early September analyzed climate anxiety among young people in ten countries, including Brazil, the United States, India, the Philippines, Finland, and France.

The article, a preprint (not yet peer-reviewed), surveyed 10,000 young people aged 16 to 25 and found that most of them feel fear, anger, sadness, despair, guilt, and shame in the face of ecological problems.

In all, 58% consider that their governments have betrayed young people and future generations. Only the French and Finnish respondents did not, in their majority, agree with the statement. When the numbers are broken down by country, the sense of betrayal by both adults and governments is strongest among Brazilians (77%), followed by Indians (66%).

For Alexandre Araújo Costa, a physicist who has researched the climate crisis for 20 years, the survey also points to something hopeful: a greater potential for awareness among the young.

“They feel that Brazil is doing nothing to avoid the current situation, and that can be good for mobilization,” says Costa. According to him, it is no longer possible to keep the subject from being debated. “The mental health consequences are worrying, but we cannot keep our children and young people in a bubble, telling them everything is fine, when we are at risk of losing the Amazon,” he says.

The professor also argues that the situation should not be seen only as individual suffering, since everyone will end up affected in some way by the environmental crisis. “We need to replace this government, which shrugs at the problem or is captured by economic interests that only seek short-term profit,” he says.

Biologist Beatriz Ramos takes the same line as Costa. For her, the danger of eco-anxiety is the urge not to know what is happening. “When we pull away from the facts, we can slip into a process of denial.”

“We need to say what is going to happen, how we can prevent it, what the possible solutions are, and explain that extreme events will increase, but that there are ways to adapt and we still have time to mitigate it. We cannot act only on optimism or only on an apocalyptic feeling,” she says.

After a deep depression triggered by the sense of environmental degradation, ecologist Ana Lúcia Tourinho understood that the only way she would feel better was to keep working on the front line. That was one of the reasons she went to work in Sinop (MT), a region that suffered fires and dense clouds of smoke in 2020.

“I breathe wildfire smoke. It is sad, but it is the way I found not to hide. The feeling of powerlessness shrinks; I feel I am not standing still watching the destruction,” says Tourinho, who recounts that in the worst moments of last year she witnessed harrowing scenes of animals dying in agony.

The anguish over the climate crisis seems ever more palpable and affects the young above all. In Portugal, according to a report by the Lusa news agency, the term poses a new challenge for psychologists. In Brazil, experts say, use of the term is still only emerging.

Anthropologist Rodrigo Toniol, for example, does not believe the diagnosis will catch on. “I don’t think we will reach the point where it is a diagnosis at hand for every psychiatrist in the consulting room, but I do think it is a relevant symptom that points to problems linked to the lack of a social pact,” he says.

Psychoanalyst Christian Dunker, a professor at the Institute of Psychology at USP, says the effects of climate-driven anxiety are collateral. What he actually notices in his practice, Dunker reflects, is a growing sense of injustice about situations that demand action that is not being taken, such as social inequality, racism, homophobia, and gender inequality.

“Within this shift in our indignation emerges a situation in which we come to see the planet as someone rather than something,” he says.

    Can you think yourself young? (The Guardian)

    theguardian.com

    David Robson, Sun 2 Jan 2022 12.00 GMT

    Illustration by Observer design/Getty/Freepik.

    Research shows that a positive attitude to ageing can lead to a longer, healthier life, while negative beliefs can have hugely detrimental effects

    For more than a decade, Paddy Jones has been wowing audiences across the world with her salsa dancing. She came to fame on the Spanish talent show Tú Sí Que Vales (You’re Worth It) in 2009 and has since found success in the UK, through Britain’s Got Talent; in Germany, on Das Supertalent; in Argentina, on the dancing show Bailando; and in Italy, where she performed at the Sanremo music festival in 2018 alongside the band Lo Stato Sociale.

    Jones also happens to be in her mid-80s, making her the world’s oldest acrobatic salsa dancer, according to Guinness World Records. Growing up in the UK, Jones had been a keen dancer and had performed professionally before she married her husband, David, at 22 and had four children. It was only in retirement that she began dancing again – to widespread acclaim. “I don’t plead my age because I don’t feel 80 or act it,” Jones told an interviewer in 2014.

    According to a wealth of research that now spans five decades, we would all do well to embrace the same attitude – since it can act as a potent elixir of life. People who see the ageing process as a potential for personal growth tend to enjoy much better health into their 70s, 80s and 90s than people who associate ageing with helplessness and decline, differences that are reflected in their cells’ biological ageing and their overall life span.

Salsa dancer Paddy Jones, centre. Photograph: Alberto Teren

    Of all the claims I have investigated for my new book on the mind-body connection, the idea that our thoughts could shape our ageing and longevity was by far the most surprising. The science, however, turns out to be incredibly robust. “There’s just such a solid base of literature now,” says Prof Allyson Brothers at Colorado State University. “There are different labs in different countries using different measurements and different statistical approaches and yet the answer is always the same.”

    If I could turn back time

    The first hints that our thoughts and expectations could either accelerate or decelerate the ageing process came from a remarkable experiment by the psychologist Ellen Langer at Harvard University.

    In 1979, she asked a group of 70- and 80-year-olds to complete various cognitive and physical tests, before taking them to a week-long retreat at a nearby monastery that had been redecorated in the style of the late 1950s. Everything at the location, from the magazines in the living room to the music playing on the radio and the films available to watch, was carefully chosen for historical accuracy.

    The researchers asked the participants to live as if it were 1959. They had to write a biography of themselves for that era in the present tense and they were told to act as independently as possible. (They were discouraged from asking for help to carry their belongings to their room, for example.) The researchers also organised twice-daily discussions in which the participants had to talk about the political and sporting events of 1959 as if they were currently in progress – without talking about events since that point. The aim was to evoke their younger selves through all these associations.

    To create a comparison, the researchers ran a second retreat a week later with a new set of participants. While factors such as the decor, diet and social contact remained the same, these participants were asked to reminisce about the past, without overtly acting as if they were reliving that period.

    Most of the participants showed some improvements from the baseline tests to the after-retreat ones, but it was those in the first group, who had more fully immersed themselves in the world of 1959, who saw the greatest benefits. Sixty-three per cent made a significant gain on the cognitive tests, for example, compared to just 44% in the control condition. Their vision became sharper, their joints more flexible and their hands more dextrous, as some of the inflammation from their arthritis receded.

As enticing as these findings might seem, Langer’s study was based on a very small sample size. Extraordinary claims need extraordinary evidence and the idea that our mindset could somehow influence our physical ageing is about as extraordinary as scientific theories come.

    Becca Levy, at the Yale School of Public Health, has been leading the way to provide that proof. In one of her earliest – and most eye-catching – papers, she examined data from the Ohio Longitudinal Study of Aging and Retirement that examined more than 1,000 participants since 1975.

    The participants’ average age at the start of the survey was 63 years old and soon after joining they were asked to give their views on ageing. For example, they were asked to rate their agreement with the statement: “As you get older, you are less useful”. Quite astonishingly, Levy found the average person with a more positive attitude lived on for 22.6 years after the study commenced, while the average person with poorer interpretations of ageing survived for just 15 years. That link remained even after Levy had controlled for their actual health status at the start of the survey, as well as other known risk factors, such as socioeconomic status or feelings of loneliness, which could influence longevity.

    The implications of the finding are as remarkable today as they were in 2002, when the study was first published. “If a previously unidentified virus was found to diminish life expectancy by over seven years, considerable effort would probably be devoted to identifying the cause and implementing a remedy,” Levy and her colleagues wrote. “In the present case, one of the likely causes is known: societally sanctioned denigration of the aged.”

    Later studies have since reinforced the link between people’s expectations and their physical ageing, while dismissing some of the more obvious – and less interesting – explanations. You might expect that people’s attitudes would reflect their decline rather than contribute to the degeneration, for example. Yet many people will endorse certain ageist beliefs, such as the idea that “old people are helpless”, long before they should have started experiencing age-related disability themselves. And Levy has found that those kinds of views, expressed in people’s mid-30s, can predict their subsequent risk of cardiovascular disease up to 38 years later.

    The most recent findings suggest that age beliefs may play a key role in the development of Alzheimer’s disease. Tracking 4,765 participants over four years, the researchers found that positive expectations of ageing halved the risk of developing the disease, compared to those who saw old age as an inevitable period of decline. Astonishingly, this was even true of people who carried a harmful variant of the APOE gene, which is known to render people more susceptible to the disease. The positive mindset can counteract an inherited misfortune, protecting against the build-up of the toxic plaques and neuronal loss that characterise the disease.

    How could this be?

    Behaviour is undoubtedly important. If you associate age with frailty and disability, you may be less likely to exercise as you get older and that lack of activity is certainly going to increase your predisposition to many illnesses, including heart disease and Alzheimer’s.

    Importantly, however, our age beliefs can also have a direct effect on our physiology. Elderly people who have been primed with negative age stereotypes tend to have higher systolic blood pressure in response to challenges, while those who have seen positive stereotypes demonstrate a more muted reaction. This makes sense: if you believe that you are frail and helpless, small difficulties will start to feel more threatening. Over the long term, this heightened stress response increases levels of the hormone cortisol and bodily inflammation, which could both raise the risk of ill health.

    The consequences can even be seen within the nuclei of individual cells, where our genetic blueprint is stored. Our genes are wrapped tightly in each cell’s chromosomes, which are capped by tiny protective structures called telomeres that keep the DNA stable and stop it from becoming frayed and damaged. Telomeres tend to shorten as we age, which reduces their protective abilities and can cause the cell to malfunction. In people with negative age beliefs, that process seems to be accelerated – their cells look biologically older. In those with positive attitudes, it is much slower – their cells look younger.

    For many scientists, the link between age beliefs and long-term health and longevity is practically beyond doubt. “It’s now very well established,” says Dr David Weiss, who studies the psychology of ageing at Martin-Luther University of Halle-Wittenberg in Germany. And it has critical implications for people of all generations.

    Birthday cards sent to Captain Tom Moore for his 100th birthday – many cards for older people have a less respectful tone. Photograph: Shaun Botterill/Getty Images

    Our culture is saturated with messages that reinforce damaging age beliefs. Just consider greetings cards, which commonly play on images of confused and forgetful older people. “The other day, I went to buy a happy 70th birthday card for a friend and I couldn’t find a single one that wasn’t a joke,” says Martha Boudreau, the chief communications officer of AARP, a special interest group (formerly known as the American Association of Retired Persons) that focuses on issues affecting the over-50s.

    She would like to see greater awareness – and intolerance – of age stereotypes, in much the same way that people now show greater sensitivity to sexism and racism. “Celebrities, thought leaders and influencers need to step forward,” says Boudreau.

    In the meantime, we can try to rethink our perceptions of our own ageing. Various studies show that our mindsets are malleable. By learning to reject fatalistic beliefs and appreciate some of the positive changes that come with age, we may avoid the amplified stress responses that arise from exposure to negative stereotypes and we may be more motivated to exercise our bodies and minds and to embrace new challenges.

    We could all, in other words, learn to live like Paddy Jones.

    When I interviewed Jones, she was careful to emphasise the potential role of luck in her good health. But she agrees that many people have needlessly pessimistic views of what they are capable of in what could be their golden years, and she encourages them to question those supposed limits. “If you feel there’s something you want to do, and it inspires you, try it!” she told me. “And if you find you can’t do it, then look for something else you can achieve.”

    Whatever our current age, that’s surely a winning attitude that will set us up for greater health and happiness for decades to come.

    This is an edited extract from The Expectation Effect: How Your Mindset Can Transform Your Life by David Robson, published by Canongate on 6 January (£18.99).

    How to think about weird things (AEON)

    From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer

    by Stephen Law

    Stephen Law is a philosopher and author. He is director of philosophy at the Department of Continuing Education at the University of Oxford, and editor of Think, the Royal Institute of Philosophy journal. He researches primarily in the fields of philosophy of religion, philosophy of mind, Ludwig Wittgenstein, and essentialism. His books for a popular audience include The Philosophy Gym (2003), The Complete Philosophy Files (2000) and Believing Bullshit (2011). He lives in Oxford.

    Edited by Nigel Warburton

    10 NOVEMBER 2021

    Many people believe in extraordinary hidden beings, including demons, angels, spirits and gods. Plenty also believe in supernatural powers, including psychic abilities, faith healing and communication with the dead. Conspiracy theories are also popular, including that the Holocaust never happened and that the terrorist attacks on the United States of 11 September 2001 were an inside job. And, of course, many trust in alternative medicines such as homeopathy, the effectiveness of which seems to run contrary to our scientific understanding of how the world actually works.

    Such beliefs are widely considered to be at the ‘weird’ end of the spectrum. But, of course, just because a belief involves something weird doesn’t mean it’s not true. As science keeps reminding us, reality often is weird. Quantum mechanics and black holes are very weird indeed. So, while ghosts might be weird, that’s no reason to dismiss belief in them out of hand.

    I focus here on a particular kind of ‘weird’ belief: not only are these beliefs that concern the enticingly odd, they’re also beliefs that the general public finds particularly difficult to assess.

    Almost everyone agrees that, when it comes to black holes, scientists are the relevant experts, and scientific investigation is the right way to go about establishing whether or not they exist. However, when it comes to ghosts, psychic powers or conspiracy theories, we often hold wildly divergent views not only about how reasonable such beliefs are, but also about what might count as strong evidence for or against them, and who the relevant authorities are.

    Take homeopathy, for example. Is it reasonable to focus only on what scientists have to say? Shouldn’t we give at least as much weight to the testimony of the many people who claim to have benefitted from homeopathic treatment? While most scientists are sceptical about psychic abilities, what of the thousands of reports from people who claim to have received insights from psychics who could only have known what they did if they really do have some sort of psychic gift? To what extent can we even trust the supposed scientific ‘experts’? Might not the scientific community itself be part of a conspiracy to hide the truth about Area 51 in Nevada, Earth’s flatness or the 9/11 terrorist attacks being an inside job?

    Most of us really struggle when it comes to assessing such ‘weird’ beliefs – myself included. Of course, we have our hunches about what’s most likely to be true. But when it comes to pinning down precisely why such beliefs are or aren’t reasonable, even the most intelligent and well educated of us can quickly find ourselves out of our depth. For example, while most would pooh-pooh belief in fairies, Arthur Conan Doyle, the creator of the quintessentially rational detective Sherlock Holmes, actually believed in them and wrote a book presenting what he thought was compelling evidence for their existence.

    When it comes to weird beliefs, it’s important we avoid being closed-minded naysayers with our fingers in our ears, but it’s also crucial that we avoid being credulous fools. We want, as far as possible, to be reasonable.

    I’m a philosopher who has spent a great deal of time thinking about the reasonableness of such ‘weird’ beliefs. Here I present five key pieces of advice that I hope will help you figure out for yourself what is and isn’t reasonable.

    Let’s begin with an illustration of the kind of case that can so spectacularly divide opinion. In 1976, six workers reported a UFO over the site of a nuclear plant being constructed near the town of Apex, North Carolina. A security guard then reported a ‘strange object’. The police officer Ross Denson drove over to investigate and saw what he described as something ‘half the size of the Moon’ hanging over the plant. The police also took a call from local air traffic control about an unidentified blip on their radar.

    The next night, the UFO appeared again. The deputy sheriff described ‘a large lighted object’. An auxiliary officer reported five lighted objects that appeared to be burning and about 20 times the size of a passing plane. The county magistrate described a rectangular football-field-sized object that looked like it was on fire.

    Finally, the press got interested. Reporters from the Star newspaper drove over to investigate. They too saw the UFO. But when they tried to drive nearer, they discovered that, weirdly, no matter how fast they drove, they couldn’t get any closer.

    This report, drawn from Philip J Klass’s book UFOs: The Public Deceived (1983), is impressive: it involves multiple eyewitnesses, including police officers, journalists and even a magistrate. Their testimony is even backed up by hard evidence – that radar blip.

    Surely, many would say, given all this evidence, it’s reasonable to believe there was at least something extraordinary floating over the site. Anyone who failed to believe at least that much would be excessively sceptical – one of those perpetual naysayers whose kneejerk reaction, no matter how strong the evidence, is always to pooh-pooh.

    What’s most likely to be true: that there really was something extraordinary hanging over the power plant, or that the various eyewitnesses had somehow been deceived? Before we answer, here’s my first piece of advice.

    Think it through

    1. Expect unexplained false sightings and huge coincidences

    Our UFO story isn’t over yet. When the Star’s two-man investigative team couldn’t get any closer to the mysterious object, they eventually pulled over. The photographer took out his long lens to take a look: ‘Yep … that’s the planet Venus all right.’ It was later confirmed beyond any reasonable doubt that what all the witnesses had seen was just a planet. But what about that radar blip? It was a coincidence, perhaps caused by a flock of birds or unusual weather.

    What moral should we draw from this case? Not, of course, that because this UFO report turned out to have a mundane explanation, all such reports can be similarly dismissed. But notice that, had the reporters not discovered the truth, this story would likely have gone down in the annals of ufology as one of the great unexplained cases. The moral I draw is that UFO cases that have multiple eyewitnesses and even independent hard evidence (the radar blip) may well crop up occasionally anyway, even if there are no alien craft in our skies.

    We tend significantly to underestimate how prone to illusion and deception we are when it comes to the wacky and weird. In particular, we have a strong tendency to overdetect agency – to think we are witnessing a person, an alien or some other sort of creature or being – where in truth there’s none.

    Psychologists have developed theories to account for this tendency to overdetect agency, including that we have evolved what’s called a hyperactive agency detecting device. Had our ancestors missed an agent – a sabre-toothed tiger or a rival, say – that might well have reduced their chances of surviving and reproducing. Believing an agent is present when it’s not, on the other hand, is likely to be far less costly. Consequently, we’ve evolved to err on the side of overdetection – often seeing agency where there is none. For example, when we observe a movement or pattern we can’t understand, such as the retrograde motion of a planet in the night sky, we’re likely to think the movement is explained by some hidden agent working behind the scenes (that Mars is actually a god, say).

    One example of our tendency to overdetect agency is pareidolia: our tendency to find patterns – and, in particular, faces – in random noise. Stare at passing clouds or into the embers of a fire, and it’s easy to interpret the randomly generated shapes we see as faces, often spooky ones, staring back.

    And, of course, nature is occasionally going to throw up the face-like patterns just by chance. One famous illustration was produced in 1976 by the Mars probe Viking Orbiter 1. As the probe passed over the Cydonia region, it photographed what appeared to be an enormous, reptilian-looking face 800 feet high and nearly 2 miles long. Some believe this ‘face on Mars’ was a relic of an ancient Martian civilisation, a bit like the Great Sphinx of Giza in Egypt. A book called The Monuments of Mars: A City on the Edge of Forever (1987) even speculated about this lost civilisation. However, later photos revealed the ‘face’ to be just a hill that looks face-like when lit a certain way. Take enough photos of Mars, and some will reveal face-like features just by chance.

    The fact is, we should expect huge coincidences. Millions of pieces of bread are toasted each morning. One or two will exhibit face-like patterns just by chance, even without divine intervention. One such piece of toast that was said to show the face of the Virgin Mary (how do we know what she looked like?) was sold for $28,000. We think about so many people each day that eventually we’ll think about someone, the phone will ring, and it will be them. That’s to be expected, even if we’re not psychic. Yet many put down such coincidences to supernatural powers.

    2. Understand what strong evidence actually is

    When is a claim strongly confirmed by a piece of evidence? The following principle appears correct (it captures part of what confirmation theorists call the Bayes factor; for more on Bayesian approaches to assessing evidence, see the link at the end):

    Evidence confirms a claim to the extent that the evidence is more likely if the claim is true than if it’s false.

    Here’s a simple illustration. Suppose I’m in the basement and can’t see outside. Jane walks in with a wet coat and umbrella and tells me it’s raining. That’s pretty strong evidence it’s raining. Why? Well, it is of course possible that Jane is playing a prank on me with her wet coat and brolly. But it’s far more likely she would appear with a wet coat and umbrella and tell me it’s raining if that’s true than if it’s false. In fact, given just this new evidence, it may well be reasonable for me to believe it’s raining.
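
    To see how this principle works with numbers, here is a minimal Python sketch of the odds form of Bayes’ theorem. The probabilities are invented purely for illustration – they are not figures from the article – but they show why Jane’s wet coat counts as strong evidence:

        # Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio.
        # All numbers below are invented purely for illustration.

        def posterior_probability(prior, p_evidence_if_true, p_evidence_if_false):
            """Update a prior probability given evidence, via the likelihood ratio."""
            prior_odds = prior / (1 - prior)
            likelihood_ratio = p_evidence_if_true / p_evidence_if_false  # the 'Bayes factor'
            posterior_odds = prior_odds * likelihood_ratio
            return posterior_odds / (1 + posterior_odds)

        # Suppose rain has a prior probability of 0.3, Jane would almost certainly
        # arrive wet and announce the rain if it really is raining (0.95), and would
        # very rarely stage the scene as a prank if it isn't (0.01).
        print(posterior_probability(0.3, 0.95, 0.01))   # roughly 0.98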

    Here’s another example. Sometimes whales and dolphins are found with atavistic limbs – leg-like structures – where legs would be found on land mammals. These discoveries strongly confirm the theory that whales and dolphins evolved from earlier limbed, land-dwelling species. Why? Because, while atavistic limbs aren’t probable given the truth of that theory, they’re still far more probable than they would be if whales and dolphins weren’t the descendants of such limbed creatures.

    The Mars face, on the other hand, provides an example of weak or non-existent evidence. Yes, if there was an ancient Martian civilisation, then we might discover what appeared to be a huge face built on the surface of the planet. However, given pareidolia and the likelihood of face-like features being thrown up by chance, it’s about as likely that we would find such face-like features anyway, even if there were no alien civilisation. That’s why such features fail to provide strong evidence for such a civilisation.

    So now consider our report of the UFO hanging over the nuclear power construction site. Are several such cases involving multiple witnesses and backed up by some hard evidence (eg, a radar blip) good evidence that there are alien craft in our skies? No. We should expect such hard-to-explain reports anyway, whether or not we’re visited by aliens. In which case, such reports are not strong evidence of alien visitors.

    Being sceptical about such reports of alien craft, ghosts or fairies is not knee-jerk, fingers-in-our-ears naysaying. It’s just recognising that, though we might not be able to explain the reports, they’re likely to crop up occasionally anyway, whether or not alien visitors, ghosts or fairies actually exist. Consequently, they fail to provide strong evidence for such beings.

    3. Extraordinary claims require extraordinary evidence

    It was the scientist Carl Sagan who in 1980 said: ‘Extraordinary claims require extraordinary evidence.’ By an ‘extraordinary’ claim, Sagan appears to have meant an extraordinarily improbable claim, such as that Alice can fly by flapping her arms, or that she can move objects with her mind. On Sagan’s view, such claims require extraordinarily strong evidence before we should accept them – much stronger than the evidence required to support a far less improbable claim.

    Suppose for example that Fred claims Alice visited him last night, sat on his sofa and drank a cup of tea. Ordinarily, we would just take Fred’s word for that. But suppose Fred adds that, during her visit, Alice flew around the room by flapping her arms. Of course, we’re not going to just take Fred’s word for that. It’s an extraordinary claim requiring extraordinary evidence.

    If we’re starting from a very low base, probability-wise, then much more heavy lifting needs to be done by the evidence to raise the probability of the claim to a point where it might be reasonable to believe it. Clearly, Fred’s testimony about Alice flying around the room is not nearly strong enough.

    Similarly, given the low prior probability of the claims that someone communicated with a dead relative, or has fairies living in their local wood, or has miraculously raised someone from the dead, or can move physical objects with their mind, we should similarly set the evidential bar much higher than we would for more mundane claims.
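
    The same odds arithmetic shows what Sagan’s slogan amounts to in practice. In the sketch below – again with invented numbers – the identical strength of testimony is enough for the mundane tea-drinking claim but barely moves the needle for the flying claim, because the prior starts so low:

        # Posterior probability after multiplying the prior odds by a Bayes factor.
        # All numbers are invented for illustration.

        def update(prior, likelihood_ratio):
            posterior_odds = (prior / (1 - prior)) * likelihood_ratio
            return posterior_odds / (1 + posterior_odds)

        # Mundane claim: Alice drank a cup of tea (prior, say, 0.5). Fred's testimony
        # is taken to be 10 times more likely if the claim is true than if it's false.
        print(update(0.5, 10))      # about 0.91 - reasonable to believe

        # Extraordinary claim: Alice flew by flapping her arms (prior, say, 1e-9).
        print(update(1e-9, 10))     # about 1e-8 - still wildly improbable

        # Only evidence billions of times more likely under the claim than under its
        # negation would make the flying claim credible.
        print(update(1e-9, 1e10))   # about 0.91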

    4. Beware accumulated anecdotes

    Once we’ve formed an opinion, it can be tempting to notice only evidence that supports it and to ignore the rest. Psychologists call this tendency confirmation bias.

    For example, suppose Simon claims a psychic ability to know the future. He can provide 100 examples of his predictions coming true, including one or two dramatic examples. In fact, Simon once predicted that a certain celebrity would die within 12 months, and they did!

    Do these 100 examples provide us with strong evidence that Simon really does have some sort of psychic ability? Not if Simon actually made many thousands of predictions and most didn’t come true. Still, if we count only Simon’s ‘hits’ and ignore his ‘misses’, it’s easy to create the impression that he has some sort of ‘gift’.
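
    A quick back-of-the-envelope calculation shows why counting only the hits is misleading. The figures below are invented for illustration, not taken from any real case:

        # How many 'hits' should a non-psychic accumulate just by chance?
        # All figures are invented for illustration.

        total_predictions = 5_000    # suppose Simon makes thousands of vague predictions
        chance_hit_rate = 0.02       # and each one comes true about 2% of the time anyway

        expected_hits = total_predictions * chance_hit_rate
        print(expected_hits)         # 100.0

        # A hundred 'hits' alongside 4,900 forgotten misses is exactly what chance
        # predicts - no psychic gift required.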

    Confirmation bias can also create the false impression that a therapy is effective. A long list of anecdotes about patients whose condition improved after a faith healing session can seem impressive. People may say: ‘Look at all this evidence! Clearly this therapy has some benefits!’ But the truth is that such accumulated anecdotes are usually largely worthless as evidence.

    It’s also worth remembering that such stories are in any case often dubious. For example, they can be generated by the power of suggestion: tell people that a treatment will improve their condition, and many will report that it has, even if the treatment actually offers no genuine medical benefit.

    Impressive anecdotes can also be generated by means of a little creative interpretation. Many believe that the 16th-century seer Nostradamus predicted many important historical events, from the Great Fire of London to the assassination of John F Kennedy. However, because Nostradamus’s prophecies are so vague, nobody was able to use his writings to predict any of these events before they occurred. Rather, his texts were later creatively interpreted to fit what subsequently happened. But that sort of ‘fit’ can be achieved whether Nostradamus had extraordinary abilities or not. In which case, as we saw under point 2 above, the ‘fit’ is not strong evidence of such abilities.

    5. Beware ‘But it fits!’

    Often, when we’re presented with strong evidence that our belief is false, we can easily change our mind. Show me I’m mistaken in believing that the Matterhorn is near Chamonix, and I’ll just drop that belief.

    However, abandoning a belief isn’t always so easy. That’s particularly the case for beliefs in which we have invested a great deal emotionally, socially and/or financially. When it comes to religious and political beliefs, for example, or beliefs about the character of our close relatives, we can find it extraordinarily difficult to change our minds. Psychologists refer to the discomfort we feel in such situations – when our beliefs or attitudes are in conflict – as cognitive dissonance.

    Perhaps the most obvious strategy we can employ when a belief in which we have invested a great deal is threatened is to start explaining away the evidence.

    Here’s an example. Dave believes dogs are spies from the planet Venus – that dogs are Venusian imposters on Earth sending secret reports back to Venus in preparation for their imminent invasion of our planet. Dave’s friends present him with a great deal of evidence that he’s mistaken. But, given a little ingenuity, Dave finds he can always explain away that evidence:

    ‘Dave, dogs can’t even speak – how can they communicate with Venus?’

    ‘They can speak, they just hide their linguistic ability from us.’

    ‘But Dave, dogs don’t have transmitters by which they could relay their messages to Venus – we’ve searched their baskets: nothing there!’

    ‘Their transmitters are hidden in their brain!’

    ‘But we’ve X-rayed this dog’s brain – no transmitter!’

    ‘The transmitters are made from organic material indistinguishable from ordinary brain stuff.’

    ‘But we can’t detect any signals coming from dogs’ heads.’

    ‘This is advanced alien technology – beyond our ability to detect it!’

    ‘Look Dave, Venus can’t support dog life – it’s incredibly hot and swathed in clouds of acid.’

    ‘The dogs live in deep underground bunkers to protect them. Why do you think they want to leave Venus?!’

    You can see how this conversation might continue ad infinitum. No matter how much evidence is presented to Dave, it’s always possible for him to cook up another explanation. And so he can continue to insist his belief is logically consistent with the evidence.

    But, of course, despite the possibility of his endlessly explaining away any and all counterevidence, Dave’s belief is absurd. It’s certainly not confirmed by the available evidence about dogs. In fact, it’s powerfully disconfirmed.

    The moral is: showing that your theory can be made to ‘fit’ – be consistent with – the evidence is not the same thing as showing your theory is confirmed by the evidence. However, those who hold weird beliefs often muddle consistency and confirmation.

    Take young-Earth creationists, for example. They believe in the literal truth of the Biblical account of creation: that the entire Universe is under 10,000 years old, with all species being created as described in the Book of Genesis.

    Polls indicate that a third or more of US citizens believe that the Universe is less than 10,000 years old. Of course, there’s a mountain of evidence against the belief. However, its proponents are adept at explaining away that evidence.

    Take the fossil record embedded in sedimentary layers revealing that today’s species evolved from earlier species over many millions of years. Many young-Earth creationists explain away this record as a result of the Biblical flood, which they suppose drowned and then buried living things in huge mud deposits. The particular ordering of the fossils is supposedly accounted for by different ecological zones being submerged one after the other, starting with simple marine life. Take a look at the Answers in Genesis website developed by the Bible literalist Ken Ham, and you’ll discover how a great deal of other evidence for evolution and a billions-of-years-old Universe is similarly explained away. Ham believes that, by explaining away the evidence against young-Earth creationism in this way, he can show that his theory ‘fits’ – and so is scientifically confirmed by – that evidence:

    Increasing numbers of scientists are realising that when you take the Bible as your basis and build your models of science and history upon it, all the evidence from the living animals and plants, the fossils, and the cultures fits. This confirms that the Bible really is the Word of God and can be trusted totally.
    [my italics]

    According to Ham, young-Earth creationists and evolutionists do the same thing: they look for ways to make the evidence fit the theory to which they have already committed themselves:

    Evolutionists have their own framework … into which they try to fit the data.
    [my italics]

    But, of course, scientists haven’t just found ways of showing how the theory of evolution can be made consistent with the evidence. As we saw above, that theory really is strongly confirmed by the evidence.

    Any theory, no matter how absurd, can, with sufficient ingenuity, be made to ‘fit’ the evidence: even Dave’s theory that dogs are Venusian spies. That’s not to say it’s reasonable or well confirmed.

    Of course, it’s not always unreasonable to explain away evidence. Given overwhelming evidence that water boils at 100 degrees Celsius at 1 atmosphere, a single experiment that appeared to contradict that claim might reasonably be explained away as a result of some unidentified experimental error. But as we increasingly come to rely on explaining away evidence in order to try to convince ourselves of the reasonableness of our belief, we begin to drift into delusion.

    Key points – How to think about weird things

    1. Expect unexplained false sightings and huge coincidences. Reports of mysterious and extraordinary hidden agents – such as angels, demons, spirits and gods – are to be expected, whether or not such beings exist. Huge coincidences – such as a piece of toast looking very face-like – are also more or less inevitable.
    2. Understand what strong evidence is. If the alleged evidence for a belief is scarcely more likely if the belief is true than if it’s false, then it’s not strong evidence.
    3. Extraordinary claims require extraordinary evidence. If a claim is extraordinarily improbable – eg, the claim that Alice flew round the room by flapping her arms – much stronger evidence is required for reasonable belief than is required for belief in a more mundane claim, such as that Alice drank a cup of tea.
    4. Beware accumulated anecdotes. A large number of reports of, say, people recovering after taking an alternative medicine or visiting a faith healer is not strong evidence that such treatments actually work.
    5. Beware ‘But it fits!’ Any theory, no matter how ludicrous (even the theory that dogs are spies from Venus), can, with sufficient ingenuity, always be made logically consistent with the evidence. That’s not to say it’s confirmed by the evidence.

    Why it matters

    Sometimes, belief in weird things is pretty harmless. What does it matter if Mary believes there are fairies at the bottom of her garden, or Joe thinks his dead aunty visits him occasionally? What does it matter if Sally is a closed-minded naysayer when it comes to belief in psychic powers? However, many of these beliefs have serious consequences.

    Clearly, people can be exploited. Grieving parents contact spiritualists who offer to put them in contact with their dead children. Peddlers of alternative medicine and faith healing charge exorbitant fees for their ‘cures’ for terminal illnesses. If some alternative medicines really work, casually dismissing them out of hand and refusing to properly consider the evidence could also cost lives.

    Lives have certainly been lost. Many have died who might have been saved because they believed they should reject conventional medicine and opted for ineffective alternatives.

    Huge amounts of money are often also at stake when it comes to weird beliefs. Psychic reading and astrology are huge businesses with turnovers of billions of dollars per year. Often, it’s the most desperate who will turn to such businesses for advice. Are they, in reality, throwing their money away?

    Many ‘weird’ beliefs also have huge social and political implications. The former US president Ronald Reagan and his wife Nancy were reported to have consulted an astrologer before making any major political decision. Conspiracy theories such as QAnon and the Sandy Hook hoax shape our current political landscape and feed extremist political thinking. Mainstream religions are often committed to miracles and gods.

    In short, when it comes to belief in weird things, the stakes can be very high indeed. It matters that we don’t delude ourselves into thinking we’re being reasonable when we’re not.

    Links & books

    The Atlantic article ‘The Cognitive Biases Tricking Your Brain’ (2018) by Ben Yagoda provides a great introduction to thinking that can lead us astray, including confirmation bias.

    The UK-based magazine The Skeptic provides some high-quality free articles on belief in weird things. Well worth a subscription.

    The Skeptical Inquirer magazine in the US is also excellent, and provides some free content.

    The RationalWiki portal provides many excellent articles on pseudoscience.

    The British mathematician Norman Fenton, professor of risk information management at Queen Mary University of London, provides a brief online introduction to Bayesian approaches to assessing evidence.

    My book Believing Bullshit: How Not to Get Sucked into an Intellectual Black Hole (2011) identifies eight tricks of the trade that can turn flaky ideas into psychological flytraps – and how to avoid them.

    The textbook How to Think About Weird Things: Critical Thinking for a New Age (2019, 8th ed) by the philosophers Theodore Schick and Lewis Vaughn, offers step-by-step advice on sorting through reasons, evaluating evidence and judging the veracity of a claim.

    The book Critical Thinking (2017) by Tom Chatfield offers a toolkit for what he calls ‘being reasonable in an unreasonable world’.

    A theory of my own mind (AEON)

    Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people

    Tokyo, 1996. Photo by Harry Gruyaert/Magnum

    Stephen M Fleming is professor of cognitive neuroscience at University College London, where he leads the Metacognition Group. He is author of Know Thyself: The Science of Self-awareness (2021).

    Edited by Pam Weintraub

    23 September 2021

    In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?

    In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’

    Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:

    Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?

    Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.

    Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.

    Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.

    Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.

    A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.

    This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’

    An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.

    The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’

    Many philosophers since Ryle have regarded the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:

    Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.

    But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’

    Self-awareness is something that can be cultivated

    Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.

    Adelson’s checkerboard. Courtesy Wikipedia

    Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.

    These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.

    Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.

    We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.
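
    One deliberately crude way to quantify whether confidence tracks performance is simply to correlate trial-by-trial confidence ratings with accuracy; researchers in this field use more refined, bias-controlled measures, but the following Python sketch, with made-up data, conveys the basic idea:

        import numpy as np

        # Made-up data from one hypothetical participant: one entry per trial.
        correct    = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])  # 1 = right, 0 = wrong
        confidence = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.9, 0.6, 0.5, 0.8, 0.2])

        # A crude index of metacognitive sensitivity: the correlation between
        # confidence and accuracy across trials. Near zero means confidence carries
        # little information about whether an answer was right or wrong.
        sensitivity = np.corrcoef(confidence, correct)[0, 1]
        print(round(sensitivity, 2))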

    This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.

    This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.

    Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it begs the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?

    Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.

    A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.

    There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.

    Clear overlap between brain activations involved in metacognition and mindreading was observed

    So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.

    Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.

    At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice-versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.

    Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.

    Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.

    Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.

    A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.

    A simple route for improving self-awareness is to take a third-person perspective on ourselves

    Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.

    There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:

    O wad some Power the giftie gie us
    To see oursels as ithers see us!
    It wad frae mony a blunder free us…

    (Oh, would some Power give us the gift
    To see ourselves as others see us!
    It would from many a blunder free us…)

    Is everything in the world a little bit conscious? (MIT Technology Review)

    technologyreview.com

    Christof Koch – August 25, 2021

    The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

    Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

    As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

    IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.

    A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

    IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

    Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)
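
    To make the flavour of such a calculation concrete, here is a deliberately toy sketch in Python. It is not the IIT 3.0 algorithm: it uses plain mutual information between a tiny network's state and its next state as a stand-in for cause-effect power, and compares the whole against its best bipartition. The three-node network, its update rule, and the "phi_like" quantity are all assumptions made so that the example runs.

```python
import itertools
from collections import Counter
from math import log2

# Toy stand-in for an integrated-information-style calculation on a tiny
# deterministic network. This is NOT the IIT 3.0 algorithm; it only conveys
# the flavour: compare the information the whole system carries about its
# next state with the best sum over a bipartition of the system.

def step(state):
    """Arbitrary update rule for a 3-node binary network (assumption)."""
    a, b, c = state
    return (b ^ c, a, a & c)

STATES = list(itertools.product([0, 1], repeat=3))

def mutual_information(xs, ys):
    """I(X;Y) in bits for paired samples assumed drawn uniformly."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def effective_information(nodes):
    """MI between a subsystem's current state and its own next state."""
    ins = [tuple(s[i] for i in nodes) for s in STATES]
    outs = [tuple(step(s)[i] for i in nodes) for s in STATES]
    return mutual_information(ins, outs)

whole = effective_information((0, 1, 2))
# "Integration": how much the whole exceeds the best-scoring bipartition.
partitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
best_parts = min(effective_information(p) + effective_information(q)
                 for p, q in partitions)
phi_like = whole - best_parts
print(f"whole={whole:.2f} bits, best bipartition={best_parts:.2f} bits, "
      f"phi-like={phi_like:.2f} bits")
```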

    Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences.

    Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

    The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 

    Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 

    Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

    Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

    Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

    In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. Conversely, when healthy people are asleep, their PCI is always below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious.
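
    As a rough illustration of the compression idea behind PCI, the sketch below computes a normalized Lempel-Ziv complexity for a binarized channels-by-time response matrix. The published PCI pipeline (TMS-evoked potentials, source modelling, statistical thresholding) is far more elaborate; the data, the LZ76 approximation, and the normalization here are assumptions made for the sake of a runnable example.

```python
import numpy as np

# Schematic of the compression step only. The real PCI procedure stimulates the
# cortex with TMS, reconstructs sources from high-density EEG, thresholds for
# statistically significant activity, and then compresses the resulting binary
# matrix. Shapes, data, and the LZ76 approximation below are assumptions.

def lempel_ziv_complexity(bits: str) -> int:
    """Approximate LZ76 phrase count for a binary string."""
    i, phrases, n = 0, 0, len(bits)
    while i < n:
        length = 1
        # grow the phrase while it can still be found earlier in the string
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def pci_like(binary_matrix: np.ndarray) -> float:
    """Normalized LZ complexity of a (channels x time) binary activity matrix."""
    bits = "".join(binary_matrix.astype(int).astype(str).ravel())
    n = len(bits)
    p = bits.count("1") / n
    if p in (0.0, 1.0):
        return 0.0
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    # random strings have roughly n*H/log2(n) phrases, so this ratio is ~1 for noise
    return lempel_ziv_complexity(bits) * np.log2(n) / (n * entropy)

rng = np.random.default_rng(0)
structured = np.tile(rng.integers(0, 2, size=(1, 50)), (8, 1))  # one pattern on all channels
noisy = rng.integers(0, 2, size=(8, 50))                        # independent channels
print(f"structured response: {pci_like(structured):.2f}")
print(f"noisy response:      {pci_like(noisy):.2f}")
```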

    This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

    Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

    Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.


    Rationality is not enough in stressful situations, says Harvard negotiation professor (Folha de S.Paulo)

    In a new book, Daniel Shapiro presents a method for resolving conflicts

    Aug. 15, 2021, 4:00 p.m.

    Fernanda Brigatti, São Paulo

    Negotiations, and the conflicts that arise from them, are part of life in society. Although the practice of negotiation is most strongly associated with the business world, it is also present in delicate geopolitical questions, budget impasses within companies, and even in domestic matters and in family and corporate relationships.

    For the psychologist Daniel Shapiro, founder of the Harvard International Negotiation Program in the United States, what these situations have in common is a high emotional charge, which triggers unconscious reactions that block progress.

    Polarization, whether political or between members of the same family, he says, involves complicated emotional reactions, but not insurmountable ones. In "Negociando o Inegociável" ("Negotiating the Nonnegotiable"), released in Brazil in July by Globo Livros, Shapiro lays out what is meant to be a manual for handling these emotions and reaching solutions.

    His experience mediating conflicts (Shapiro has worked on negotiations between mainland China and Taiwan, in different parts of Africa, in the Israeli-Palestinian conflict, and in Central Europe during the transition from communism to capitalism) led him to refine the methodology he has named "relational identity theory."

    Daniel Shapiro, founder of the Harvard International Negotiation Program (photo: publicity handout)

    For him, the notion of identity is central to contemporary conflicts. Once threatened, it exposes those involved to mental lures, or temptations, that keep the conflict going and prevent a resolution.

    Shapiro says he identified the five main temptations (vertigo, repetition compulsion, taboos, assault on the sacred, and identity politics) by observing an exercise he has repeated in many settings and with different audiences.

    In the simulation, the group is encouraged to split into tribes with shared values, forging an identity among their members. Once these groups are organized, an alien invades the room and makes a single demand: "Choose one leader to represent you, or I will destroy the world." Invariably, says Shapiro, the world blows up.

    The first of these experiments took place in Macedonia, under tension and on the brink of ethnic conflict, back in the 1990s. In perhaps the most curious run, "the world blew up in Davos": 45 world leaders, politicians and top executives among them, were invited to take part in the exercise.

    Rationality, or assuming that the sides are being rational in the dialogue, is not enough, he says. "We assume that each side really has a rational concern and uses rational processes to satisfy those concerns. When it comes to these emotionally charged conflicts, that is not enough."

    According to Shapiro, identity is the invisible element that, in emotionally charged conflicts, ends up activating what he calls the tribes effect, a mental state of alertness and polarization.

    With the pandemic, the psychologist says, we are all under intense stress, which makes the odds of negotiations ending in agreement even smaller. "It is like bringing groups with distinct identities to negotiate in a dark, cramped room."

    Below are the main excerpts from the interview, given to Folha by video call.

    The new book
    The work I do is in negotiation and conflict resolution. I have been in this field for thirty-odd years, and my baldness is the proof. I have been observing how people negotiate, what works and what does not work in the most significant conflicts of our lives, whether deeply political ones or everyday challenges loaded with emotion.

    What I discovered is that people tend to negotiate at an irrational level. We assume that each side really has a rational concern and uses rational processes to satisfy those concerns. When it comes to these emotionally charged conflicts, rationality is not enough.

    How can we understand and deal with the deeper dimensions of these conflicts? So "Negociando o Inegociável" is a way of exploring those dimensions of conflict that lead to polarization, and of understanding these conflicts more deeply, dimensions I place in the category of identity.

    There are so many good books out there offering quick fixes, easy answers for dealing with hard problems. But to really get to the root of the problems we face in our societies and in our lives, we need to understand these dimensions deeply: the role of emotion, the role of identity, how they affect us, and how we can work on them to produce constructive change.

    The five temptations
    Why are societies and families polarized? Even when it comes at an enormous emotional cost for parents, children, and grandparents, why do we do it? That is the question I have been thinking about for many years.

    In "Negociando o Inegociável" I discuss the five main instigators of the tribal mindset, which I call temptations. They are the instigators of conflict. The moment our identity feels threatened, these temptations begin to pull us toward the tribal mind.

    Developing the method
    Watching that tribes exercise, and the world blowing up again and again, I asked myself what was going on. How are these people, the most rational leaders in the world, affectionate and loving people, blowing up the world again? Why? Five temptations, five lures.

    Cover of "Negociando o Inegociável," by Daniel Shapiro (image: Globo Livros publicity handout)

    Vertigo
    First, we quickly fall into what I call vertigo. The idea here is that if I get into a conflict with my spouse, we can very quickly be consumed by it, and a two-minute argument about who should have washed the dishes turns into two hours of awful fighting. We are in vertigo.

    The polarization in the United States is arguably in a place of vertigo: consumed by the conflict, thinking only about it, we cannot see the bigger picture; we are stuck in our own little corner. We are trapped in vertigo.

    The word means dizziness, and that is the experience here. Suddenly I am consumed by vertigo, I lose my sense of time and space, and my understanding of everything vanishes. To get out of it, one path is to ask, "What is my purpose in this conflict?"

    Repetition compulsion
    As human beings, we tend to repeat the same dysfunctional patterns of behavior over and over and over again. In the family system, we can all predict the conflicts we will have on Sunday night with a spouse or children. We know what each person will say, we know what they will do, and we know how we will feel afterward. It is a repetition compulsion.

    And we are also very good at predicting, through intuition and hunch, at a national or local level: "Uh oh, here we go again." It is one of those moments when one group will say this, the other will say that, they will start pointing fingers at each other, and it will end in violence.

    The problem with this repetition is that the pattern becomes part of our identity; it becomes a tattoo. It is very hard to get rid of; it is more than a simple habit, it runs much deeper. The way out of the repetition compulsion is to become aware of it and really look at the pattern I tend to fall into with that person: why it happens, and how I can free myself from it.

    Taboos
    What is taboo to talk about in Brazilian society? If you bring up a certain subject, you will have a lot of problems, you will be punished, socially rejected, and you run the risk of being physically attacked.

    One thing that is taboo in the United States is a Trump supporter and a Biden supporter sitting side by side for a meal. As if one side would contaminate the other. The fear is not unfounded.

    I think that the moment someone sees a member of their own tribe associating with someone from the opposing tribe, well, a taboo of association has just been broken. "Are you betraying our tribe? Are you betraying our people? Don't talk to them!" But then how can you resolve a conflict, how do you reduce polarization, if the sides will not talk?

    Assault on the sacred
    Every political tribe, every family has beliefs and values it considers sacred. If you feel that I have offended or threatened something of great importance to you, religious or secular, boom, we are back in the tribal mindset.

    The image in my head is of a snake that strikes at you when you offend something very sacred. I also think that, in the modern context, people can politically turn certain issues into sacred ones, and those are what build the internal tribes and make reconciliation with other groups harder.

    Identity politics
    I use the term a little differently from the way it has been popularized. I think people usually see identity politics [in Brazil more often rendered as "políticas identitárias"] as minority groups coming together to try to create political change on certain issues.

    For me, identity politics is the use, or misuse, of identity to achieve certain political goals. When a leader says, "We have to unite to improve our country," we have to unite, but who is this "we" he is referring to? Most of the time it does not include everyone, only certain groups, and it creates, often quite explicitly, an us-versus-them dynamic. We have to unite to fight against them. Identity politics can be used to divide.

    My advice: in a functioning democracy, use identity politics to unite. Focus on a broader "we." All of us together. Yes, we have smaller tribes with political interests. Great. And we are all part of a larger project. You can do what Mandela did: you can unite, you can use identity politics to bring people together.

    The tribes effect
    The pandemic has put a lot of stress on everyone. Anxiety, the pain of losing a relative, losing someone close (and my heart goes out to Brazil, I know it has suffered greatly). I think this is a burden, an emotional weight resting on everyone's shoulders.

    Meanwhile, we are supposed to do everything we did before, only now on Zoom. All these emotional burdens make us, in part, want to look for some kind of emotional security, and it can come in the form of drawing closer to a group I feel I belong to. To a tribe.

    The problem is that tensions also start to appear between the different groups, which are much more compressed now, squeezed more tightly together, because of the pandemic.

    Under pressure
    I think the pandemic is having a big impact on the search for security. The tribe is a form of security, but it can easily turn into a tribal "us against them" mindset.

    The pandemic has compressed all of us. It is much harder to get through what is unpleasant. And being emotionally uncomfortable is like standing in an open field with your archenemies. We are in a compressed environment, and all of these tribal instincts are triggered much more easily.

    When there is hierarchy
    Power is very malleable. I think there are dozens of sources of power. Some people have more hierarchical power. If I have money, people may pick up the phone more often.

    But in a negotiation it is useful to try to figure out what my sources of power are. If we do not reach an agreement, what can I do? There is power, for example, in understanding the other side's interests. Say I am negotiating a new company policy with you, and we need the president to say yes and the vice president to go along. The more I understand other people's interests, the more powerful I am.

    Mental health at work
    One of the most fundamental concepts, in my opinion, is the power of appreciation. Appreciation, the way I use the word, does not just mean saying thank you; it is not gratitude. What I mean is a deep understanding of what the other person is living through.

    As we go through the pandemic, I believe that, as human beings, we need appreciation more than ever. So, in the workplace, perhaps executive leadership can find ways to be a little kinder to the workforce.

    It does not mean that expectations are lower, but that support is greater. It is saying, "We are here for you emotionally, we care about you. You are not just an object producing money for us; you are a human being we value."

    I see executives acting as if to say, "I am in charge, I will talk and show how smart I am." The pandemic demands much more listening.

    Listen more, talk less
    We commonly equate negotiations with talking. We call them negotiation talks. A much better term would be to call them "negotiation listens."

    Because you will lead the negotiation far more effectively if you listen: if you listen 80% of the time and talk 20%, rather than the other way around.

    I think about this in my own life, too. You know, find the time and space to take a walk, to sit in silence for ten minutes and absorb what is happening, to acknowledge your emotions.

    And the reason is that I will be much more effective in my personal and professional life if I understand the grief, the feelings, the resentments, or the desire for more love and connection. The more aware I am of my own inner experience, the more I can truly engage with others. Being attentive and listening really does matter.

    Negociando o Inegociável: Como resolver conflitos que parecem impossíveis (Negotiating the Nonnegotiable: How to Resolve Conflicts That Seem Impossible)

    • Price: from R$ 44.92 (print) | R$ 39.90 (e-book)
    • Author: Daniel Shapiro
    • Publisher: Globo Livros

    ‘Belonging Is Stronger Than Facts’: The Age of Misinformation (The New York Times)

    nytimes.com

    Max Fisher


    The Interpreter

    Social and psychological forces are combining to make the sharing and believing of misinformation an endemic problem with no easy solution.

    An installation of protest art outside the Capitol in Washington.
    Credit: Jonathan Ernst/Reuters

    Published May 7, 2021; Updated May 13, 2021

    There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

    All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed by someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

    We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

    “Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

    It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

    Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

    As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

    This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.

    Framing everything as a grand conflict against scheming enemies can feel enormously reassuring. And that’s why perhaps the greatest culprit of our era of misinformation may be, more than any one particular misinformer, the era-defining rise in social polarization.

    “At the mass level, greater partisan divisions in social identity are generating intense hostility toward opposition partisans,” which has “seemingly increased the political system’s vulnerability to partisan misinformation,” Dr. Nyhan wrote in an earlier paper.

    Growing hostility between the two halves of America feeds social distrust, which makes people more prone to rumor and falsehood. It also makes people cling much more tightly to their partisan identities. And once our brains switch into “identity-based conflict” mode, we become desperately hungry for information that will affirm that sense of us versus them, and much less concerned about things like truth or accuracy.

    Border officials are not mass-purchasing copies of Vice President Kamala Harris’s book, though the false rumor drew attention.
    Credit: Gabriela Bhaskar for The New York Times

    In an email, Dr. Nyhan said it could be methodologically difficult to nail down the precise relationship between overall polarization in society and overall misinformation, but there is abundant evidence that an individual with more polarized views becomes more prone to believing falsehoods.

    The second driver of the misinformation era is the emergence of high-profile political figures who encourage their followers to indulge their desire for identity-affirming misinformation. After all, an atmosphere of all-out political conflict often benefits those leaders, at least in the short term, by rallying people behind them.

    Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.

    “Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.

    “When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.


    “Depending on the platform, especially, humans are very sensitive to social reward,” he said. Research demonstrates that people who get positive feedback for posting inflammatory or false statements become much more likely to do so again in the future. “You are affected by that.”

    In 2016, the media scholars Jieun Shin and Kjerstin Thorson analyzed a data set of 300 million tweets from the 2012 election. Twitter users, they found, “selectively share fact-checking messages that cheerlead their own candidate and denigrate the opposing party’s candidate.” And when users encountered a fact-check that revealed their candidate had gotten something wrong, their response wasn’t to get mad at the politician for lying. It was to attack the fact checkers.

    “We have found that Twitter users tend to retweet to show approval, argue, gain attention and entertain,” researcher Jon-Patrick Allem wrote last year, summarizing a study he had co-authored. “Truthfulness of a post or accuracy of a claim was not an identified motivation for retweeting.”

    In another study, published last month in Nature, a team of psychologists tracked thousands of users interacting with false information. Republican test subjects who were shown a false headline about migrants trying to enter the United States (“Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”) mostly identified it as false; only 16 percent called it accurate. But if the experimenters instead asked the subjects to decide whether to share the headline, 51 percent said they would.

    “Most people do not want to spread misinformation,” the study’s authors wrote. “But the social media context focuses their attention on factors other than truth and accuracy.”

    In a highly polarized society like today’s United States — or, for that matter, India or parts of Europe — those incentives pull heavily toward ingroup solidarity and outgroup derogation. They do not much favor consensus reality or abstract ideals of accuracy.

    As people become more prone to misinformation, opportunists and charlatans are also getting better at exploiting this. That can mean tear-it-all-down populists who rise on promises to smash the establishment and control minorities. It can also mean government agencies or freelance hacker groups stirring up social divisions abroad for their benefit. But the roots of the crisis go deeper.

    “The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”

    In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”

    How Facebook got addicted to spreading misinformation (MIT Tech Review)

    technologyreview.com

    Karen Hao, March 11, 2021


    Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

    It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had previously been scheduled to speak at a company conference on, among other things, “the intersection of AI, ethics, and privacy.” He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

    As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

    The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

    In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

    Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

    Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

    Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

    In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

    He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

    I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

    Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

    But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

    By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

    The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

    In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

    “When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

    “They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

    In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

    Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

    His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

    Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

    Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
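
    As a minimal sketch of the kind of model described above, the following snippet trains a logistic-regression click predictor on synthetic impression data and then ranks two hypothetical users for a yoga-leggings ad. The features, the data, and the use of scikit-learn are illustrative assumptions, not Facebook's actual stack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic impression log. Each row is one impression of a yoga-leggings ad;
# the label says whether the user clicked. Feature names are hypothetical.
rng = np.random.default_rng(42)
n = 10_000
X = np.column_stack([
    rng.integers(0, 2, n),   # is_woman
    rng.integers(0, 5, n),   # age bucket (0 = 18-24, ..., 4 = 55+)
    rng.integers(0, 2, n),   # has liked yoga-related pages
])
click_prob = 0.05 + 0.10 * X[:, 0] + 0.15 * X[:, 2]   # built-in correlation
y = (rng.random(n) < click_prob).astype(int)

# The trained model picks up the correlation and can rank future impressions.
model = LogisticRegression().fit(X, y)
candidates = np.array([
    [1, 1, 1],   # woman, 25-34, liked yoga pages
    [0, 1, 0],   # man, 25-34, no yoga pages
])
print(model.predict_proba(candidates)[:, 1])   # higher score => serve the ad
```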

    Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

    Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

    News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

    Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

    They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

    “That’s how you know what’s on his mind. I was always, for a couple of years, a few steps from Mark’s desk.”

    Joaquin Quiñonero Candela

    In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

    Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
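
    A toy version of the L6/7 metric might look like the following; the data structures are assumptions made for the sketch, not Facebook's internal schema.

```python
from datetime import date, timedelta

def l6_of_7(login_days_by_user: dict, as_of: date) -> float:
    """Fraction of users who logged in on at least 6 of the previous 7 days."""
    window = {as_of - timedelta(days=i) for i in range(1, 8)}
    if not login_days_by_user:
        return 0.0
    qualifying = sum(1 for days in login_days_by_user.values()
                     if len(days & window) >= 6)
    return qualifying / len(login_days_by_user)

today = date(2016, 6, 15)
logins = {
    "alice": {today - timedelta(days=i) for i in range(1, 8)},         # 7 of 7 days
    "bob":   {today - timedelta(days=i) for i in (1, 2, 3, 5, 6, 7)},  # 6 of 7 days
    "carol": {today - timedelta(days=i) for i in (2, 5)},              # 2 of 7 days
}
print(l6_of_7(logins, today))   # 2 of 3 users qualify -> 0.666...
```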

    Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

    With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

    If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. Gade explained in a Twitter thread that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
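
    Schematically, the gating logic described above could be expressed as below. The metric names, the 1 percent threshold, and the control-versus-test comparison are invented for illustration; only the overall shape (ship unless engagement drops) comes from the article.

```python
from dataclasses import dataclass

@dataclass
class EngagementMetrics:
    likes: float      # mean per user per day in the experiment slice
    comments: float
    shares: float

def relative_change(test: float, control: float) -> float:
    return (test - control) / control

def should_ship(test: EngagementMetrics, control: EngagementMetrics,
                max_drop: float = -0.01) -> bool:
    """Ship only if no engagement metric drops more than 1% versus control."""
    changes = [
        relative_change(test.likes, control.likes),
        relative_change(test.comments, control.comments),
        relative_change(test.shares, control.shares),
    ]
    return min(changes) >= max_drop

control = EngagementMetrics(likes=4.20, comments=1.10, shares=0.60)
candidate = EngagementMetrics(likes=4.10, comments=1.15, shares=0.62)
print(should_ship(candidate, control))   # likes drop ~2.4% -> False: model discarded
```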

    But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

    While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

    “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?”

    A former AI researcher who joined in 2018

    In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

    Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

    The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

    But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

    One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

    Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

    That former employee, meanwhile, no longer lets his daughter use Facebook.

    Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

    It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

    At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.

    Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

    Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

    “I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

    At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

    The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
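
    A rough sketch of what that proposal could look like in code follows. The keyword-based scorer is a crude stand-in for the sentiment-analysis model the team would actually have used; the threshold and lexicon are invented.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    hidden_by_default: bool = False

EXTREME_LEXICON = {"traitor", "traitors", "evil", "destroy", "enemy", "scum"}

def extremity_score(text: str) -> float:
    """Crude stand-in scorer: share of words drawn from a tiny 'extreme' lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in EXTREME_LEXICON for w in words) / len(words)

def moderate(comments: list, threshold: float = 0.15) -> list:
    """Collapse, but never delete, comments whose score crosses the threshold."""
    for c in comments:
        c.hidden_by_default = extremity_score(c.text) >= threshold
    return comments

feed = moderate([
    Comment("I disagree with this policy, and here is why."),
    Comment("They are traitors and we must destroy them!"),
])
print([(c.text[:30], c.hidden_by_default) for c in feed])
```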

    And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

    Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

    On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

    For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

    Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

    On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

    It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

    This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

    (Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

    But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

    Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

    A chart titled "natural engagement pattern" that shows allowed content on the X axis, engagement on the Y axis, and an exponential increase in engagement as content nears the policy line for prohibited content.

    But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

    A chart titled "adjusted to discourage borderline content" that shows the same chart but the curve inverted to reach no engagement when it reaches the policy line.

    The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

    Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
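
    The retraining problem and the ease of evasion can be seen even in a deliberately crude sketch. The filter below (a toy bag-of-words check, nothing like production models) recognizes only the vocabulary of its training examples, so a new topic or a euphemistic rewording passes straight through until fresh examples are added.

        # Toy filter: "knows" only the non-stopword vocabulary of its training examples.
        STOPWORDS = {"the", "is", "a", "was", "never", "by", "that"}

        def train_filter(banned_examples):
            vocab = set()
            for text in banned_examples:
                vocab |= set(text.lower().split()) - STOPWORDS
            return vocab

        def is_flagged(text, vocab, min_overlap=2):
            return len(set(text.lower().split()) & vocab) >= min_overlap

        vocab = train_filter([
            "the holocaust never happened",
            "the holocaust is a hoax",
        ])

        print(is_flagged("the holocaust is a hoax", vocab))                # True: seen phrasing
        print(is_flagged("the rohingya genocide is fabricated", vocab))    # False: new topic, no examples yet
        print(is_flagged("that event has been wildly exaggerated", vocab)) # False: euphemistic rewording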

    In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

    Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

    But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

    A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

    “[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

    When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

    “It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

    “We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

    Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

    Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
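
    Fairness Flow itself is internal to Facebook, but the kind of measurement described can be sketched generically: compute a model's accuracy separately for each user group, then check the numbers against whichever fairness definition applies, such as equal accuracy across groups or a minimum accuracy for every group. The data and thresholds below are invented for illustration.

        from collections import defaultdict

        def per_group_accuracy(records):
            """records: iterable of (group, prediction, label) tuples."""
            correct, total = defaultdict(int), defaultdict(int)
            for group, pred, label in records:
                total[group] += 1
                correct[group] += int(pred == label)
            return {g: correct[g] / total[g] for g in total}

        def meets_min_threshold(acc_by_group, threshold=0.90):
            # every group must clear a minimum accuracy
            return all(a >= threshold for a in acc_by_group.values())

        def roughly_equal(acc_by_group, tolerance=0.02):
            # all groups must have (nearly) the same accuracy
            accs = list(acc_by_group.values())
            return max(accs) - min(accs) <= tolerance

        # Hypothetical speech-recognition results for two accents
        data = ([("accent_a", 1, 1)] * 95 + [("accent_a", 0, 1)] * 5
              + [("accent_b", 1, 1)] * 88 + [("accent_b", 0, 1)] * 12)
        acc = per_group_accuracy(data)
        print(acc, meets_min_threshold(acc), roughly_equal(acc))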

    But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

    This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

    In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

    All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

    The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

    But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
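
    The gap between the two readings of "fairness" is easier to see with numbers. The figures below are invented purely for illustration: under the Fairness Flow case study's definition, flag rates track how much misinformation each side actually posts; under the definition attributed to Kaplan's team, flags are capped so that neither side is affected more, which leaves part of the misinformation untouched.

        # Invented numbers: 1,000 posts reviewed per side, with different base rates
        posts_reviewed = {"conservative": 1000, "liberal": 1000}
        actually_false = {"conservative": 300, "liberal": 150}  # hypothetical fact-check results

        # Definition in the Fairness Flow case study: flag in proportion to what is false,
        # so flag rates may legitimately differ between the two sides.
        proportional_flag_rate = {k: actually_false[k] / posts_reviewed[k]
                                  for k in posts_reviewed}       # {'conservative': 0.3, 'liberal': 0.15}

        # Definition attributed to Kaplan's team: equalize impact across sides,
        # i.e. flag no more conservative posts than liberal ones.
        equalized_flags = min(actually_false.values())           # 150 per side
        missed_misinformation = actually_false["conservative"] - equalized_flags

        print(proportional_flag_rate, "missed under equalized impact:", missed_misinformation)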

    This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, the team used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. Kaplan’s claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

    And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

    Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

    Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

    Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

    The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

    In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

    I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

    Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

    I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

    Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

    I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

    “I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

    Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

    5 Pandemic Mistakes We Keep Repeating (The Atlantic)

    theatlantic.com

    Zeynep Tufekci

    February 26, 2021


    We can learn from our failures.
    Photo illustration showing a Trump press conference, a vaccine syringe, and Anthony Fauci
    Alex Wong / Chet Strange / Sarah Silbiger / Bloomberg / Getty / The Atlantic

    When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.

    One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.

    The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.

    This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.

    Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.

    The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.

    Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.

    The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.

    Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.

    Risk Compensation

    One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.

    The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.

    But time and again, the numbers tell a different story: Even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures—people interested in staying safe are interested in staying safe—and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation—and its implicit assumptions—continue to haunt our approach, in part because there hasn’t been a reckoning with the initial missteps.

    Rules in Place of Mechanisms and Intuitions

    Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.

    In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.

    All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.

    Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.

    Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.

    After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.

    Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.

    Outdoors? It’s the opposite.

    There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: The open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading—the biggest driver of the pandemic—appears to be an exclusively indoor phenomenon. I’ve been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.

    And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.

    We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.

    Scolding and Shaming

    Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.

    Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?

    On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.

    While visible but low-risk activities attract the scolds, other actual risks—in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave—are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it’s almost as if we’ve “designed a public-health response most suitable for higher-income” groups and the “Twitter generation”—stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online—rather than provide the support and conditions necessary for more people to keep themselves safe.

    And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.

    Harm Reduction

    Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex with no protections.

    As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, “When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want.”

    Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor at the Boston University School of Medicine and Infectious Diseases, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.

    Socializing is not a luxury—kids need to play with one another, and adults need to interact. “Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors” is not just a sensible message; it’s a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they’ll do it indoors, out of sight of the scolding.

    And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.

    The Balance Between Knowledge and Action

    Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.

    Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.

    Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.

    And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
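
    For readers unfamiliar with the convention, a small worked example (with invented numbers) shows what rejecting the null hypothesis looks like in practice: the paper does not prove the treatment works; it computes how improbable the observed data would be if the treatment had no effect at all, and rejects that “no effect” possibility when the probability falls below a conventional cutoff.

        from math import comb

        n, successes = 100, 62   # hypothetical trial: 62 of 100 patients improved
        p_null = 0.5             # null hypothesis: improvement is no better than a coin flip

        # One-sided p-value: probability of seeing at least 62 improvements if the null were true
        p_value = sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
                      for k in range(successes, n + 1))

        print(f"p = {p_value:.4f}",
              "-> reject the null at the 0.05 level" if p_value < 0.05 else "-> cannot reject the null")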

    At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.

    Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.

    Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”

    Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.

    As late as April 2020, Anthony Fauci was slammed for being too optimistic when he suggested we might plausibly have vaccines in a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.

    Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percent higher.

    These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.

    And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.

    Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.

    Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.

    So why isn’t this story more widely appreciated?

    Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.

    Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”

    So we didn’t get our initial vaccine jubilation.

    But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.

    People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than with providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.

    Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.

    As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.

    And yet, from the beginning, a good chunk of the public-facing messaging and news articles implied or claimed that vaccines won’t protect you against infecting other people or that we didn’t know if they would, when both were false. I found myself trying to convince people in my own social network that vaccines weren’t useless against transmission, and being bombarded on social media with claims that they were.

    What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.

    A year into the pandemic, we’re still repeating the same mistakes.

    The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only have an enthusiastic audience, but portions of the public engage in these behaviors themselves. Maybe that’s partly because proclaiming the importance of individual actions makes us feel as if we are in the driver’s seat, despite all the uncertainty.

    Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to us. This focus on individual control goes along with something called the “fundamental attribution error”—when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.

    An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.

    You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.

    There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.

    And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.

    The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.

    Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.

    The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.

    Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.

    Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.

    For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One study of California found that line cooks are the highest-risk occupation for dying of COVID-19. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can safely work, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they do pose a real danger. Bill Hanage, associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—which oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.

    New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”

    Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.

    Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.

    But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.

    Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.

    Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.

    And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.

    But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.

    Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.

    Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect easing in rules for public behavior as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why, and also when and how increased vaccination will change more than their individual risks and opportunities, and see us out of this pandemic.

    We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.

    Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”

    Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.

    People with extremist views less able to do complex mental tasks, research suggests (The Guardian)

    theguardian.com

    Natalie Grover, 22 Feb 2021


    Cambridge University team say their findings could be used to spot people at risk from radicalisation
    A key finding of the psychologists was that people with extremist attitudes tended to think about the world in a black and white way. Photograph: designer491/Getty Images/iStockphoto

    Our brains hold clues for the ideologies we choose to live by, according to research, which has suggested that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.

    Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpts ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.

    The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.

    The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participants’ perception and learning, and their ability to engage in complex and strategic mental processing.

    Overall, the researchers found that ideological attitudes mirrored cognitive decision-making, according to the study published in the journal Philosophical Transactions of the Royal Society B.

    A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

    “Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.

    She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”

    Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.

    “For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.

    In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.

    “It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”

    The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.

    The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.

    “What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”
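    To see what that jump in explanatory power means in practice, here is a minimal sketch (in Python, with synthetic data and invented predictor names, not the authors’ analysis): fit a demographics-only regression, fit a second model that also includes cognitive and personality scores, and compare the two R² values.

```python
# Minimal sketch of incremental variance explained (illustrative data only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 334  # roughly the sample size reported in the article

demographics = rng.normal(size=(n, 3))   # e.g. age, gender, education (numeric codes)
cognition = rng.normal(size=(n, 5))      # e.g. task-derived cognitive/personality scores

# Simulated ideological-attitude score: weakly related to demographics,
# more strongly related to cognitive style, echoing the pattern described above.
ideology = (0.3 * demographics[:, 0]
            + 0.8 * cognition[:, 0] + 0.6 * cognition[:, 1]
            + rng.normal(size=n))

base = LinearRegression().fit(demographics, ideology)
X_full = np.hstack([demographics, cognition])
full = LinearRegression().fit(X_full, ideology)

r2_base = r2_score(ideology, base.predict(demographics))
r2_full = r2_score(ideology, full.predict(X_full))
print(f"demographics only:       R^2 = {r2_base:.2f}")
print(f"+ cognition/personality: R^2 = {r2_full:.2f} (incremental = {r2_full - r2_base:.2f})")
```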

    Forgive and set yourself free (AEON)

    aeon.co

    The First Cloud (1888) by William Quiller Orchardson. Courtesy the Tate Gallery/Wikipedia
    Hurts – your own, or those that others have caused you – keep you trapped. Forgiveness therapy can help you shift perspective and move forward with your life

    Nathaniel Wade – 14 August 2020

    When I was 26, my world fell apart. I had just started graduate school and was constantly travelling between Richmond, Virginia and Washington, DC, because my wife was finishing her own graduate degree in a different city from where I studied. On one of those trips, I was doing laundry and found a crumpled note at the bottom of the dryer. It was addressed to my wife by one of her classmates: “We should leave at different times. I’ll meet you at my place later.”

    My wife was having an affair, though it was not confirmed until months later. For me, it was a blow of monumental proportions. I felt betrayed, deceived and even ridiculed. Anger erupted in me and, over days and weeks, that anger turned into a seething tangle of bitterness, confusion and disbelief. We separated with no clear plan for the future.

    Although this pain stabbed me with an intensity I had never felt before, I was not the only one going through it. Many people experience similar hurts, and far worse, in their lives. Being in a relationship often means being mistreated, hurt or betrayed. As people, we frequently suffer injustices and relationship difficulties. One of the ways humans have developed to deal with this pain is through forgiveness. But what is forgiveness, and how does it work?

    These were the questions I was working on at the same time as I was going through my separation. I was a graduate student at Virginia Commonwealth University, and the psychologist Everett Worthington was my adviser. Ev is one of the two pioneers of the psychology of forgiveness and, from day one, he had me exploring forgiveness from an academic perspective (I left his office after our first meeting with a half-metre stack of scientific articles to review). I have since become a psychologist and professor of counselling psychology at Iowa State University, specialising in forgiveness as part of the psychotherapy process.

    The early work produced by Worthington and me, and by other researchers, identified what forgiveness was not. Robert Enright of the University of Wisconsin-Madison, another pioneer of the psychology of forgiveness, was instrumental in this work. For example, he and his colleagues distinguished between forgiving and condoning, excusing or ignoring an offence. For true forgiveness to occur, they argued, there must be a real offence or hurt, with real consequences. A good illustration is the clients whom Enright and one of his students, Suzanne Freedman (now a professor at the University of Northern Iowa), described in an article: women survivors of childhood incest. For true forgiveness to occur in this context, they argued, the women first needed to acknowledge that a real harm had been inflicted on them as children. Denying their own pain or ignoring the atrocity would not be forgiveness. And forgiveness, if it came, would come only after working through the difficult reality of what had happened. Over many months and through challenging personal work, the women in the study resolved much of the fear, bitterness, anger, confusion and hurt, and reached a remarkable level of peace and resolution about the earlier abuse.

    Another major question that quickly became apparent in the research was whether or not reconciliation needed to be part of forgiveness. For academics and therapists like me, interested in helping people reach forgiveness for often serious offences, such as marital infidelity or past abuse, forgiveness is restricted to an internal process. On this view, forgiveness does not necessarily include reconciliation; rather, it is the internal process by which someone resolves the bitterness and hurt and moves toward something more positive regarding the person who offended them, such as empathy or love. Reconciliation, by contrast, is a process by which people re-establish a trusting relationship with someone who has hurt them. This distinction became fundamental to my own healing.

    Although this distinction is important, it does not mean that reconciliation is not a valuable option for those of us who see forgiveness this way. Instead, reconciliation becomes a separate process, independent of forgiveness but important and valuable in its own right. This was a considerable balm for me in the months following my separation. Despite the pain, anger and confusion I still felt months later, I knew I would want to pursue forgiveness at some point in the future. I did not want my bitterness about the past to contaminate my future happiness in loving relationships. I did not want to carry that burden for the rest of my life. Instead, I imagined a time when I would want to set it aside and move on. My real fear, though, was that by forgiving I would necessarily have to reconcile with my wife or, alternatively, that if I did not want to reconcile, I would never be rid of the anger. Seeing forgiveness as a process separate from reconciliation opened up new options. I understood then that I could forgive or not, and I could reconcile or not.

    A similar process has occurred for many clients I have worked with. For example, I remember the palpable relief I sensed in a group of people I was treating when I raised the difference between forgiveness and reconciliation. The members of that group were struggling with a range of harms, from being robbed of money by an ex to affairs and other painful experiences. When I presented the possible distinction between forgiveness and reconciliation and we discussed how it might play out in their own experiences, I felt a collective sigh. A weight was lifted from the participants’ shoulders simply by understanding that forgiving does not necessarily mean reconciling. The group members felt freer, and this helped their forgiveness processes in new and rich ways.

    For example, Jo (not her real name) was suffering over a fiancé who had stolen $10,000 from her and disappeared. Obviously, there was no way for Jo to work on reconciliation even if she had wanted to, and yet, with this distinction, she could see how she might still move forward with forgiveness.

    Maria, on the other hand, who was working to forgive her adult daughter for the things that had hurt her, wanted to keep the relationship; she was very interested in reconciliation. Understanding the difference helped her see that she could work on both forgiveness and reconciliation in different ways to help heal her relationship with her daughter.

    In short, a proper understanding seems to help people embrace forgiveness, and it opens up new possibilities for healing and growth. But how does it work, and how can people use it for their own benefit?

    I have spent most of my academic career trying to answer that question. Specifically, I have studied ways to help people forgive others when they have trouble doing so. The science here is still very young, but there appears to be a common core of interventions that help people move toward resolving their hurts.

    The first is a strategy tried and tested in nearly every form of psychotherapy: sharing one’s story in a safe, non-judgmental setting. Nearly all established forgiveness interventions prescribe a time for sharing the hurt or offence. This is particularly powerful in a group setting, in which participants share their different experiences with one another, witness each other’s pain and support one another. Telling one’s story one-on-one is also effective, however, in a context where no one tries to give advice, minimise negative feelings or stoke the anger (avoiding reactions such as “yes, he is the worst person in the world!”). Often, in our forgiveness programmes, participants tell us that one of the most important and effective parts is the opportunity to share with others what happened to them. They say the most helpful part is often “knowing that others have had similar difficulties”, “being able to vent, to say things there that couldn’t be said anywhere else” and “feeling heard, really understood, and being able to get it off my chest”.

    This reaction is understandable, given how hard it can be to talk about times when we were hurt or wronged. For some, sharing is difficult because victims of mistreatment often feel shame and humiliation about their situation. Few people want to openly share the moments when they were weak or mistreated, betrayed or rejected. These are stories of vulnerability. Beyond the shame people feel, there is often a desire to avoid the pain attached to the hurt: if I share it, I will have to relive the pain, and maybe I won’t be able to handle it. Interventions that can help people overcome these obstacles, share their pain and receive support can go a long way toward helping them recover.

    After a full retelling of the story, most interventions offer time for people to consider the offender’s point of view. The goal is usually to help people develop understanding, or even empathy, for the person who hurt them. There is great power in empathy, though there are also dangers involved.

    Three years after finding that crumpled note, I filed for divorce and moved on with a new spirit of forgiveness

    Done well, this part of the intervention helps people broaden their perspective and gain new awareness of the complexities of the events surrounding their hurts. It can lead them to a wider view of what happened, making the offence look less like malice or sadism and more like a complex situation in which someone made harmful or bad decisions. This shift in perspective and understanding can open the door to forgiveness. An excellent example is the work of Frederic Luskin, director of the Stanford Forgiveness Project, and the Reverend Byron Bland, a chaplain at Palo Alto University. In 2000, they brought together Protestants and Catholics from Northern Ireland who had lost relatives to the religious violence in that country, and ran a week-long forgiveness workshop at Stanford University in California. Much of that experience involved helping each group see the other in a more human light, let go of the bitterness attached to the other group, and draw on empathy to move toward forgiveness. As one participant who had lost his father put it: “For years I resented Catholics, until I came to Stanford.”

    Of course, done poorly or without safeguards, trying to develop empathy can slide into blaming the victim, encouraging those who have been hurt to question or minimise their feelings and allowing others to hurt them again in the future. The important and difficult part of this process is helping people hold on to the legitimacy of their pain while exploring other points of view. The goal is to help people accept their feelings as understandable and their reactions as justified, even as they develop a more nuanced appreciation of the offending person’s perspective. This takes time, and often should not be attempted until a considerable period has passed since the offence. How much time depends on many factors, such as the severity of the hurt and the relationship one has with the person who caused it.

    In my own forgiveness journey, sharing the experience and developing empathy were of great value. I received considerable help from several relatives and friends and from an attentive therapist who listened to my story without judging what I should or should not do. Instead, they all heard me out, supported me in my pain and let me express myself freely. My best friend bore the brunt of it. We had planned a beach trip for the same summer I found that note to my wife. I confronted her shortly before the trip, and she admitted to the affair for the first time just before my friend and I set off. I spent two days on a beach in North Carolina spewing out my anger and confusion, sharing story after story of all the small deceptions and misunderstandings I was only now piecing together. How he put up with it all, I do not know. But for me it was an initial unburdening that helped me move toward eventual forgiveness.

    The next important part of my forgiveness journey was building empathy for my ex-wife. That did not happen right away. In fact, it took many years before I was able to develop a new perspective on what had happened. It took that kind of distance before I became humble enough to see how I myself had contributed to the end of the relationship. I saw my part. I saw how she may have felt trapped by me, by family and by friends into entering a marriage that looked enviable to outsiders but most likely was never entirely comfortable for her. I began to see how those forces may have influenced her to make the choices she made. I can now feel for her, and for how difficult and confusing it all must have been, and I can see that she probably had no intention or desire to hurt me. She felt trapped and reacted to that experience. Removed from all of it and far from the pain I once felt, I can say that I genuinely wanted what was best for her. I hoped she would have a full life. In the end, I chose to forgive my wife, and I chose not to reconcile. Three years after finding that crumpled note in the dryer, I decided to file for divorce and moved on with a new spirit of forgiveness and peace.

    Beyond helping people forgive others, researchers have also begun exploring ways to help people forgive themselves. Marilyn Cornish, a counselling psychologist at Auburn University in Alabama, and I developed one such intervention, based on a broad four-step model. The steps are: responsibility, remorse, restoration and renewal. We focused this intervention on helping people who were carrying heavy guilt for having hurt others.

    The general approach of our intervention is to help people take appropriate responsibility for the offence or hurt they caused, identifying the ways in which they are to blame for the other person’s pain. Out of that responsibility, they are encouraged to identify and express the remorse they feel. We believe it is healthy to embrace our guilt and to place that feeling in a realistic context. From this point, it becomes possible to move on to restoration. In this step, the person is encouraged to make amends, to repair the damage done to others and to their relationships, and to recommit to values or standards they may have violated in hurting others. Finally, the person is able to move on to renewal, which we understand as replacing guilt and self-condemnation with renewed self-respect and self-compassion. This renewal is appropriate only after a true accounting of the offence. Once that has been done, it is beneficial for the person to move toward a renewed sense of self-acceptance and forgiveness.

    Forgiving herself helped her face her children more honestly and restore her relationship with them

    We tested this intervention in a clinical study. We invited people who had hurt others and wanted to forgive themselves to take part in an eight-week programme of individual counselling. Of the 21 people who completed the study, 12 received the treatment immediately and nine received it after a period on a waiting list. Those who received the treatment immediately reported significantly greater self-forgiveness and significantly less self-condemnation and psychological distress than those on the waiting list. In fact, after controlling for their self-condemnation and self-forgiveness, the average person who received the treatment was more self-forgiving than roughly 90 per cent of the people on the waiting list. Moreover, once those on the waiting list received the treatment, their changes in self-condemnation, self-forgiveness and psychological distress matched those of the treatment group.
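    A side note on how to read that “90 per cent” figure: under a normality assumption it corresponds to a standardized effect size of roughly d ≈ 1.3 (Cohen’s U3). The short sketch below is my own illustration of that conversion, not the study’s analysis.

```python
# Converting between a "better than X% of the comparison group" claim and Cohen's d,
# assuming roughly normal score distributions (an assumption, not from the study).
from scipy.stats import norm

u3 = 0.90                        # share of the wait-list group below the treated mean
cohens_d = norm.ppf(u3)          # standardized mean difference implied by that share
print(f"implied Cohen's d ≈ {cohens_d:.2f}")   # ≈ 1.28

# Reverse direction: a given d implies the share of the comparison group
# that the average treated person exceeds (Cohen's U3).
d = 1.28
print(f"U3 for d = {d}: {norm.cdf(d):.0%}")    # ≈ 90%
```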

    Several months after the study ended, I received an email from one of the clients. I will call her Izzie. She wrote to thank us for the counselling; she said it had changed her life. Izzie had entered the study because she was struggling with the implications of a past extramarital affair. Besides feeling alone and disconnected from her family as a result of the divorce that followed, Izzie still wrestled with the shame and guilt of her actions. That shame led her to withdraw from her children, and then to feel more guilt and shame over her inability to care for them and be the mother she wanted to be. In her email, she described how the self-forgiveness process helped her take responsibility for the events in an appropriate way and work through the remorse to renew her relationships. She told us how she was able to face her children more honestly and have a restored relationship with them. Having invested so much time in her own self-condemnation, she was now free to relate to them in a new way and to be more of the mother she wanted, and they needed, her to be.

    Forgiveness, of others and of oneself, can be a powerful, life-changing process. It can alter the trajectory of a relationship, or even of a person’s life. It is not the only possible response to being hurt or to hurting others, but it is an effective way of managing the inevitable moments of conflict, disappointment and pain in our lives. Forgiveness holds both the reality of the offence and the empathy and compassion needed to move forward. True forgiveness does not run from responsibility, restitution or justice. By definition, it acknowledges that something painful, even wrong, was done. At the same time, forgiveness helps us embrace something beyond the immediate reaction of anger and hurt, and beyond the lingering bitterness that can follow. Forgiveness encourages a deeper, more compassionate understanding that we are all flawed in our different ways, and that we all need to be forgiven sometimes.

    Is it better to give than receive? (Science Daily)

    Children who experienced compassionate parenting were more generous than peers

    Date: December 1, 2020

    Source: University of California – Davis

    Summary: Young children who have experienced compassionate love and empathy from their mothers may be more willing to turn thoughts into action by being generous to others, a University of California, Davis, study suggests. Lab studies were done of children at ages 4 and 6.


    Child holding present (stock image). Credit: © ulza / stock.adobe.com

    Young children who have experienced compassionate love and empathy from their mothers may be more willing to turn thoughts into action by being generous to others, a University of California, Davis, study suggests.

    In lab studies, children tested at ages 4 and 6 showed more willingness to give up the tokens they had earned to fictional children in need when two conditions were present — if they showed bodily changes when given the opportunity to share and had experienced positive parenting that modeled such kindness. The study initially included 74 preschool-age children and their mothers. They were invited back two years later, resulting in 54 mother-child pairs whose behaviors and reactions were analyzed when the children were 6.

    “At both ages, children with better physiological regulation and with mothers who expressed stronger compassionate love were likely to donate more of their earnings,” said Paul Hastings, UC Davis professor of psychology and the mentor of the doctoral student who led the study. “Compassionate mothers likely develop emotionally close relationships with their children while also providing an early example of prosocial orientation toward the needs of others,” researchers said in the study.

    The study was published in November in Frontiers in Psychology: Emotion Science. Co-authors were Jonas G. Miller, Department of Psychiatry and Behavioral Sciences, Stanford University (who was a UC Davis doctoral student when the study was written); Sarah Kahle of the Department of Psychiatry and Behavioral Sciences, UC Davis; and Natalie R. Troxel, now at Facebook.

    In each lab exercise, after attaching a monitor to record children’s heart-rate activity, the examiner told the children they would be earning tokens for a variety of activities, and that the tokens could be turned in for a prize. The tokens were put into a box, and each child eventually earned 20 prize tokens. Then before the session ended, children were told they could donate all or part of their tokens to other children (in the first instance, they were told these were for sick children who couldn’t come and play the game, and in the second instance, they were told the children were experiencing a hardship.)

    At the same time, mothers answered questions about their compassionate love for their children and for others in general. The mothers selected phrases in a survey such as:

    • “I would rather engage in actions that help my child than engage in actions that would help me.”
    • “Those whom I encounter through my work and public life can assume that I will be there if they need me.”
    • “I would rather suffer myself than see someone else (a stranger) suffer.”

    Taken together, the findings showed that children’s generosity is supported by the combination of their socialization experiences — their mothers’ compassionate love — and their physiological regulation, and that these work like “internal and external supports for the capacity to act prosocially that build on each other.”

    The results were similar at ages 4 and 6.

    In addition to observing the children’s propensity to donate their game earnings, the researchers observed that being more generous also seemed to benefit the children. At both ages 4 and 6, the physiological recording showed that children who donated more tokens were calmer after the activity, compared to the children who donated no or few tokens. They wrote that “prosocial behaviors may be intrinsically effective for soothing one’s own arousal.” Hastings suggested that “being in a calmer state after sharing could reinforce the generous behavior that produced that good feeling.”

    This work was supported by the Fetzer Institute, Mindfulness Connections, and the National Institute of Mental Health.


    Story Source:

    Materials provided by University of California – Davis. Original written by Karen Nikos-Rose. Note: Content may be edited for style and length.


    Journal Reference:

    1. Jonas G. Miller, Sarah Kahle, Natalie R. Troxel, Paul D. Hastings. The Development of Generosity From 4 to 6 Years: Examining Stability and the Biopsychosocial Contributions of Children’s Vagal Flexibility and Mothers’ Compassion. Frontiers in Psychology, 2020; 11 DOI: 10.3389/fpsyg.2020.590384

    Hoarding and herding during the COVID-19 pandemic (Science Daily)

    The coronavirus pandemic has triggered some interesting and unusual changes in our buying behavior

    Date: September 10, 2020

    Source: University of Technology Sydney

    Summary: Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper.

    Rushing to stock up on toilet paper before it vanished from the supermarket aisle, stashing cash under the mattress, purchasing a puppy or perhaps planting a vegetable patch — the COVID-19 pandemic has triggered some interesting and unusual changes in our behavior.

    Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper published in the Journal of Behavioral Economics for Policy.

    ‘Hoarding in the age of COVID-19’ by behavioral economist Professor Michelle Baddeley, Deputy Dean of Research at the University of Technology Sydney (UTS) Business School, examines a range of cross-disciplinary explanations for hoarding and other behavior changes observed during the pandemic.

    “Understanding these economic, social and psychological responses to COVID-19 can help governments and policymakers adapt their policies to limit negative impacts, and nudge us towards better health and economic outcomes,” says Professor Baddeley.

    Governments around the world have implemented behavioral insights units to help guide public policy, and influence public decision-making and compliance.

    Hoarding behavior, where people collect or accumulate things such as money or food in excess of their immediate needs, can lead to shortages, or in the case of hoarding cash, have negative impacts on the economy.

    “In economics, hoarding is often explored in the context of savings. When consumer confidence is down, spending drops and households increase their savings if they can, because they expect bad times ahead,” explains Professor Baddeley.

    “Fear and anxiety also have an impact on financial markets. The VIX ‘fear’ index of financial market volatility saw a dramatic 564% increase between November 2019 and March 2020, as investors rushed to move their money into ‘safe haven’ investments such as bonds.”

    While shifts in savings and investments in the face of a pandemic might make economic sense, the hoarding of toilet paper, which also occurred across the globe, is more difficult to explain in traditional economic terms, says Professor Baddeley.

    Behavioural economics reveals that our decisions are not always rational or in our long term interest, and can be influenced by a wide range of psychological factors and unconscious biases, particularly in times of uncertainty.

    “Evolved instincts dominate in stressful situations, as a response to panic and anxiety. During times of stress and deprivation, not only people but also many animals show a propensity to hoard.”

    Another instinct that can come to the fore, particularly in times of stress, is the desire to follow the herd, says Professor Baddeley, whose book ‘Copycats and Contrarians’ explores the concept of herding in greater detail.

    “Our propensity to follow others is complex. Some of our reasons for herding are well-reasoned. Herding can be a type of heuristic: a decision-making short-cut that saves us time and cognitive effort,” she says.

    “When other people’s choices might be a useful source of information, we use a herding heuristic and follow them because we believe they have good reasons for their actions. We might choose to eat at a busy restaurant because we assume the other diners know it is a good place to eat.

    “However, numerous experiments from social psychology also show that we can be blindly susceptible to the influence of others. So when we see others rushing to the shops to buy toilet paper, we fear missing out and follow the herd. It then becomes a self-fulfilling prophecy.”
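    To make the herding idea concrete, here is a minimal simulation — my own illustration of the standard information-cascade logic, with invented parameter values, not Professor Baddeley’s model. Each shopper has a weak private signal about whether stocking up is worthwhile, but also sees how many earlier shoppers bought; once the visible behaviour clearly outweighs one private signal, later shoppers follow the crowd and the cascade locks in.

```python
# Toy information-cascade simulation (illustrative assumptions throughout).
import random

random.seed(1)
TRUE_NEED = False          # stocking up is not actually necessary
SIGNAL_ACCURACY = 0.6      # private signals are only mildly informative

def decide(prior_buys, prior_passes, signal_says_buy):
    """Follow the visible majority once it clearly outweighs one private signal."""
    if abs(prior_buys - prior_passes) >= 2:
        return prior_buys > prior_passes
    return signal_says_buy

buys = passes = 0
for shopper in range(20):
    signal_correct = random.random() < SIGNAL_ACCURACY
    signal_says_buy = TRUE_NEED if signal_correct else not TRUE_NEED
    if decide(buys, passes, signal_says_buy):
        buys += 1
    else:
        passes += 1

# Once the first few decisions happen to agree, later shoppers ignore their own
# signal, so the run ends up nearly unanimous in whichever direction it tipped.
print(f"buys: {buys}, passes: {passes}")
```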

    Behavioral economics also highlights the importance of social conventions and norms in our decision-making processes, and this is where rules can serve an important purpose, says Professor Baddeley.

    “Most people are generally law abiding but they might not wear a mask if they think it makes them look like a bit of a nerd, or overanxious. If there is a rule saying you have to wear a mask, this gives people guidance and clarity, and it stops them worrying about what others think.

    “So the normative power of rules is very important. Behavioral insights and nudges can then support these rules and policies, to help governments and business prepare for second waves, future pandemics or other global crises.”


    Story Source:

    Materials provided by University of Technology Sydney. Original written by Leilah Schubert. Note: Content may be edited for style and length.


    Journal Reference:

    1. Michelle Baddeley. Hoarding in the age of COVID-19. Journal of Behavioral Economics for Policy, 2020; 4(S): 69-75 [abstract]

    ‘Wild West’ mentality lingers in modern populations of US mountain regions (Phys.org)

    phys.org

    by University of Cambridge. September 7, 2020

    Mountainous territory. Credit: Pixabay/CC0 Public Domain

    When historian Frederick Jackson Turner presented his famous thesis on the US frontier in 1893, he described the “coarseness and strength combined with acuteness and acquisitiveness” it had forged in the American character.

    Now, well into the 21st century, researchers led by the University of Cambridge have detected remnants of the pioneer personality in US populations of once inhospitable mountainous territory, particularly in the Midwest.

    A team of scientists algorithmically investigated how landscape shapes psychology. They analyzed links between the anonymised results of an online personality test completed by over 3.3 million Americans, and the “topography” of 37,227 US postal—or ZIP—codes.

    The researchers found that living at both a higher altitude and an elevation relative to the surrounding region—indicating “hilliness”—is associated with a distinct blend of personality traits that fits with “frontier settlement theory”.
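    The basic shape of that analysis can be sketched in a few lines (illustrative only, with hypothetical column names and toy numbers; the published models also adjust for demographic and regional covariates): aggregate individual Big Five scores to the ZIP-code level, join them to each ZIP’s elevation, and look at the association.

```python
# Illustrative sketch -- column names, values and the simple correlation are
# stand-ins for the study's much larger dataset and fuller statistical models.
import pandas as pd

# Hypothetical inputs: one row per test-taker, and one row per ZIP code.
responses = pd.DataFrame({
    "zip": ["80302", "80302", "10001", "10001", "59715", "59715", "33101", "33101"],
    "openness":      [4.1, 3.8, 3.2, 3.5, 4.0, 3.9, 3.3, 3.4],
    "agreeableness": [2.9, 3.1, 3.8, 3.6, 3.0, 2.8, 3.7, 3.9],
})
topography = pd.DataFrame({
    "zip": ["80302", "10001", "59715", "33101"],
    "elevation_m": [1655, 10, 1461, 2],
})

# Aggregate individual scores to ZIP-code level, then attach each ZIP's elevation.
zip_means = responses.groupby("zip").mean(numeric_only=True)
merged = zip_means.join(topography.set_index("zip"))

# Association between elevation and each ZIP-level trait mean.
print(merged.corr()["elevation_m"])
```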

    “The harsh and remote environment of mountainous frontier regions historically attracted nonconformist settlers strongly motivated by a sense of freedom,” said researcher Friedrich Götz, from Cambridge’s Department of Psychology.

    “Such rugged terrain likely favored those who closely guarded their resources and distrusted strangers, as well as those who engaged in risky explorations to secure food and territory.”

    “These traits may have distilled over time into an individualism characterized by toughness and self-reliance that lies at the heart of the American frontier ethos” said Götz, lead author of the study.

    “When we look at personality across the whole United States, we find that mountainous residents are more likely to have psychological characteristics indicative of this frontier mentality.”

    Götz worked with colleagues from the Karl Landsteiner University of Health Sciences, Austria, the University of Texas, US, the University of Melbourne in Australia, and his Cambridge supervisor Dr. Jason Rentfrow. The findings are published in the journal Nature Human Behaviour.

    The research uses the “Big Five” personality model, standard in social psychology, with simple online tests providing high-to-low scores for five fundamental personality traits of millions of Americans.

    The mix of characteristics uncovered by the study’s authors consists of low levels of “agreeableness”, suggesting mountainous residents are less trusting and forgiving—traits that benefit “territorial, self-focused survival strategies”.

    Low levels of “extraversion” reflect the introverted self-reliance required to thrive in secluded areas, and a low level of “conscientiousness” lends itself to rebelliousness and indifference to rules, say researchers.

    “Neuroticism” is also lower, suggesting an emotional stability and assertiveness suited to frontier living. However, “openness to experience” is much higher, and the most pronounced personality trait in mountain dwellers.

    “Openness is a strong predictor of residential mobility,” said Götz. “A willingness to move your life in pursuit of goals such as economic affluence and personal freedom drove many original North American frontier settlers.”

    “Taken together, this psychological fingerprint for mountainous areas may be an echo of the personality types that sought new lives in unknown territories.”

    The researchers wanted to distinguish between the direct effects of physical environment and the “sociocultural influence” of growing up where frontier values and identities still hold sway.

    To do this, they looked at whether mountainous personality patterns applied to people born and raised in these regions that had since moved away.

    The findings suggest some “initial enculturation”, say the researchers, as those who left their early mountain home are still consistently less agreeable, conscientious and extraverted, although no such effects were observed for neuroticism and openness.

    The scientists also divided the country at the edge of St. Louis—“gateway to the West”—to see if there is a personality difference between those in mountains that made up the historic frontier, such as the Rockies, and eastern ranges such as the Appalachians.

    While mountains continue to be a “meaningful predictor” of personality type on both sides of this divide, key differences emerged. Those in the east are more agreeable and outgoing, while western ranges are a closer fit for frontier settlement theory.

    In fact, the mountainous effect on high levels of “openness to experience” is ten times as strong in residents of the old western frontier as in those of the eastern ranges.

    The findings suggest that, while ecological effects are important, it is the lingering sociocultural effects—the stories, attitudes and education—in the former “Wild West” that are most powerful in shaping mountainous personality, according to scientists.

    They describe the effect of mountain areas on personality as “small but robust”, but argue that complex psychological phenomena are influenced by many hundreds of factors, so small effects are to be expected.

    “Small effects can make a big difference at scale,” said Götz. “An increase of one standard deviation in mountainousness is associated with a change of around 1% in personality.”

    “Over hundreds of thousands of people, such an increase would translate into highly consequential political, economic, social and health outcomes.”



    More information: Physical topography is associated with human personality, Nature Human Behaviour (2020). DOI: 10.1038/s41562-020-0930-x , www.nature.com/articles/s41562-020-0930-x

    Citation: ‘Wild West’ mentality lingers in modern populations of US mountain regions (2020, September 7) retrieved 8 September 2020 from https://phys.org/news/2020-09-wild-west-mentality-lingers-modern.html
