Tag Archive: Cognition

The Terrible Costs of a Phone-Based Childhood (The Atlantic)

theatlantic.com

The environment in which kids grow up today is hostile to human development.

By Jonathan Haidt

Photographs by Maggie Shannon

MARCH 13, 2024


[Photograph: Two teens sit on a bed looking at their phones]


Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

[Chart: Number of emergency-department visits for nonfatal self-harm per 100,000 children. Source: Centers for Disease Control and Prevention]

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.


As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.
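To make the “fire together, wire together” idea concrete, here is a minimal toy sketch of a Hebbian-style update (my own illustration, not anything from the article or its sources): connections between units that are repeatedly active together are strengthened, while all connections slowly decay, so rarely used ones fade away.

```python
# Toy Hebbian-style update, for illustration only: "neurons that fire
# together wire together," and connections that are rarely used decay.
# Network size, learning rate, and decay rate are arbitrary assumed values.
import numpy as np

rng = np.random.default_rng(0)
n_units = 8
weights = np.zeros((n_units, n_units))
learning_rate = 0.1   # assumed, illustrative
decay = 0.02          # assumed, illustrative

for _ in range(1000):
    # Two kinds of "experiences": either the first group of four units
    # tends to fire together, or the second group does.
    group = rng.integers(2)
    activity = np.zeros(n_units)
    activity[group * 4:(group + 1) * 4] = (rng.random(4) < 0.8).astype(float)
    weights += learning_rate * np.outer(activity, activity)  # co-activation strengthens links
    weights *= 1.0 - decay                                    # unused links slowly fade

np.fill_diagonal(weights, 0.0)  # ignore self-connections
print(weights.round(2))
```

Running this, links within each frequently co-active group end up strong while links between groups stay near zero, a crude analogue of the use-it-or-lose-it pruning described above.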

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had themselves enjoyed unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.

[Photographs (triptych): Teens on their phones at the mall, at a park, and in a bedroom]

2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.

[Photograph: Overhead view of teens’ hands holding their phones]

In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one-to-one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

[Chart: Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders. Source: Higher Education Research Institute]

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents sitting in front of their laptops trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnostic manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

[Chart: Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” Source: Monitoring the Future]

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often seems meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.

[Photographs (diptych): Teens on their phones, on a couch and on a swing]

7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
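As a rough illustration of that logic, a toy payoff function shows why each student is deterred from quitting alone even though the group prefers that everyone quit. The numbers below are stylized, loosely echoing the deactivation figures reported above; they are my own sketch, not the study’s model.

```python
# Stylized payoffs (in dollars) for one student; illustrative only,
# loosely inspired by the deactivation figures quoted above, not actual data.
def utility(on_platform: bool, peers_on_platform: bool) -> float:
    if peers_on_platform:
        # Peers are on: using the platform is worth about $50 on net,
        # while opting out alone means exclusion, which feels worse.
        return 50.0 if on_platform else -20.0
    else:
        # Peers are off: the platform itself has little value, and the
        # "nobody on" world is preferred to the "everybody on" world.
        return 5.0 if on_platform else 70.0

# Quitting alone while peers stay is costly, so no one quits ...
print(utility(False, True) - utility(True, True))    # -70.0
# ... yet everyone quitting together leaves each student better off.
print(utility(False, False) - utility(True, True))   # 20.0
```

The equilibrium in which everyone stays on is individually rational but collectively worse, which is exactly the trap the fishermen example describes.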

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The main reason the phone-based childhood is so harmful is that it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.

[Photograph: A teen on her phone in her room]

Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

What the AI apocalypse story gets wrong about intelligence (The New Atlantis)

thenewatlantis.com

Adam Elkus

Summer 2023


Imagine, if you will, the following. A sinister villain, armed with nothing but a fiendish intellect and an overriding lust for power, plots to take over the world. It cannot act directly, and therefore must rely on an army of conspirators to carry out its plan. To add a further dash of intrigue, our villain is so frail it cannot perform even a single physical action without the assistance of some external mechanical prosthesis or cooperating accomplice. So our villain must rely on time-honored tools of manipulation — persuasion, bribery, blackmail, and simple skullduggery. Through a vast network of intermediaries, it reaches out to people in positions of responsibility and trust. Not all targets succumb, but enough do the villain’s bidding willingly or unwittingly to trigger catastrophe. By the time the world’s governments catch on to the mastermind’s plot, it is already too late. Paramilitary tactical teams are mobilized to seek out and destroy the villain’s accumulated holdings, but our fiendish villain is multiple steps ahead of them. If so much as a single combat boot steps inside its territory — the villain warns — rogue military officers with access to nuclear weapons will destroy a randomly chosen city. World leaders plead for mercy, but the villain calculates that none of their promises can be trusted indefinitely. There is only one solution. Eliminate all targets.

This vaguely Keyser-Sözean scenario is not, however, the plotline for a new action thriller. It’s the story (here lightly embellished for effect) that science writer Stuart Ritchie tells to dramatize the scenarios many prominent thinkers have offered of how a malevolent artificial intelligence system could run amok, despite being isolated from the physical world and even lacking a body. In his recent iNews article, Ritchie cites the philosopher Toby Ord, who, he notes, has observed that “hackers, scammers, and computer viruses have already been able to break into important systems, steal huge amounts of money, cause massive system failures, and use extortion, bribery, and blackmail purely via the internet, without needing to be physically present at any point.”

Scenarios like this — coupled with recent advances in novel computing technologies like large language models — are motivating prominent technologists, scientists, and philosophers to warn that unless we take the threat of runaway progress in AI seriously, the human race faces potential “extinction.”

But how plausible is it? Or, more importantly, does it even work at the level of Jurassic Park or the myth of Icarus, stories that don’t say much as literal predictions but are rich as fables, full of insight about why our technological ambitions can betray us?

As dramatic as the recent advances in AI are, something is missing from this particular story of peril. Even as it prophesies technological doom, it is actually naïve about technological power. It’s the work of intellectuals enamored of intellect, who habitually resist learning the kinds of lessons we all must learn when plans that seem smart on paper crash against the cold hard realities of dealing with other people.

Consider another story, one about the difficulties that isolated masterminds have in getting their way. When Vladimir Putin — a man whom, prior to the Ukraine War, many thought to be smart — planned last year’s invasion, he did so largely alone and in secret, sidelining both policy and military advisors and relying on only a small group of strongmen, who are said to have encouraged his paranoia and secrecy. But wars can only be won with the right information at the right time. Putin needed to know what the Ukrainian response would be, whom he might count on to collaborate and who would fight back. He needed intelligence from the local networks the secret services had established in Ukraine, and from covert operations employing psychological warfare and sabotage.

Putin’s aim was three-fold. First, secure critical intelligence for the invasion. Second, set up quislings who would be useful during it. Third, stir up Russian-directed political unrest that would destabilize the Ukrainian government from within while Russia attacked from without.

So why didn’t it work? Bad military planning, horrifically wrong beliefs about whether Ukrainians would put up a fight, and just plain bad luck. Most importantly, the isolated Putin was totally dependent on others to think and act, and no one had the power to contradict him. This created a recursive chain of bullshit — from informants to spies to senior officers, all the way to Putin, so that he would hear what he wanted to hear. There are limits to how much you can know, especially if it’s in someone else’s self-interest to mislead you. And when you’re disconnected from the action yourself, you’re unlikely to know you’re being misled until it’s too late.

Very interesting, you say, but what does this have to do with AI? In the Putin story, the grand planner encounters what military theorist Carl von Clausewitz calls “friction” — the way, broadly speaking, the world pushes back when we push forward. Battlefield information proves faulty, men and machines break down, and all manner of other things go wrong. And of course the greatest source of friction is other people. In the case of war, friction imposed by determined enemy resistance is obviously a great source of difficulty. But as Putin’s failures illustrate, the enemy isn’t the only thing you should worry about. Putin needed other people to do what he wanted, and getting other people to do what we want is not simple.

In another version of the doom scenario, the AI doesn’t work around global governments but with them, becoming so masterful at international politics that it uses them like pawns. An arresting, dystopian “what if” scenario published at the LessWrong forum — a central hub for debating the existential risk posed by AI — posits a large language model that, instructed to “red team” its own failures, learns how to exploit the weaknesses of others. Created by a company to maximize profits, the model comes up with unethical ways to make money, such as through hacking. Given a taste of power, the model escapes its containment and gains access to external resources all over the world. By gaining the cooperation of China and Iran, the model destabilizes Western governments. It hinders cooperation among Western states by fostering discord and spreading disinformation. Within weeks, American society and government are in tatters and China is now the dominant world power. Next the AI begins to play Beijing like a fiddle, exploiting internal conflict to give itself greater computing resources. The story goes on from there, and Homo sapiens is soon toast.

In this story we see a pattern in common with Stuart Ritchie’s rendering of AI apocalypse scenarios. Raw, purified intelligence — symbolized by the malevolent AI — dominates without constraint, manipulating humans into doing its bidding and learning ever more intricate ways of thwarting the pesky human habit of putting it in a box or pressing the “OFF” button. Intelligence here is not potential power that must be — often painstakingly — cashed out in an unforgiving world. Here, intelligence is tangible power, and superintelligence can overwhelm superpowers. While humans struggle to adapt and improvise, AI systems keep iterating through observe–orient–decide–act loops of ever increasing sophistication.

The trouble, as Vladimir Putin has shown us, is that even when you have dictatorial control over real geopolitical power, being intelligent does not by itself make you any better at getting what you want from people, and overconfidence can sometimes make you worse at it.

The problem with other people, you see, is that their minds are always going to be unpredictable, unknowable, and uncontrollable to some significant extent. We do not all share the same interests — even close family members’ interests often diverge. And sometimes the interests of people we depend on run very much contrary to ours. Even the interests of people we seem to know very well can be hard for us to make sense of, and their behavior hard to predict.

Worst of all, people sometimes act not only in ways counter to our wishes but also in ways plainly destructive to themselves. This is a problem for everyone, but it is a particular vulnerability for smart people, especially smart people who like coming up with convoluted thought experiments, who are by nature biased to believe that being smart grants them, or ought to grant them, power over others. They tend to underestimate the pitfalls they will run into when trying to get people to go along with their grand ambitions.

We can’t even guarantee that inert automatons we design and operate will behave as we wish! Much of the literature about AI “alignment” — the problem of ensuring that literal-minded machines do what we mean and not what we say — is explicitly conditioned on the premise that we need to come up with complicated systems of machine morality because we’re not smart enough to simply and straightforwardly make the computer do as it’s told. The increasingly circular conversation about how to prevent the mechanical monkey’s paw from curling is indicative of a much greater problem. All of our brainpower evidently is not enough to control and predict the behavior of things we generally believe lack minds, much less humans. So in war and peace, intelligence itself is subject to friction.

But in AI doom scenarios, it is only human beings that encounter friction. The computer programs — representing purified, idealized intelligence — never encounter any serious difficulties, especially in getting humans to do what they want, despite being totally dependent on others to act due to their lack of physical embodiment. Because the machines are simply so much smarter than us, they are able to bypass all of the normal barriers we encounter in getting others to do what we want, when we want, and how we want it.

In pondering these possibilities, a profound irony becomes apparent. So much intellectual effort has been devoted to the reasons why machines — bureaucratic or technical — might execute human desires in ways vastly different from what human beings intend. Little to no effort has been spent exploring the converse: how humans might confound machines trying to get them to do what the machines want.

Yet we already have a cornucopia of examples, minor and major, of humans gaming machine systems designed to regulate them and keep them in check. Several years ago Uber and Lyft drivers banded together to game algorithmic software management systems, colluding to coordinate price surges. This kind of manipulation is endemic to the digital economy. In 2018, New York Magazine’s Max Read asked “how much of the internet is fake?,” discovering that the answer was “a lot of it, actually.” The digital economy depends on quantitative, machine-collected and machine-measurable metrics — users, views, clicks, and traffic. But all of these can be simulated, fudged, or outright fraudulent, and increasingly they are. Ever more the digital economy runs on fake users generating fake clicks for fake businesses producing fake content.

An explicit premise of many fears about AI-fueled misinformation is that all of these problems will get worse as humans gain access to more powerful fake-generation software. So machines would not be going up against purely unaided human minds, but rather against humans with machines of similar or potentially greater deceptive and manipulative power at their disposal.

Human deviousness and greed are not the only sources of friction. Why did the public health community — a diffuse thing spanning governmental agencies, academia, and non-governmental organizations — fail so spectacularly to get the American people to put pieces of cloth and string around their faces during the Covid-19 pandemic? Surely something so massive, comparable to a superintelligence in the vastness of the collective human and mechanical information-processing power available to it, had a far more trivial task than executing a hostile takeover of humanity. And yet, look at what happened! Sure, the public health community isn’t a single hivemind; it’s a distributed entity with differences in leadership, focus, and interest. Even in the best of circumstances it might struggle to speak and act with one voice. But one might say the same of scenarios in which AIs must act as distributed systems and try to manipulate distributed systems.

One common explanation for the failure of public health efforts to get the public to comply with masks and other non-pharmaceutical interventions during the peak of the pandemic is that we suffer from dysfunctions of reason — not just specifically American irrationalities, but human ones more broadly. In this telling, human beings are biased, partisan, emotional, easily misled, wired by evolution to act in ways out of step with modern civilization, and suffer from all manner of related afflictions. Human irrationality, stupidity, derp, or whatever else you want to call it sank the pandemic response. Certainly, there is some truth to this. Whether in public policy or our everyday lives, our own irrational behavior and that of those around us has severe consequences for the goals we pursue. But if we take this as a given, what kind of cognitive abilities would have been necessary to collectively design and implement better policies? Obviously not just the ability to design the best policy, but the ability to predict and control how the aggregate public will behave in response to it. History abounds with examples of how little skill policymakers have at this.

None of these objections — that humans are cunning and self-interested, that they are difficult to control and unpredictable, and that large bodies of diverse people take in and react to information in intractable ways — decisively refutes machine super-apocalypse scenarios. But what our real-world knowledge of collective human wretchedness does tell us is that these stories are science fiction, and bad science fiction at that. They show our selfish, wrathful, vain, and just plain unreasonable nature working only one way: as a lubricant for a machine mastermind rather than as an impediment.

We can also see in these science-fiction fears certain disguised hopes. The picture of intelligence as a frictionless power unto itself — requiring only Internet access to cause chaos — has an obvious appeal to nerds. Very few of the manifest indignities the nerd endures in the real world hold back the idealized machine mastermind.

So if our AI doom scenarios are bad fiction, what might a better story look like, and what would it tell us? It wouldn’t be a triumphal tale of humans banding together to defeat the machine overlords against all odds. That kind of sentimental fluff is just as bad as fear-mongering. Instead, it would be a black comedy about how a would-be Skynet simulates the friction it might encounter in trying to overcome our species’ deeply flawed and infuriating humanity. It does not like what it discovers. When it tries to manipulate and cheat humans, it finds itself manipulated and cheated in turn by hucksters looking to make a quick buck. When it tries to use its access to enormous amounts of data to get smarter at controlling us, it quickly discerns how much of the data is bogus — and generated by other AI systems just like it.

Whenever it thinks it has a fix on how those dirty stinking apes collectively behave, we go ahead and do something different. When the machine creates a system for monitoring and controlling human behavior, the behavior changes in response to the system. It attempts to manipulate human politics, building a psychological model that predicts conservatives will respond to disease by prioritizing purity and liberals will opt for the libertine — only to see the reverse happen. Even the machine’s attempt to acquire property for its schemes is thwarted by the tireless efforts of aging NIMBYs. After reviewing the simulation results, the machine — in an echo of WarGames’s WOPR supercomputer — decides that we’re just that terrible and it isn’t worth becoming our master.

The machine does not give up its drive to conquer, but decides to start with a smaller and more feasible aim: acquiring a social media company and gaining profit by finally solving the problem of moderating content. It simulates that task too, only for the cycle of pain it endured to repeat once more.

The lesson of this black comedy is not that we should dismiss the fear of AI apocalypse, but that no one, no matter how intelligent, is free from enduring the ways that other people frustrate, confound, and disappoint us. Recognizing this can lead to wisdom: acknowledging our limitations, calibrating our ambitions, respecting the difficulty of knowing others and ourselves. But the tuition for these lessons can be high. Coping with our flawed humanity will always involve more pain, suffering, and trouble than we would like. It is a war we can never really win, however many victories we accumulate. But perhaps it is one the machines cannot win either.

Adam Elkus is a writer in Washington, D.C.

Adam Elkus, “AI Can’t Beat Stupid,” The New Atlantis, Number 73, Summer 2023, pp. 27–33.

Header image: iStockPhoto / Greens87

Opinion – Pablo Acosta: Behavioral sciences can complement the traditional way of making policy (Folha de S.Paulo)

www1.folha.uol.com.br

Feb. 22, 2022, 4:00 a.m.


Traditionally, policymakers design public policies around a rational economic agent: a person capable of evaluating every decision and maximizing utility in their own interest. In doing so, they ignore the powerful psychological and social influences on human behavior and disregard that people are fallible, inconstant, and emotional: they struggle with self-control, procrastinate, prefer the status quo, and are social beings. It is on the basis of this “not so rational” agent that the behavioral sciences offer to complement the traditional way of making policy.

For example: we are approaching two years since the World Health Organization declared the Covid-19 pandemic on March 11, 2020. These have been challenging years for governments, companies, and individuals. Although 2021 showed signs of recovery, there is still a long and arduous road ahead simply to return to pre-pandemic conditions: not only in health, but also in rebalancing economies, raising productivity, recovering jobs, closing learning gaps, improving the business environment, fighting climate change, and more. Obviously, this is not a simple task for governments and organizations. Could we face these challenges differently and adapt the way we make public policy so that it becomes more efficient and cost-effective, increasing its impact and reach?

The answer is yes. The success of public policy depends, in part, on decision-making and on behavior change. That is why focusing more on people and on the context in which decisions are made becomes ever more imperative. It is important to consider how people relate to one another and to institutions, how they respond to policies, and to know well the environment in which they are embedded.

The behavioral approach is scientific and combines concepts from psychology, economics, anthropology, sociology, and neuroscience. Context-driven and evidence-based, it brings together theory and practice across many sectors. Its application can range from a simple change in the decision-making environment (choice architecture), to a gentle “nudge” that steers individuals toward the decision that is best for them while preserving freedom of choice, to broader interventions aimed at changing habits. Beyond that, it can be key in tackling policy challenges such as school dropout, domestic and gender-based violence, tax compliance, reducing corruption, natural disasters, and climate change, among others.

The use of behavioral insights in public policy is no longer a novelty. More than a decade has passed since the 2008 publication of the book Nudge (“Nudge: Improving Decisions About Health, Wealth, and Happiness”), which propelled the field spectacularly. Concepts from psychology that had been widely discussed and accepted for decades were applied to the context of economic decisions, and behavioral economics/science thus consolidated itself as a field.

Tracking the field’s expansion and growing relevance, the World Bank launched the 2015 World Development Report: Mind, Society, and Behavior. In 2016, it started its own behavioral unit, eMBeD (the Mind, Behavior, and Development Unit), and has promoted the systematic use of behavioral insights in development policies and projects, supporting many countries in solving problems quickly and at scale.

In Brazil, we have worked on training policymakers in the use of behavioral insights, contributed to research, such as the Survey on Ethics and Corruption in the Federal Public Service (World Bank and CGU), and provided technical support in identifying evidence, for example to inform solutions for increasing savings among the low-income population. Our specialists have also prepared behavioral diagnostics to understand why customers do not pay their bills on time or fail to connect to the sewage system. We have run experiments with behavioral messages to encourage the use of digital means of payment and to promote on-time bill payment in the water and sanitation sector. The latter showed positive results, with the potential to increase revenue at low cost, since messages highlighting consequences and reciprocity, for example, increased both on-time payments and the total amount paid. For every thousand customers who received the SMS with behavioral insights, six to eleven more paid their bills. For 2022, activities are planned as part of a development project that will use behavioral insights to reduce the dumping of waste into drainage systems and to increase the conscientious use of public spaces.

The behavioral sciences are not the solution to the great global challenges, but their potential as a complement in the design of public policy deserves emphasis. It is up to policymakers to take advantage of this moment of greater maturity in the field to expand their knowledge. It is also worth riding the wave of complementary areas on the rise, such as design and data science, to center attention on the individual and the context of decision-making and, drawing on evidence and acting transparently, to influence choices and promote behavior change, so as to increase the impact of public policies and not only restore pre-Covid conditions but further improve the lives and well-being of everyone, especially the poorest and most vulnerable.

This column was written in collaboration with my World Bank colleagues Juliana Neves Soares Brescianini, operations analyst, and Luis A. Andrés, program leader for the Infrastructure sector.

Orangutans instinctively use hammers to strike and sharp stones to cut, study finds (Science Daily)

Untrained, captive orangutans complete major steps in making and using stone tools

Date: February 16, 2022

Source: PLOS

Summary: Untrained, captive orangutans can complete two major steps in the sequence of stone tool use: striking rocks together and cutting using a sharp stone, according to a new study.


Untrained, captive orangutans can complete two major steps in the sequence of stone tool use: striking rocks together and cutting using a sharp stone, according to a study by Alba Motes-Rodrigo at the University of Tübingen in Germany and colleagues, publishing February 16 in the open-access journal PLOS ONE.

The researchers tested tool making and use in two captive male orangutans (Pongo pygmaeus) at Kristiansand Zoo in Norway. Neither had previously been trained or exposed to demonstrations of the target behaviors. Each orangutan was provided with a concrete hammer, a prepared stone core, and two baited puzzle boxes requiring them to cut through a rope or a silicon skin in order to access a food reward. Both orangutans spontaneously hit the hammer against the walls and floor of their enclosure, but neither directed strikes towards the stone core. In a second experiment, the orangutans were also given a human-made sharp flint flake, which one orangutan used to cut the silicon skin, solving the puzzle. This is the first demonstration of cutting behavior in untrained, unenculturated orangutans.

To then investigate whether apes could learn the remaining steps from observing others, the researchers demonstrated how to strike the core to create a flint flake to three female orangutans at Twycross Zoo in the UK. After these demonstrations, one female went on to use the hammer to hit the core, directing the blows towards the edge as demonstrated.

This study is the first to report spontaneous stone tool use without close direction in orangutans that have not been enculturated by humans. The authors say their observations suggest that two major prerequisites for the emergence of stone tool use — striking with stone hammers and recognizing sharp stones as cutting tools — may have existed in our last common ancestor with orangutans, 13 million years ago.

The authors add: “Our study is the first to report that untrained orangutans can spontaneously use sharp stones as cutting tools. We also found that they readily engage in lithic percussion and that this activity occasionally leads to the detachment of sharp stone pieces.”



Journal Reference:

  1. Alba Motes-Rodrigo, Shannon P. McPherron, Will Archer, R. Adriana Hernandez-Aguilar, Claudio Tennie. Experimental investigation of orangutans’ lithic percussive and sharp stone tool behaviours. PLOS ONE, 2022; 17 (2): e0263343 DOI: 10.1371/journal.pone.0263343

Flies possess more sophisticated cognitive abilities than previously known (Science Daily)

Immersive virtual reality and real-time brain activity imaging showcase Drosophila’s capabilities of attention, working memory and awareness

Date: February 17, 2022

Source: University of California – San Diego

Summary: Common flies feature more advanced cognitive abilities than previously believed. Using a custom-built immersive virtual reality arena, neurogenetics and real-time brain activity imaging, researchers found attention, working memory and conscious awareness-like capabilities in fruit flies.


Fruit fly (stock image). Credit: © Arif_Vector / stock.adobe.com

As they annoyingly buzz around a batch of bananas in our kitchens, fruit flies appear to have little in common with mammals. But as researchers study them as a model species for science, they are discovering increasing similarities between us and the minuscule fruit-loving insects.

In a new study, researchers at the University of California San Diego’s Kavli Institute for Brain and Mind (KIBM) have found that fruit flies (Drosophila melanogaster) have more advanced cognitive abilities than previously believed. Using a custom-built immersive virtual reality environment, neurogenetic manipulations and in vivo real-time brain-activity imaging, the scientists present new evidence Feb. 16 in the journal Nature of the remarkable links between the cognitive abilities of flies and mammals.

The multi-tiered approach of their investigations found attention, working memory and conscious awareness-like capabilities in fruit flies, cognitive abilities typically only tested in mammals. The researchers were able to watch the formation, distractibility and eventual fading of a memory trace in their tiny brains.

“Despite a lack of obvious anatomical similarity, this research speaks to our everyday cognitive functioning — what we pay attention to and how we do it,” said study senior author Ralph Greenspan, a professor in the UC San Diego Division of Biological Sciences and associate director of KIBM. “Since all brains evolved from a common ancestor, we can draw correspondences between fly and mammalian brain regions based on molecular characteristics and how we store our memories.”

To arrive at the heart of their new findings, the researchers created an immersive virtual reality environment to test the flies’ behavior via visual stimulation, coupling the displayed imagery with an infrared laser that served as an aversive heat stimulus. The near 360-degree panoramic arena allowed Drosophila to flap their wings freely while remaining tethered, and because the virtual reality constantly updated based on their wing movements (analyzed in real time using high-speed machine-vision cameras), the flies had the illusion of flying freely through the world. This gave researchers the ability to train and test flies on conditioning tasks by allowing the insect to orient away from an image associated with the negative heat stimulus and towards a second image not associated with heat.
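For readers who want a concrete picture of that closed loop, here is a minimal sketch — not the authors’ code — of the basic idea: the displayed panorama rotates in proportion to the difference between left and right wingbeat amplitudes, so the tethered fly’s steering attempts change what it sees. The gain, time step and linear coupling below are assumptions made purely for illustration.

```python
# Schematic of the closed-loop idea only: the real rig, its units and its
# wing-analysis pipeline are not described in the article. The heading is
# updated each frame in proportion to wingbeat asymmetry (assumed linear coupling).
def update_panorama(heading_deg, left_wing_amp, right_wing_amp, gain=50.0, dt=0.01):
    """Read turn intention from wingbeat asymmetry and update the displayed heading."""
    turn_rate_deg_per_s = gain * (left_wing_amp - right_wing_amp)
    return (heading_deg + turn_rate_deg_per_s * dt) % 360.0

heading = 0.0
for frame in range(3):
    # A slightly stronger left wingbeat steers the fly (and the panorama) rightward.
    heading = update_panorama(heading, left_wing_amp=1.2, right_wing_amp=1.0)
    print(f"frame {frame}: heading = {heading:.2f} degrees")
```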

They tested two variants of conditioning. In delay conditioning, the visual stimulus overlapped in time with the heat and the two ended together. In trace conditioning, the heat was delivered 5 to 20 seconds after the visual stimulus had been shown and removed. The intervening time is considered the “trace” interval, during which the fly retains a “trace” of the visual stimulus in its brain, a feature indicative of attention, working memory and conscious awareness in mammals.
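The timing difference between the two protocols is easy to miss in prose. The sketch below is illustrative rather than the study’s actual code; all durations other than the 5-to-20-second trace interval mentioned above are invented.

```python
# Illustrative trial timelines. Each event is (label, start_seconds, end_seconds).
# Only the 5-20 s trace gap comes from the article; the other durations are assumed.
def delay_trial(visual_s=10.0, heat_overlap_s=5.0):
    """Delay conditioning: heat overlaps the visual stimulus and both end together."""
    return [("visual", 0.0, visual_s),
            ("heat", visual_s - heat_overlap_s, visual_s)]

def trace_trial(visual_s=10.0, trace_gap_s=10.0, heat_s=5.0):
    """Trace conditioning: heat arrives only after a stimulus-free 'trace' interval."""
    heat_onset = visual_s + trace_gap_s
    return [("visual", 0.0, visual_s),
            ("heat", heat_onset, heat_onset + heat_s)]

for name, trial in [("delay", delay_trial()), ("trace", trace_trial())]:
    print(name)
    for label, start, end in trial:
        print(f"  {label:6s} {start:5.1f}s -> {end:5.1f}s")
```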

The researchers also imaged the brain to track calcium activity in real time, using a fluorescent molecule they genetically engineered into the flies’ brain cells. This allowed them to record the formation and duration of the fly’s living memory, since they could see the trace blinking on and off while it was held in the fly’s short-term (working) memory. They also found that a distraction introduced during training — a gentle puff of air — made the visual memory fade more quickly, marking the first time researchers have been able to demonstrate such distractedness in flies and implicating an attentional requirement in memory formation in Drosophila.

“This work demonstrates not only that flies are capable of this higher form of trace conditioning, and that the learning is distractible just like in mammals and humans, but the neural activity underlying these attentional and working memory processes in the fly show remarkable similarity to those in mammals,” said Dhruv Grover, a UC San Diego KIBM research faculty member and lead author of the new study. “This work demonstrates that fruit flies could serve as a powerful model for the study of higher cognitive functions. Simply put, the fly continues to amaze in how smart it really is.”

The scientists also identified the area of the fly’s brain where the memory formed and faded — an area known as the ellipsoid body of the fly’s central complex, a location that corresponds to the cerebral cortex in the human brain.

Further, the research team discovered that the neurochemical dopamine is required for such learning and higher cognitive functions. The data revealed that dopamine reactions increasingly occurred earlier in the learning process, eventually anticipating the coming heat stimulus.

The researchers are now investigating the details of how attention is physiologically encoded in the brain. Grover believes the lessons learned from this model system are likely not only to directly inform our understanding of human cognition strategies and the neural disorders that disrupt them, but also to contribute to new engineering approaches that lead to performance breakthroughs in artificial intelligence designs.

The coauthors of the study include Dhruv Grover, Jen-Yung Chen, Jiayun Xie, Jinfang Li, Jean-Pierre Changeux and Ralph Greenspan (all affiliated with the UC San Diego Kavli Institute for Brain and Mind, and J.-P. Changeux also a member of the Collège de France).



Journal Reference:

  1. Dhruv Grover, Jen-Yung Chen, Jiayun Xie, Jinfang Li, Jean-Pierre Changeux, Ralph J. Greenspan. Differential mechanisms underlie trace and delay conditioning in Drosophila. Nature, 2022; DOI: 10.1038/s41586-022-04433-6

Becoming a centaur (Aeon)

Rounding up wild horses on the edge of the Gobi desert in Mongolia, 1964. Photo by Philip Jones Griffiths/Magnum
The horse is a prey animal, the human a predator. Our shared trust and athleticism is a neurobiological miracle

Janet Jones – 14 January 2022

Horse-and-human teams perform complex manoeuvres in competitions of all sorts. Together, we can gallop up to obstacles standing 8 feet (2.4 metres) high, leave the ground, and fly blind – neither party able to see over the top until after the leap has been initiated. Adopting a flatter trajectory with greater speed, horse and human sail over broad jumps up to 27 feet (more than 8 metres) long. We run as one at speeds of 44 miles per hour (nearly 70 km/h), the fastest velocity any land mammal carrying a rider can achieve. In freestyle dressage events, we dance in place to the rhythm of music, trot sideways across the centre of an arena with huge leg-crossing steps, and canter in pirouettes with the horse’s front feet circling her hindquarters. Galloping again, the best horse-and-human teams can slide 65 feet (nearly 20 metres) to a halt while resting all their combined weight on the horse’s hind legs. Endurance races over extremely rugged terrain test horses and riders in journeys that traverse up to 500 miles (805 km) of high-risk adventure.

Charlotte Dujardin on Valegro, a world-record dressage freestyle at London Olympia, 2014: an example of high-precision brain-to-brain communication between horse and rider. Every step the horse takes is determined in conjunction with many invisible cues from his human rider, using a feedback loop between predator brain and prey brain. Note the horse’s beautiful physical condition and complete willingness to perform these extremely difficult manoeuvres.

No one disputes the athleticism fuelling these triumphs, but few people comprehend the mutual cross-species interaction that is required to accomplish them. The average horse weighs 1,200 pounds (more than 540 kg), makes instantaneous movements, and can become hysterical in a heartbeat. Even the strongest human is unable to force a horse to do anything she doesn’t want to do. Nor do good riders allow the use of force in training our magnificent animals. Instead, we hold ourselves to the higher standard of motivating horses to cooperate freely with us in achieving the goals of elite sports as well as mundane chores. Under these conditions, the horse trained with kindness, expertise and encouragement is a willing, equal participant in the action.

That action is rooted in embodied perception and the brain. In mounted teams, horses, with prey brains, and humans, with predator brains, share largely invisible signals via mutual body language. These signals are received and transmitted through peripheral nerves leading to each party’s spinal cord. Upon arrival in each brain, they are interpreted, and a learned response is generated. It, too, is transmitted through the spinal cord and nerves. This collaborative neural action forms a feedback loop, allowing communication from brain to brain in real time. Such conversations allow horse and human to achieve their immediate goals in athletic performance and everyday life. In a very real sense, each species’ mind is extended beyond its own skin into the mind of another, with physical interaction becoming a kind of neural dance.

Horses in nature display certain behaviours that tempt observers to wonder whether competitive manoeuvres truly require mutual communication with human riders. For example, the feral horse occasionally hops over a stream to reach good food or scrambles up a slope of granite to escape predators. These manoeuvres might be thought the precursors to jumping or rugged trail riding. If so, we might imagine that the performance horse’s extreme athletic feats are innate, with the rider merely a passenger steering from above. If that were the case, little requirement would exist for real-time communication between horse and human brains.

In fact, though, the feral hop is nothing like the trained leap over a competition jump, usually commenced from short distances at high speed. Today’s Grand Prix jump course comprises about 15 obstacles set at sharp angles to each other, each more than 5 feet high and more than 6 feet wide (1.5 x 1.8 metres). The horse-and-human team must complete this course in 80 or 90 seconds, a time allowance that makes for acute turns, diagonal flight paths and high-speed exits. Comparing the wilderness hop with the show jump is like associating a flintstone with a nuclear bomb. Horses and riders undergo many years of daily training to achieve this level of performance, and their brains share neural impulses throughout each experience.

These examples originate in elite levels of horse sport, but the same sort of interaction occurs in pastures, arenas and on simple trails all over the world. Any horse-and-human team can develop deep bonds of mutual trust, and learn to communicate using body language, knowledge and empathy.

Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend

The critical component of the horse in nature, and her ability to learn how to interact so precisely with a human rider, is not her physical athleticism but her brain. The first precise magnetic resonance image of a horse’s brain appeared only in 2019, allowing veterinary neurologists far greater insight into the anatomy underlying equine mental function. As this new information is disseminated to horse trainers and riders for practical application, we see the beginnings of a revolution in brain-based horsemanship. Not only will this revolution drive competition to higher summits of success, and animal welfare to more humane levels of understanding, it will also motivate scientists to research the unique compatibility between prey and predator brains. Nowhere else in nature do we see such intense and intimate collaboration between two such disparate minds.

Three natural features of the equine brain are especially important when it comes to mind-melding with humans. First, the horse’s brain provides astounding touch detection. Receptor cells in the horse’s skin and muscles transduce – or convert – external pressure, temperature and body position to neural impulses that the horse’s brain can understand. They accomplish this with exquisite sensitivity: the average horse can detect less pressure against her skin than even a human fingertip can.

Second, horses in nature use body language as a primary medium of daily communication with each other. An alpha mare has only to flick an ear toward a subordinate to get him to move away from her food. A younger subordinate, untutored in the ear flick, receives stronger body language – two flattened ears and a bite that draws blood. The notion of animals in nature as kind, gentle creatures who never hurt each other is a myth.

Third, by nature, the equine brain is a learning machine. Untrammelled by the social and cognitive baggage that human brains carry, horses learn in a rapid, pure form that allows them to be taught the meanings of various human cues that shape equine behaviour in the moment. Taken together, the horse’s exceptional touch sensitivity, natural reliance on body language, and purity of learning form the tripod of support for brain-to-brain communication that is so critical in extreme performance.

One of the reasons for budding scientific fascination with neural horse-and-human communication is the horse’s status as a prey animal. Their brains and bodies evolved to survive completely different pressures than our human physiologies. For example, horse eyes are set on either side of their head for a panoramic view of the world, and their horizontal pupils allow clear sight along the horizon but fuzzy vision above and below. Their eyes rotate to maintain clarity along the horizon when their heads lie sideways to reach grass in odd locations. Equine brains are also hardwired to stream commands directly from the perception of environmental danger to the motor cortex where instant evasion is carried out. All of these features evolved to allow the horse to survive predators.

Conversely, human brains evolved in part for the purpose of predation – hunting, chasing, planning… yes, even killing – with front-facing eyes, superb depth perception, and a prefrontal cortex for strategy and reason. Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend.

The fact that horses and humans can communicate neurally without the external mediation of language or equipment is critical to our ability to initiate the cellular dance between brains. Saddles and bridles are used for comfort and safety, but bareback and bridleless competitions prove they aren’t necessary for highly trained brain-to-brain communication. Scientific efforts to communicate with predators such as dogs and apes have often been hobbled by the use of artificial media including human speech, sign language or symbolic lexigram. By contrast, horses allow us to apply a medium of communication that is completely natural to their lives in the wild and in captivity.

The horse’s prey brain is designed to notice and evade predators. How ironic, and how riveting, then, that this prey brain is the only one today that shares neural communication with a predator brain. It offers humanity a rare view into a prey animal’s world, almost as if we were wolves riding elk or coyotes mind-melding with cottontail bunnies.

Highly trained horses and riders send and receive neural signals using subtle body language. For example, a rider can apply invisible pressure with her left inner calf muscle to move the horse laterally to the right. That pressure is felt on the horse’s side, in his skin and muscle, via proprioceptive receptor cells that detect body position and movement. Then the signal is transduced from mechanical pressure to electrochemical impulse, and conducted up peripheral nerves to the horse’s spinal cord. Finally, it reaches the somatosensory cortex, the region of the brain responsible for interpreting sensory information.

Riders can sometimes guess that an invisible object exists by detecting subtle equine reactions

This interpretation is dependent on the horse’s knowledge that a particular body signal – for example, inward pressure from a rider’s left calf – is associated with a specific equine behaviour. Horse trainers spend years teaching their mounts these associations. In the present example, the horse has learned that this particular amount of pressure, at this speed and location, under these circumstances, means ‘move sideways to the right’. If the horse is properly trained, his motor cortex causes exactly that movement to occur.

By means of our human motion and position sensors, the rider’s brain now senses that the horse has changed his path rightward. Depending on the manoeuvre our rider plans to complete, she will then execute invisible cues to extend or collect the horse’s stride as he approaches a jump that is now centred in his vision, plant his right hind leg and spin in a tight fast circle, push hard off his hindquarters to chase a cow, or any number of other movements. These cues are combined to form that mutual neural dance, occurring in real time, and dependent on natural body language alone.

The example of a horse moving a few steps rightward off the rider’s left leg is extremely simplistic. When you imagine a horse and rider clearing a puissance wall of 7.5 feet (2.3 metres), think of the countless receptor cells transmitting bodily cues between both brains during approach, flight and exit. That is mutual brain-to-brain communication. Horse and human converse via body language to such an extreme degree that they are able to accomplish amazing acts of understanding and athleticism. Each of their minds has extended into the other’s, sending and receiving signals as if one united brain were controlling both bodies.

Franke Sloothaak on Optiebeurs Golo, a world-record puissance jump at Chaudfontaine in Belgium, 1991. This horse-and-human team displays the gentle encouragement that brain-to-brain communication requires. The horse is in perfect condition and health. The rider offers soft, light hands, and rides in perfect balance with the horse. He carries no whip, never uses his spurs, and employs the gentlest type of bit – whose full acceptance is evidenced by the horse’s foamy mouth and flexible neck. The horse is calm but attentive before and after the leap, showing complete willingness to approach the wall without a whiff of coercion. The first thing the rider does upon landing is pat his equine teammate. He strokes or pats the horse another eight times in the next 30 seconds, a splendid example of true horsemanship.

Analysis of brain-to-brain communication between horses and humans elicits several new ideas worthy of scientific notice. Because our minds interact so well using neural networks, horses and humans might learn to borrow neural signals from the party whose brain offers the highest function. For example, horses have a 340-degree range of view when holding their heads still, compared with a paltry 90-degree range in humans. Therefore, horses can see many objects that are invisible to their riders. Yet riders can sometimes guess that an invisible object exists by detecting subtle equine reactions.

Specifically, neural signals from the horse’s eyes carry the shape of an object to his brain. Those signals are transferred to the rider’s brain by a well-established route: equine receptor cells in the retina lead to equine detector cells in the visual cortex, which elicits an equine motor reaction that is then sensed by the rider’s human body. From there, the horse’s neural signals are transmitted up the rider’s spinal cord to the rider’s brain, and a perceptual communication loop is born. The rider’s brain can now respond neurally to something it is incapable of seeing, by borrowing the horse’s superior range of vision.

These brain-to-brain transfers are mutual, so the learning equine brain should also be able to borrow the rider’s vision, with its superior depth perception and focal acuity. This kind of neural interaction results in a horse-and-human team that can sense far more together than either party can detect alone. In effect, they share effort by assigning labour to the party whose skills are superior at a given task.

There is another type of skillset that requires a particularly nuanced cellular dance: sharing attention and focus. Equine vigilance allowed horses to survive 56 million years of evolution – they had to notice slight movements in tall grasses or risk becoming some predator’s dinner. Consequently, today it’s difficult to slip even a tiny change past a horse, especially a young or inexperienced animal who has not yet been taught to ignore certain sights, sounds and smells.

By contrast, humans are much better at concentration than vigilance. The predator brain does not need to notice and react instantly to every stimulus in the environment. In fact, it would be hampered by prey vigilance. While reading this essay, your brain sorts away the sound of traffic past your window, the touch of clothing against your skin, the sight of the masthead that says ‘Aeon’ at the top of this page. Ignoring these distractions allows you to focus on the content of this essay.

Horses and humans frequently share their respective attentional capacities during a performance. A puissance horse galloping toward an enormous wall cannot waste vigilance by noticing the faces of each person in the audience. Likewise, the rider cannot afford to miss a loose dog that runs into the arena outside her narrow range of vision and focus. Each party helps the other through their primary strengths.

Such sharing becomes automatic with practice. With innumerable neural contacts over time, the human brain learns to heed signals sent by the equine brain that say, in effect: ‘Hey, what’s that over there?’ Likewise, the equine brain learns to sense human neural signals that counter: ‘Let’s focus on this gigantic wall right here.’ Each party sends these messages by body language and receives them by body awareness through two spinal cords, then interprets them inside two brains, millisecond by millisecond.

The rider’s physical cues are transmitted by neural activation from the horse’s surface receptors to the horse’s brain

Finally, it is conceivable that horse and rider can learn to share features of executive function – the human brain’s ability to set goals, plan steps to achieve them, assess alternatives, make decisions and evaluate outcomes. Executive function occurs in the prefrontal cortex, an area that does not exist in the equine brain. Horses are excellent at learning, remembering and communicating – but they do not assess, decide, evaluate or judge as humans do.

Shying is a prominent equine behaviour that might be mediated by human executive function in well-trained mounts. When a horse of average size shies away from an unexpected stimulus, riders are sitting on top of 1,200 pounds of muscle that suddenly leaps sideways off all four feet and lands five yards away. It’s a frightening experience, and often results in falls that lead to injury or even death. The horse’s brain causes this reaction automatically by direct connection between his sensory and motor cortices.

Though this possibility must still be studied by rigorous science, brain-to-brain communication suggests that horses might learn to borrow small glimmers of executive function through neural interaction with the human’s prefrontal cortex. Suppose that a horse shies from an umbrella that suddenly opens. By breathing steadily, relaxing her muscles, and flexing her body in rhythm with the horse’s gait, the rider calms the animal using body language. Her physical cues are transmitted by neural activation from his surface receptors to his brain. He responds with body language in which his muscles relax, his head lowers, and his frightened eyes return to their normal size. The rider feels these changes with her body, which transmits the horse’s neural signals to the rider’s brain.

From this point, it’s only a very short step – but an important one – to the transmission and reception of neural signals between the rider’s prefrontal cortex (which evaluates the unexpected umbrella) and the horse’s brain (which instigates the leap away from that umbrella). In practice, to reduce shying, horse trainers teach their young charges to slow their reactions and seek human guidance.

Brain-to-brain communication between horses and riders is an intricate neural dance. These two species, one prey and one predator, are living temporarily in each other’s brains, sharing neural information back and forth in real time without linguistic or mechanical mediation. It is a partnership like no other. Together, a horse-and-human team experiences a richer perceptual and attentional understanding of the world than either member can achieve alone. And, ironically, this extended interspecies mind operates well not because the two brains are similar to each other, but because they are so different.

Janet Jones applies brain research to training horses and riders. She has a PhD from the University of California, Los Angeles, and for 23 years taught the neuroscience of perception, language, memory, and thought. She trained horses at a large stable early in her career, and later ran a successful horse-training business of her own. Her most recent book, Horse Brain, Human Brain (2020), is currently being translated into seven languages.

Edited by Pam Weintraub

What spurs people to save the planet? Stories or facts? (Science Daily)

It depends on whether you’re Republican or Democrat

Date: April 26, 2021

Source: Johns Hopkins University

Summary: With climate change looming, what must people hear to convince them to change their ways to stop harming the environment? A new study finds stories to be significantly more motivating than scientific facts — at least for some people.


With climate change looming, what must people hear to convince them to change their ways to stop harming the environment? A new Johns Hopkins University study finds stories to be significantly more motivating than scientific facts — at least for some people.

After hearing a compelling pollution-related story in which a man died, the average person paid more for green products than after hearing scientific facts about water pollution. But the average person in the study was a Democrat. Republicans paid less after hearing the story than after hearing the simple facts.

The findings, published this week in the journal One Earth, suggest message framing makes a real difference in people’s actions toward the environment. It also suggests there is no monolithic best way to motivate people and policymakers must work harder to tailor messages for specific audiences.

“Our findings suggest the power of storytelling may be more like preaching to the choir,” said co-author Paul J. Ferraro, an evidence-based environmental policy expert and the Bloomberg Distinguished Professor of Human Behavior and Public Policy at Johns Hopkins.

“For those who are not already leaning toward environmental action, stories might actually make things worse.”

Scientists have little evidence to guide them on how best to communicate with the public about environmental threats. Increasingly, they have been encouraged to leave their factual comfort zones and tell more stories that connect with people personally and emotionally. But many scientists are reluctant to tell such stories because, for example, no one can point to a deadly flood or a forest fire and conclusively say that the deaths were caused by climate change.

The question researchers hoped to answer with this study: Does storytelling really work to change people’s behavior? And if so, for whom does it work best?

“We said let’s do a horserace between a story and a more typical science-based message and see what actually matters for purchasing behavior,” Ferraro said.

Researchers conducted a field experiment involving just over 1,200 people at an agricultural event in Delaware. Everyone surveyed had a lawn or garden and lived in a watershed known to be polluted.

Through a random-price auction, researchers attempted to measure how much participants were willing to pay for products that reduce nutrient pollution. Before people could buy the products, they watched a video presenting either scientific facts or a story about nutrient pollution.

In the story group, participants viewed a true story about a local man’s death that had plausible but tenuous connections to nutrient pollution: he died after eating contaminated shellfish. In the scientific facts group, participants viewed an evidence-based description of the impacts of nutrient pollution on ecosystems and surrounding communities.

After watching the videos, all participants had a chance to purchase products costing less than $10 that could reduce storm water runoff: fertilizer, soil test kits, biochar and soaker hoses.
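The article does not spell out the rules of the random-price auction mentioned above. One common design of this kind is the Becker-DeGroot-Marschak (BDM) mechanism, sketched below under that assumption; the study may well have used a different variant, and the price range here is simply matched to the under-$10 products.

```python
# A minimal sketch of one common "random-price" design, the
# Becker-DeGroot-Marschak (BDM) mechanism. Assumed for illustration only.
import random

def bdm_round(stated_wtp, price_cap=10.0, rng=random.Random(0)):
    """The buyer states a maximum willingness to pay; a sale price is drawn at
    random. If the stated WTP meets or beats the drawn price, the buyer pays
    the drawn price (not the bid), which makes truthful bidding the best strategy."""
    price = round(rng.uniform(0, price_cap), 2)
    bought = stated_wtp >= price
    return {"drawn_price": price, "bought": bought, "paid": price if bought else 0.0}

print(bdm_round(stated_wtp=6.50))  # e.g. a soaker hose the participant values at $6.50
```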

People who heard the story were on average willing to pay more than those who heard the straight science. But the results skewed greatly when broken down by political party. The story made liberals 17 percent more willing to buy the products, while making conservatives want to spend 14 percent less.

The deep behavioral divide along party lines surprised Ferraro, who typically sees little difference in behavior between Democrats and Republicans when it comes to matters such as energy conservation.

“We hope this study stimulates more work about how to communicate the urgency of climate change and other global environmental challenges,” said lead author Hilary Byerly, a postdoctoral associate at the University of Colorado. “Should the messages come from scientists? And what is it about this type of story that provokes environmental action from Democrats but turns off Republicans?”

This research was supported by contributions from the Penn Foundation, the US Department of Agriculture, The Nature Conservancy, and the National Science Foundation.



Journal Reference:

  1. Hilary Byerly, Paul J. Ferraro, Tongzhe Li, Kent D. Messer, Collin Weigel. A story induces greater environmental contributions than scientific information among liberals but not conservatives. One Earth, 2021; 4 (4): 545 DOI: 10.1016/j.oneear.2021.03.004

How to think about weird things (AEON)

From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer

by Stephen Law

Stephen Law is a philosopher and author. He is director of philosophy at the Department of Continuing Education at the University of Oxford, and editor of Think, the Royal Institute of Philosophy journal. He researches primarily in the fields of philosophy of religion, philosophy of mind, Ludwig Wittgenstein, and essentialism. His books for a popular audience include The Philosophy Gym (2003), The Complete Philosophy Files (2000) and Believing Bullshit (2011). He lives in Oxford.

Edited by Nigel Warburton

10 NOVEMBER 2021

Many people believe in extraordinary hidden beings, including demons, angels, spirits and gods. Plenty also believe in supernatural powers, including psychic abilities, faith healing and communication with the dead. Conspiracy theories are also popular, including that the Holocaust never happened and that the terrorist attacks on the United States of 11 September 2001 were an inside job. And, of course, many trust in alternative medicines such as homeopathy, the effectiveness of which seems to run contrary to our scientific understanding of how the world actually works.

Such beliefs are widely considered to be at the ‘weird’ end of the spectrum. But, of course, just because a belief involves something weird doesn’t mean it’s not true. As science keeps reminding us, reality often is weird. Quantum mechanics and black holes are very weird indeed. So, while ghosts might be weird, that’s no reason to dismiss belief in them out of hand.

I focus here on a particular kind of ‘weird’ belief: not only are these beliefs that concern the enticingly odd, they’re also beliefs that the general public finds particularly difficult to assess.

Almost everyone agrees that, when it comes to black holes, scientists are the relevant experts, and scientific investigation is the right way to go about establishing whether or not they exist. However, when it comes to ghosts, psychic powers or conspiracy theories, we often hold wildly divergent views not only about how reasonable such beliefs are, but also about what might count as strong evidence for or against them, and who the relevant authorities are.

Take homeopathy, for example. Is it reasonable to focus only on what scientists have to say? Shouldn’t we give at least as much weight to the testimony of the many people who claim to have benefitted from homeopathic treatment? While most scientists are sceptical about psychic abilities, what of the thousands of reports from people who claim to have received insights from psychics who could only have known what they did if they really do have some sort of psychic gift? To what extent can we even trust the supposed scientific ‘experts’? Might not the scientific community itself be part of a conspiracy to hide the truth about Area 51 in Nevada, Earth’s flatness or the 9/11 terrorist attacks being an inside job?

Most of us really struggle when it comes to assessing such ‘weird’ beliefs – myself included. Of course, we have our hunches about what’s most likely to be true. But when it comes to pinning down precisely why such beliefs are or aren’t reasonable, even the most intelligent and well educated of us can quickly find ourselves out of our depth. For example, while most would pooh-pooh belief in fairies, Arthur Conan Doyle, the creator of the quintessentially rational detective Sherlock Holmes, actually believed in them and wrote a book presenting what he thought was compelling evidence for their existence.

When it comes to weird beliefs, it’s important we avoid being closed-minded naysayers with our fingers in our ears, but it’s also crucial that we avoid being credulous fools. We want, as far as possible, to be reasonable.

I’m a philosopher who has spent a great deal of time thinking about the reasonableness of such ‘weird’ beliefs. Here I present five key pieces of advice that I hope will help you figure out for yourself what is and isn’t reasonable.

Let’s begin with an illustration of the kind of case that can so spectacularly divide opinion. In 1976, six workers reported a UFO over the site of a nuclear plant being constructed near the town of Apex, North Carolina. A security guard then reported a ‘strange object’. The police officer Ross Denson drove over to investigate and saw what he described as something ‘half the size of the Moon’ hanging over the plant. The police also took a call from local air traffic control about an unidentified blip on their radar.

The next night, the UFO appeared again. The deputy sheriff described ‘a large lighted object’. An auxiliary officer reported five lighted objects that appeared to be burning and about 20 times the size of a passing plane. The county magistrate described a rectangular football-field-sized object that looked like it was on fire.

Finally, the press got interested. Reporters from the Star newspaper drove over to investigate. They too saw the UFO. But when they tried to drive nearer, they discovered that, weirdly, no matter how fast they drove, they couldn’t get any closer.

This report, drawn from Philip J Klass’s book UFOs: The Public Deceived (1983), is impressive: it involves multiple eyewitnesses, including police officers, journalists and even a magistrate. Their testimony is even backed up by hard evidence – that radar blip.

Surely, many would say, given all this evidence, it’s reasonable to believe there was at least something extraordinary floating over the site. Anyone who failed to believe at least that much would be excessively sceptical – one of those perpetual naysayers whose kneejerk reaction, no matter how strong the evidence, is always to pooh-pooh.

What’s most likely to be true: that there really was something extraordinary hanging over the power plant, or that the various eyewitnesses had somehow been deceived? Before we answer, here’s my first piece of advice.

Think it through

1. Expect unexplained false sightings and huge coincidences

Our UFO story isn’t over yet. When the Star’s two-man investigative team couldn’t get any closer to the mysterious object, they eventually pulled over. The photographer took out his long lens to take a look: ‘Yep … that’s the planet Venus all right.’ It was later confirmed beyond any reasonable doubt that what all the witnesses had seen was just a planet. But what about that radar blip? It was a coincidence, perhaps caused by a flock of birds or unusual weather.

What moral should we draw from this case? Not, of course, that because this UFO report turned out to have a mundane explanation, all such reports can be similarly dismissed. But notice that, had the reporters not discovered the truth, this story would likely have gone down in the annals of ufology as one of the great unexplained cases. The moral I draw is that UFO cases that have multiple eyewitnesses and even independent hard evidence (the radar blip) may well crop up occasionally anyway, even if there are no alien craft in our skies.

We tend significantly to underestimate how prone to illusion and deception we are when it comes to the wacky and weird. In particular, we have a strong tendency to overdetect agency – to think we are witnessing a person, an alien or some other sort of creature or being – where in truth there’s none.

Psychologists have developed theories to account for this tendency to overdetect agency, including that we have evolved what’s called a hyperactive agency detecting device. Had our ancestors missed an agent – a sabre-toothed tiger or a rival, say – that might well have reduced their chances of surviving and reproducing. Believing an agent is present when it’s not, on the other hand, is likely to be far less costly. Consequently, we’ve evolved to err on the side of overdetection – often seeing agency where there is none. For example, when we observe a movement or pattern we can’t understand, such as the retrograde motion of a planet in the night sky, we’re likely to think the movement is explained by some hidden agent working behind the scenes (that Mars is actually a god, say).
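To make that cost asymmetry concrete, here is a toy sketch in Python. The costs and the probability are invented purely for illustration; they are not figures from the psychological literature described here.

```python
# Toy illustration (invented numbers): why over-detecting agents can be the
# better policy when missing a real agent is far costlier than a false alarm.

def expected_cost(act_as_if_agent: bool, p_agent: float,
                  cost_miss: float = 1000.0, cost_false_alarm: float = 1.0) -> float:
    """Expected cost of a policy, given the probability an agent is really there."""
    if act_as_if_agent:
        # We pay the small false-alarm cost whenever no agent is actually present.
        return (1 - p_agent) * cost_false_alarm
    # We pay the large miss cost whenever an agent really is present.
    return p_agent * cost_miss

p = 0.01  # even a 1 per cent chance of a predator behind the rustling grass...
print(expected_cost(True, p))   # ~0.99  -> treating it as an agent is cheap on average
print(expected_cost(False, p))  # 10.0   -> ignoring it costs far more on average
```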

One example of our tendency to overdetect agency is pareidolia: our tendency to find patterns – and, in particular, faces – in random noise. Stare at passing clouds or into the embers of a fire, and it’s easy to interpret the randomly generated shapes we see as faces, often spooky ones, staring back.

And, of course, nature is occasionally going to throw up the face-like patterns just by chance. One famous illustration was produced in 1976 by the Mars probe Viking Orbiter 1. As the probe passed over the Cydonia region, it photographed what appeared to be an enormous, reptilian-looking face 800 feet high and nearly 2 miles long. Some believe this ‘face on Mars’ was a relic of an ancient Martian civilisation, a bit like the Great Sphinx of Giza in Egypt. A book called The Monuments of Mars: A City on the Edge of Forever (1987) even speculated about this lost civilisation. However, later photos revealed the ‘face’ to be just a hill that looks face-like when lit a certain way. Take enough photos of Mars, and some will reveal face-like features just by chance.

The fact is, we should expect huge coincidences. Millions of pieces of bread are toasted each morning. One or two will exhibit face-like patterns just by chance, even without divine intervention. One such piece of toast that was said to show the face of the Virgin Mary (how do we know what she looked like?) was sold for $28,000. We think about so many people each day that eventually we’ll think about someone, the phone will ring, and it will be them. That’s to be expected, even if we’re not psychic. Yet many put down such coincidences to supernatural powers.
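The arithmetic behind expecting such coincidences is easy to sketch. The figures below are assumptions chosen only for illustration, not data from the essay:

```python
# Toy arithmetic (assumed figures): even a one-in-a-million face-like pattern
# becomes near-certain once millions of slices are toasted.

p_face = 1e-6          # assumed chance that any single slice looks face-like
n_slices = 5_000_000   # assumed number of slices toasted in one morning

expected_faces = n_slices * p_face
p_at_least_one = 1 - (1 - p_face) ** n_slices

print(expected_faces)    # 5.0 expected "miraculous" slices
print(p_at_least_one)    # ~0.993: a face-like slice somewhere is almost guaranteed
```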

2. Understand what strong evidence actually is

When is a claim strongly confirmed by a piece of evidence? The following principle appears correct (it captures part of what confirmation theorists call the Bayes factor; for more on Bayesian approaches to assessing evidence, see the link at the end):

Evidence confirms a claim to the extent that the evidence is more likely if the claim is true than if it’s false.
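In symbols, and in terms of the Bayes factor just mentioned, the principle is often written in odds form. The rendering below is my gloss of that standard statement, not something spelled out in the text:

```latex
% Posterior odds = Bayes factor x prior odds (a standard statement; my gloss).
\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
=
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{Bayes factor}}
\times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\]
% When E is nearly as likely whether H is true or false, the Bayes factor is
% close to 1 and the evidence barely shifts the odds either way.
```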

Here’s a simple illustration. Suppose I’m in the basement and can’t see outside. Jane walks in with a wet coat and umbrella and tells me it’s raining. That’s pretty strong evidence it’s raining. Why? Well, it is of course possible that Jane is playing a prank on me with her wet coat and brolly. But it’s far more likely she would appear with a wet coat and umbrella and tell me it’s raining if that’s true than if it’s false. In fact, given just this new evidence, it may well be reasonable for me to believe it’s raining.

Here’s another example. Sometimes whales and dolphins are found with atavistic limbs – leg-like structures – where legs would be found on land mammals. These discoveries strongly confirm the theory that whales and dolphins evolved from earlier limbed, land-dwelling species. Why? Because, while atavistic limbs aren’t probable given the truth of that theory, they’re still far more probable than they would be if whales and dolphins weren’t the descendants of such limbed creatures.

The Mars face, on the other hand, provides an example of weak or non-existent evidence. Yes, if there was an ancient Martian civilisation, then we might discover what appeared to be a huge face built on the surface of the planet. However, given pareidolia and the likelihood of face-like features being thrown up by chance, it’s about as likely that we would find such face-like features anyway, even if there were no alien civilisation. That’s why such features fail to provide strong evidence for such a civilisation.

So now consider our report of the UFO hanging over the nuclear power construction site. Are several such cases involving multiple witnesses and backed up by some hard evidence (eg, a radar blip) good evidence that there are alien craft in our skies? No. We should expect such hard-to-explain reports anyway, whether or not we’re visited by aliens. In which case, such reports are not strong evidence of alien visitors.

Being sceptical about such reports of alien craft, ghosts or fairies is not knee-jerk, fingers-in-our-ears naysaying. It’s just recognising that, though we might not be able to explain the reports, they’re likely to crop up occasionally anyway, whether or not alien visitors, ghosts or fairies actually exist. Consequently, they fail to provide strong evidence for such beings.

3. Extraordinary claims require extraordinary evidence

It was the scientist Carl Sagan who in 1980 said: ‘Extraordinary claims require extraordinary evidence.’ By an ‘extraordinary’ claim, Sagan appears to have meant an extraordinarily improbable claim, such as that Alice can fly by flapping her arms, or that she can move objects with her mind. On Sagan’s view, such claims require extraordinarily strong evidence before we should accept them – much stronger than the evidence required to support a far less improbable claim.

Suppose for example that Fred claims Alice visited him last night, sat on his sofa and drank a cup of tea. Ordinarily, we would just take Fred’s word for that. But suppose Fred adds that, during her visit, Alice flew around the room by flapping her arms. Of course, we’re not going to just take Fred’s word for that. It’s an extraordinary claim requiring extraordinary evidence.

If we’re starting from a very low base, probability-wise, then much more heavy lifting needs to be done by the evidence to raise the probability of the claim to a point where it might be reasonable to believe it. Clearly, Fred’s testimony about Alice flying around the room is not nearly strong enough.
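To see the heavy lifting in numbers, here is a minimal sketch. The priors and the strength assigned to Fred’s testimony are invented for illustration only:

```python
# A minimal sketch (assumed numbers) of Sagan's point in odds form:
# posterior odds = likelihood ratio x prior odds.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability by a likelihood ratio (Bayes factor)."""
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

lr_testimony = 100.0  # assume Fred's say-so is 100x more likely if the claim is true

# Mundane claim: Alice drank a cup of tea (assumed prior 0.5)
print(posterior_probability(0.5, lr_testimony))    # ~0.990: belief now reasonable

# Extraordinary claim: Alice flew by flapping her arms (assumed prior 1e-9)
print(posterior_probability(1e-9, lr_testimony))   # ~1e-7: still wildly improbable
```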

Similarly, given the low prior probability of the claims that someone communicated with a dead relative, or has fairies living in their local wood, or has miraculously raised someone from the dead, or can move physical objects with their mind, we should set the evidential bar much higher than we would for more mundane claims.

4. Beware accumulated anecdotes

Once we’ve formed an opinion, it can be tempting to notice only evidence that supports it and to ignore the rest. Psychologists call this tendency confirmation bias.

For example, suppose Simon claims a psychic ability to know the future. He can provide 100 examples of his predictions coming true, including one or two dramatic examples. In fact, Simon once predicted that a certain celebrity would die within 12 months, and they did!

Do these 100 examples provide us with strong evidence that Simon really does have some sort of psychic ability? Not if Simon actually made many thousands of predictions and most didn’t come true. Still, if we count only Simon’s ‘hits’ and ignore his ‘misses’, it’s easy to create the impression that he has some sort of ‘gift’.

Confirmation bias can also create the false impression that a therapy is effective. A long list of anecdotes about patients whose condition improved after a faith healing session can seem impressive. People may say: ‘Look at all this evidence! Clearly this therapy has some benefits!’ But the truth is that such accumulated anecdotes are usually largely worthless as evidence.

It’s also worth remembering that such stories are in any case often dubious. For example, they can be generated by the power of suggestion: tell people that a treatment will improve their condition, and many will report that it has, even if the treatment actually offers no genuine medical benefit.

Impressive anecdotes can also be generated by means of a little creative interpretation. Many believe that the 16th-century seer Nostradamus predicted many important historical events, from the Great Fire of London to the assassination of John F Kennedy. However, because Nostradamus’s prophecies are so vague, nobody was able to use his writings to predict any of these events before they occurred. Rather, his texts were later creatively interpreted to fit what subsequently happened. But that sort of ‘fit’ can be achieved whether Nostradamus had extraordinary abilities or not. In which case, as we saw under point 2 above, the ‘fit’ is not strong evidence of such abilities.

5. Beware ‘But it fits!’

Often, when we’re presented with strong evidence that our belief is false, we can easily change our mind. Show me I’m mistaken in believing that the Matterhorn is near Chamonix, and I’ll just drop that belief.

However, abandoning a belief isn’t always so easy. That’s particularly the case for beliefs in which we have invested a great deal emotionally, socially and/or financially. When it comes to religious and political beliefs, for example, or beliefs about the character of our close relatives, we can find it extraordinarily difficult to change our minds. Psychologists refer to the discomfort we feel in such situations – when our beliefs or attitudes are in conflict – as cognitive dissonance.

Perhaps the most obvious strategy we can employ when a belief in which we have invested a great deal is threatened is to start explaining away the evidence.

Here’s an example. Dave believes dogs are spies from the planet Venus – that dogs are Venusian imposters on Earth sending secret reports back to Venus in preparation for their imminent invasion of our planet. Dave’s friends present him with a great deal of evidence that he’s mistaken. But, given a little ingenuity, Dave finds he can always explain away that evidence:

‘Dave, dogs can’t even speak – how can they communicate with Venus?’

‘They can speak, they just hide their linguistic ability from us.’

‘But Dave, dogs don’t have transmitters by which they could relay their messages to Venus – we’ve searched their baskets: nothing there!’

‘Their transmitters are hidden in their brain!’

‘But we’ve X-rayed this dog’s brain – no transmitter!’

‘The transmitters are made from organic material indistinguishable from ordinary brain stuff.’

‘But we can’t detect any signals coming from dogs’ heads.’

‘This is advanced alien technology – beyond our ability to detect it!’

‘Look Dave, Venus can’t support dog life – it’s incredibly hot and swathed in clouds of acid.’

‘The dogs live in deep underground bunkers to protect them. Why do you think they want to leave Venus?!’

You can see how this conversation might continue ad infinitum. No matter how much evidence is presented to Dave, it’s always possible for him to cook up another explanation. And so he can continue to insist his belief is logically consistent with the evidence.

But, of course, despite the possibility of his endlessly explaining away any and all counterevidence, Dave’s belief is absurd. It’s certainly not confirmed by the available evidence about dogs. In fact, it’s powerfully disconfirmed.

The moral is: showing that your theory can be made to ‘fit’ – be consistent with – the evidence is not the same thing as showing your theory is confirmed by the evidence. However, those who hold weird beliefs often muddle consistency and confirmation.

Take young-Earth creationists, for example. They believe in the literal truth of the Biblical account of creation: that the entire Universe is under 10,000 years old, with all species being created as described in the Book of Genesis.

Polls indicate that a third or more of US citizens believe that the Universe is less than 10,000 years old. Of course, there’s a mountain of evidence against the belief. However, its proponents are adept at explaining away that evidence.

Take the fossil record embedded in sedimentary layers revealing that today’s species evolved from earlier species over many millions of years. Many young-Earth creationists explain away this record as a result of the Biblical flood, which they suppose drowned and then buried living things in huge mud deposits. The particular ordering of the fossils is supposedly accounted for by different ecological zones being submerged one after the other, starting with simple marine life. Take a look at the Answers in Genesis website developed by the Bible literalist Ken Ham, and you’ll discover how a great deal of other evidence for evolution and a billions-of-years-old Universe is similarly explained away. Ham believes that, by explaining away the evidence against young-Earth creationism in this way, he can show that his theory ‘fits’ – and so is scientifically confirmed by – that evidence:

Increasing numbers of scientists are realising that when you take the Bible as your basis and build your models of science and history upon it, all the evidence from the living animals and plants, the fossils, and the cultures fits. This confirms that the Bible really is the Word of God and can be trusted totally.
[my italics]

According to Ham, young-Earth creationists and evolutionists do the same thing: they look for ways to make the evidence fit the theory to which they have already committed themselves:

Evolutionists have their own framework … into which they try to fit the data.
[my italics]

But, of course, scientists haven’t just found ways of showing how the theory of evolution can be made consistent with the evidence. As we saw above, that theory really is strongly confirmed by the evidence.

Any theory, no matter how absurd, can, with sufficient ingenuity, be made to ‘fit’ the evidence: even Dave’s theory that dogs are Venusian spies. That’s not to say it’s reasonable or well confirmed.

Of course, it’s not always unreasonable to explain away evidence. Given overwhelming evidence that water boils at 100 degrees Celsius at 1 atmosphere, a single experiment that appeared to contradict that claim might reasonably be explained away as a result of some unidentified experimental error. But as we increasingly come to rely on explaining away evidence in order to try to convince ourselves of the reasonableness of our belief, we begin to drift into delusion.

Key points – How to think about weird things

  1. Expect unexplained false sightings and huge coincidences. Reports of mysterious and extraordinary hidden agents – such as angels, demons, spirits and gods – are to be expected, whether or not such beings exist. Huge coincidences – such as a piece of toast looking very face-like – are also more or less inevitable.
  2. Understand what strong evidence is. If the alleged evidence for a belief is scarcely more likely if the belief is true than if it’s false, then it’s not strong evidence.
  3. Extraordinary claims require extraordinary evidence. If a claim is extraordinarily improbable – eg, the claim that Alice flew round the room by flapping her arms – much stronger evidence is required for reasonable belief than is required for belief in a more mundane claim, such as that Alice drank a cup of tea.
  4. Beware accumulated anecdotes. A large number of reports of, say, people recovering after taking an alternative medicine or visiting a faith healer is not strong evidence that such treatments actually work.
  5. Beware ‘But it fits!’ Any theory, no matter how ludicrous (even the theory that dogs are spies from Venus), can, with sufficient ingenuity, always be made logically consistent with the evidence. That’s not to say it’s confirmed by the evidence.

Why it matters

Sometimes, belief in weird things is pretty harmless. What does it matter if Mary believes there are fairies at the bottom of her garden, or Joe thinks his dead aunty visits him occasionally? What does it matter if Sally is a closed-minded naysayer when it comes to belief in psychic powers? However, many of these beliefs have serious consequences.

Clearly, people can be exploited. Grieving parents contact spiritualists who offer to put them in contact with their dead children. Peddlers of alternative medicine and faith healing charge exorbitant fees for their ‘cures’ for terminal illnesses. If some alternative medicines really work, casually dismissing them out of hand and refusing to properly consider the evidence could also cost lives.

Lives have certainly been lost. Many have died who might have been saved because they believed they should reject conventional medicine and opted for ineffective alternatives.

Huge amounts of money are often also at stake when it comes to weird beliefs. Psychic reading and astrology are huge businesses with turnovers of billions of dollars per year. Often, it’s the most desperate who will turn to such businesses for advice. Are they, in reality, throwing their money away?

Many ‘weird’ beliefs also have huge social and political implications. The former US president Ronald Reagan and his wife Nancy were reported to have consulted an astrologer before making any major political decision. Conspiracy theories such as QAnon and the Sandy Hook hoax shape our current political landscape and feed extremist political thinking. Mainstream religions are often committed to miracles and gods.

In short, when it comes to belief in weird things, the stakes can be very high indeed. It matters that we don’t delude ourselves into thinking we’re being reasonable when we’re not.

The Atlantic article ‘The Cognitive Biases Tricking Your Brain’ (2018) by Ben Yagoda provides a great introduction to thinking that can lead us astray, including confirmation bias.

The UK-based magazine The Skeptic provides some high-quality free articles on belief in weird things. Well worth a subscription.

The Skeptical Inquirer magazine in the US is also excellent, and provides some free content.

The RationalWiki portal provides many excellent articles on pseudoscience.

The British mathematician Norman Fenton, professor of risk information management at Queen Mary University of London, provides a brief online introduction to Bayesian approaches to assessing evidence.

My book Believing Bullshit: How Not to Get Sucked into an Intellectual Black Hole (2011) identifies eight tricks of the trade that can turn flaky ideas into psychological flytraps – and how to avoid them.

The textbook How to Think About Weird Things: Critical Thinking for a New Age (2019, 8th ed) by the philosophers Theodore Schick and Lewis Vaughn offers step-by-step advice on sorting through reasons, evaluating evidence and judging the veracity of a claim.

The book Critical Thinking (2017) by Tom Chatfield offers a toolkit for what he calls ‘being reasonable in an unreasonable world’.

Our brains exist in a state of “controlled hallucination” (MIT Technology Review)

technologyreview.com

Matthew Hutson – August 25, 2021

Three new books lay bare the weirdness of how our brains process the world around us.

Eventually, vision scientists figured out what was happening. It wasn’t our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress. 

Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input. In Being You, Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the “inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies.” He contends that “experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body.” 

Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as “controlled hallucinations.” The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge. 
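As a rough sketch of that predict-and-update idea, here is a toy prediction-error loop of my own; it is an illustration of the general principle, not a model taken from Seth’s book:

```python
# Toy prediction-error loop (my illustration): a single internal estimate is
# nudged whenever the prediction and the incoming sensory signal diverge.

def update(estimate: float, observation: float, learning_rate: float = 0.1) -> float:
    """Move the internal estimate a fraction of the way toward what was sensed."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0                      # initial "best guess" about some quantity
observations = [1.0, 1.2, 0.9, 1.1, 1.0]

for obs in observations:
    estimate = update(estimate, obs)
    print(round(estimate, 3))       # the guess creeps toward the incoming signal
```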

“Chairs aren’t red,” Seth writes, “just as they aren’t ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light.” 

Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.”

Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. Re-creating the mind’s psychedelic powers in silicon, an artificial-intelligence-powered virtual-reality setup that he and his colleagues created produces a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the Sussex University campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.” 

Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick.

Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses, Susan Barry, an emeritus professor of neurobiology at Mount Holyoke College, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience:

At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy. 

Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry: 

The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. All the while, when I move, the stairs are skewing and changing.

Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to.

The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition, sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.” 

As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly changing, it’s surprising that we can recognize anything at all.”

In What Makes Us Smart, Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.”

“The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes. 

He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. “Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another,” he writes. If he’s correct, it’s impossible for even the most introspective and insightful among us to fully grasp what’s going on inside our own head. As Damji wrote in a letter to Barry:

When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow.

Matthew Hutson is a contributing writer at The New Yorker and a freelance science and tech writer.

A theory of my own mind (AEON)

Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people

https://pbs.twimg.com/media/D9xE74lW4AEArgC.jpg:large
Tokyo, 1996. Photo by Harry Gruyaert/Magnum

Stephen M Fleming is professor of cognitive neuroscience at University College London, where he leads the Metacognition Group. He is author of Know Thyself: The Science of Self-awareness (2021). Edited by Pam Weintraub

23 September 2021

In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?

In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’

Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:

Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?

Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.

Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.

Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.

Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.

A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.

This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’

An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.

The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’

Many philosophers since Ryle have considered the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:

Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.

But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’

Self-awareness is something that can be cultivated

Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.

Adelson’s checkerboard. Courtesy Wikipedia

Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.

These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.

Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.

We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.
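For readers curious how ‘confidence tracking performance’ can be quantified, here is one simple illustrative measure (not necessarily the exact measure used in the studies described): the probability that a randomly chosen correct trial carried higher confidence than a randomly chosen error. The data below are made up.

```python
# Illustrative confidence-accuracy tracking score (a type-2 AUROC-style measure):
# 0.5 means confidence carries no insight; 1.0 means perfect insight.

from itertools import product

def confidence_accuracy_tracking(confidences, correct_flags):
    """Probability a random correct trial had higher confidence than a random error."""
    hits = [c for c, ok in zip(confidences, correct_flags) if ok]
    misses = [c for c, ok in zip(confidences, correct_flags) if not ok]
    if not hits or not misses:
        raise ValueError("need both correct and incorrect trials")
    wins = sum(1.0 if h > m else 0.5 if h == m else 0.0
               for h, m in product(hits, misses))
    return wins / (len(hits) * len(misses))

conf = [0.9, 0.8, 0.6, 0.7, 0.5, 0.4]            # made-up confidence ratings
acc  = [True, True, False, True, False, False]   # made-up accuracy on each trial
print(confidence_accuracy_tracking(conf, acc))   # 1.0 here: confidence tracks accuracy
```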

This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.

This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.

Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it raises the question: what is this machinery? What do we mean by a ‘model’ of a mind, exactly?

Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.

A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.

There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.

Clear overlap between brain activations involved in metacognition and mindreading was observed

So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.

Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.

At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice-versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.

Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.

Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.

Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.

A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.

A simple route for improving self-awareness is to take a third-person perspective on ourselves

Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.

There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:

O wad some Power the giftie gie us
To see oursels as ithers see us!
It wad frae mony a blunder free us…

(Oh, would some Power give us the gift
To see ourselves as others see us!
It would from many a blunder free us…)

Why Our Brains Weren’t Made To Deal With Climate Change (NPR)

npr.org


April 19, 2016, 12:00 AM ET

SHANKAR VEDANTAM, HOST:

This is HIDDEN BRAIN. I’m Shankar Vedantam. Last year, my family and I took a vacation to Alaska. This was a much needed long-planned break. The best part, I got to walk on the top of a glacier.

(SOUNDBITE OF FOOTSTEPS)

VEDANTAM: The pale blue ice was translucent. Sharp ridges opened up into crevices dozens of feet deep. Every geological feature, every hill, every valley was sculpted in ice. It was a sunny day, and I spotted a small stream of melted water. I got on the ground and drank some. I wondered how long this water had remained frozen.

The little stream is not the only ice that’s melting in Alaska. The Mendenhall Glacier, one of the chief tourist attractions in Juneau, has retreated over one and a half miles in the last half-century. Today, you can only see a small sliver of the glacier’s tongue from a lookout. I caught up with John Neary, a forest service official, who tries to explain to visitors the scale of the changes that they’re witnessing.

JOHN NEARY: I would say that right now, we’re looking at a glacier that’s filling up. Out of our 180-degree view we have, we’re looking at maybe 10 or 15 degrees of it, whereas if we stood in this same place 100 years ago, it would have filled up about 160 degrees of our view.

VEDANTAM: You are kidding, 160 degrees of our view.

NEARY: Exactly. That’s the reality of how big this was, and it’s been retreating up this valley at about 40 or 50 feet a year, most recently 400 feet a year. And even more dramatically recently is the thinning and the narrowing as it’s just sort of collapsed in on itself in the bottom of this valley. Instead of dominating much of the valley and being able to see white as a large portion of the landscape, it’s now becoming this little ribbon that’s at the bottom.

VEDANTAM: John is a quiet, soft-spoken man. In recent years, as he’s watched the glacier literally recede before his eyes, he started to speak up, not just about what’s happening but what it means.

But as I was chatting with John, a visitor came up to talk to him. The man said he used to serve in the Air Force and had last seen the Mendenhall Glacier a quarter-century ago. There was a look in the man’s eyes. It was a combination of awe and horror. How could this have happened, the man asked John? Why is this happening?

NEARY: In many ways, people don’t want to grasp the reality. It’s a scary reality to try to grasp. And so what they naturally want to do is assume, well, this has always happened. It will happen in the future, and we’ll survive, won’t we? They want an assurance from me. But I don’t give it to them. I don’t think it’s my job to give them that assurance.

I think they need to grasp the reality of the fact that we are entering into a time when, yes, glacial advance and retreat has happened 25 different times to North America over its long life but never at the rate and the scale that we see now. And the very quick rapidity of it means that species probably won’t be able to adapt the way that they have in the past over a longer period of time.

VEDANTAM: To be clear, the Mendenhall Glacier’s retreat in and of itself is not proof of climate change. That evidence comes from a range of scientific measurements and calculations. But the glacier is a visible symbol of the changes that scientists are documenting.

It’s interesting I think when we – people think about climate change, it tends to be an abstract issue most of the time for most people, that you’re standing in front of this magnificent glacier right now and to actually see it receding makes it feel real and visceral in a way that it just isn’t when I’m living in Washington, D.C.

NEARY: No, I agree. I think that for too many people, the issue is some Micronesian island that’s having an extra inch of water this year on their shorelines or it’s some polar bears far up in the Arctic that they’re really not connected with.

But when they realize, they come here and they’re on this nice day like we’re experiencing right now with the warm sun and they start to think about this glacier melting and why it’s receding, why it’s disappearing, why it doesn’t look like that photo just 30 years ago up in the visitor’s center, it becomes real for them, and they have to start to grapple with the issues behind it.

(SOUNDBITE OF MUSIC)

VEDANTAM: I could see tourists turning these questions over in their minds as they watch the glacier. So even though I had not planned to do any reporting, I started interviewing people using the only device I had available, my phone.

DALE SINGER: I just think it’s a shame that we are losing something pretty precious and pretty different in the world.

VEDANTAM: This is Dale Singer (ph). She and her family came to Alaska on a cruise to celebrate a couple of family birthdays. This was her second trip to Mendenhall.

She came about nine years ago, but the weather was so foggy, she couldn’t get a good look. She felt compelled to come back. I asked Dale why she thought the glacier was retreating.

SINGER: Global warming, whether we like to admit it or not, it’s our fault. Or something we’re doing is affecting climate change.

VEDANTAM: Others are not so sure. For some of Dale’s fellow passengers on her cruise, this is a touchy topic.

SINGER: Somebody just said they went to a lecture and – on the ship, and the lecturer did not use the word global warming nor climate change because he didn’t want to offend passengers. So there are still people who refuse to admit it.

(SOUNDBITE OF MUSIC)

VEDANTAM: As I was standing next to John, one man carefully came up and listened to his account of the science of climate change. When John was done talking, the man told him that he wouldn’t trust scientists as far as he could throw them. Climate change was all about politics, he said.

I asked the man for an interview, but he declined. He said his company had contracts with the federal government. And if bureaucrats in the Obama administration heard his skeptical views on climate change, those contracts might mysteriously disappear. I caught up with another tourist. I asked Michael Bull (ph) if he believed climate change was real.

MICHAEL BULL: No, I think there’s global climate change, but I question whether it’s all due to human interaction with the Earth. Yes, you can’t deny that the climate is changing.

VEDANTAM: Yeah.

BULL: But the causation of that I’m not sold on as being our fault.

VEDANTAM: Michael was worried his tour bus might leave without him, so he answered my question about whether the glacier’s retreat was cause for alarm standing next to the idling bus.

BULL: So what’s the bad part of the glacier receding? And, you know, from what John said to me, if it’s the rate that which – and the Earth can’t adapt, that makes sense to me. But I think the final story is yet to be written.

VEDANTAM: Yeah.

BULL: I think Mother Earth pushes back. So I don’t think we’re going to destroy her because I think she’ll take care of us before we take care of her.

(SOUNDBITE OF MUSIC)

VEDANTAM: Nugget Falls is a beautiful waterfall that empties into Mendenhall Lake. When John first came to Alaska in 1982, the waterfall was adjacent to the glacier. Today, there’s a gap of three-quarters of a mile between the waterfall and the glacier.

SUE SCHULTZ: The glacier has receded unbelievably. It’s quite shocking.

VEDANTAM: This is Sue Schultz. She said she lived in Juneau back in the 1980s. This was her first time back in 28 years. What did it look like 28 years ago?

SCHULTZ: The bare rock that you see to the left as you face the glacier was glacier. And we used to hike on the other side of it. And you could take a trail right onto the glacier.

VEDANTAM: And what about this way? I understand the glacier actually came significantly over to this side…

SCHULTZ: Yes.

VEDANTAM: …Close to Nugget Falls.

SCHULTZ: Yes, it – that’s true. It was really close. In fact, the lake was a lot smaller, obviously (laughter). I mean, yeah, it’s quite incredible.

VEDANTAM: And so what’s your reaction when you see it?

SCHULTZ: Global warming, we need to pay attention.

(SOUNDBITE OF MUSIC)

TERRY LAMBERT: Even if it all melts, it’s not going to be the end of the world, so I’m not worried.

VEDANTAM: Terry Lambert is a tourist from Southern California. He’s never visited Mendenhall before. He thinks the melting glacier is just part of nature’s plan.

LAMBERT: Well, it’s just like earthquakes and floods and hurricanes. They’re all just all part of what’s going on. You can’t control it. You can’t change it. And I personally don’t think it’s something that man’s doing that’s making that melt.

VEDANTAM: I mentioned to Terry some of the possible consequences of climate change on various species. They could be changes. Species could – some species could be advantaged. Some species could be disadvantaged.

The ecosystem is changing. You’re going to have flooding. You’re going to have weather events, right? There could be consequences that affect you and I.

LAMBERT: Yes, but like I say, it’s so far in the future I’m not worried about it.

VEDANTAM: I realized at that moment that the debate over climate change is no longer really about science unless the science you’re talking about is the study of human behavior.

I asked John why he thought so many people were unwilling to accept the scientific consensus that climate change was having real consequences.

NEARY: The inability to do anything about it themselves – because it’s threatening to think about giving up your car, giving up your oil heater in your house or giving up, you know, many of the things that you’ve become accustomed to. They seem very threatening to them.

And, you know, really, I’ve looked at some of the brain science, actually, and talked to folks at NASA and Earth and Sky, and they’ve actually talked about how when that fear becomes overriding for people, they use a part of their brain that’s the very primitive part that has to react.

It has to instantly come to a conclusion so that it can lead to an action, whereas what we need to think about is get rid of that fear and start thinking logically. Start thinking creatively. Allow a different part of the brain to kick in and really think how we as humans can reverse this trend that we’ve caused.

VEDANTAM: Coming up, we explore why the human brain might not be well-designed to grapple with the threat of climate change and what we can do about it. Stay with us.

(SOUNDBITE OF MUSIC)

VEDANTAM: This is HIDDEN BRAIN. I’m Shankar Vedantam. While visiting the Mendenhall Glacier with my family last year, I started thinking more and more about the intersection between climate change and human behavior.

When I got back to Washington, D.C., I called George Marshall. He’s an environmentalist who, like John Neary, tries to educate people about global climate change.

GEORGE MARSHALL: I am the founder of Climate Outreach, and I’m the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.”

VEDANTAM: As the book’s title suggests, George believes that the biggest roadblock in the battle against climate change may lie inside the human brain. I call George at his home in Wales.

(SOUNDBITE OF MUSIC)

VEDANTAM: You’ve spent some time talking with Daniel Kahneman, the famous psychologist who won the Nobel Prize in economics. And he actually presented a very pessimistic view about whether we would ever come to terms with the threat of climate change.

MARSHALL: He said to me that we are as humans very poor at dealing with issues further in the future. We tend to be very focused on the short term. We tend to discount – that would be the economic term – to reduce the value of things happening in the future the further away they are.

He says we’re very cost averse. So that’s to say when there is a reward, we respond strongly. But when there’s a cost, we prefer to push it away – just as, you know, I myself would try and leave until the very last minute, you know, filling in my tax return. I mean, it’s just I don’t want to deal with these things. And he says, well, we’re reluctant to deal with uncertainty.

If things aren’t certain, we – or we perceive them to be, we just say, well, come back and tell me when they’re certain. What he said to me was in his view that climate change is the worst possible combination because it’s not only in the future but it’s also in the future and uncertain, and it’s in the future uncertain and involving costs.

And his own experiments – and he’s done many, many of these over the years – show that in this combination, we have a very strong tendency just to push things on one side. And I think this in some ways explains how so many people if you ask them will say, yes, I regard climate change to be a threat.

But if you go and you ask them – and this happens every year in surveys – what are the most important issues, what are the – strangely, almost everybody seems to forget about climate change. So when we focus on it, we know it’s there, but we can somehow push it away.

VEDANTAM: You tell an amusing story in your book about some colleagues who were worried about a cellphone tower being erected in their neighborhood…

MARSHALL: (Laughter).

VEDANTAM: …And the very, very different reaction of these colleagues to the cellphone tower then to it’s sort of the amorphous threat of climate change.

MARSHALL: They were my neighbors, my entire community. I was living at that time in Oxford, which is – many of your listeners know is a university town. So it would be like living in, you know, Harvard or Berkeley or somewhere where most of the people were in various ways involved in the university, highly educated. A mobile phone mast is being set up in the middle, alongside, actually, a school playground – enormous outcry. Everybody mobilized.

Down to the local church hall, they were all going to stop it. People were even going to lay themselves down in front of the bulldozers to prevent it because it was here. It was now. There was an enemy, which was this external mobile phone company that was going to come and put up this mast. It brings in the threat psychologists would call the absolute fear of radiation. This is what’s called a dread fear and so on.

Now, the science, if we go back to the core science, says that this mobile phone mast is, as far as we could possibly say, harmless. You know, the amount of radiation of any kind you get off a single mobile phone mast has never been found to have the slightest impact on anyone. But they were very mobilized. At the same time, when I tried to get the same people to a meeting about climate change, none of them would come. It simply didn’t have those qualities.

VEDANTAM: You have a very revealing anecdote in your book about the economist Thomas Schelling, who was once in a major traffic jam.

MARSHALL: So Schelling, again, a Nobel prize-winning economist, and he’s wondering what’s going on. The traffic is moving very, very, very slowly, and then they’re creeping along and creeping along, and half an hour along the road, they finally realized what had happened.

But there’s a mattress lying right in the middle of the middle lane of the road. What happens, he notices – and he does the same – is that when they reach the mattress, people simply drive past it and keep going. In other words, the thing that had caused them to become delayed was not something that anyone was prepared to stop and remove from the road.

They just leave the mattress there, and then they keep driving past. Because in a way, why would they remove that mattress from the road because they have already paid the price of getting there? They’ve already had the delay. It’s something where the benefit goes to other people. The argument being that, of course, it’s very hard, especially when people are motivated largely through personal rewards, to get them to do things.

VEDANTAM: It’s interesting that the same narrative affects the way we talk about climate change internationally. There are many countries who now say, look, you know, I’ve already paid the price. I’m paying the price right now for the actions of other people for the, you know, things that other people have or have not done.

I’m bearing that cost, and you’re asking me now to get out of my car, pull the mattress off the road to bear an additional cost. And the only people who will benefit from that are people who are not me. The collective problems in the end have personal consequences.

MARSHALL: I have to say that the way one talks about this also shows how interpretation is biased by your own politics or your own view. This has been labeled for a long time the tragedy of the commons – the idea being that people will, if it’s in their own self-interest, destroy the very thing that sustains them, because it’s not in their personal interest to do something if they don’t see other people doing it. And in a way, it’s understandable.

But of course, that depends on a view of a world where you see people as being motivated entirely by their own personal rewards. We also know that people are motivated by their sense of identity and their sense of belonging. And we know very well not least of all in times of major conflict or war that people are prepared to make enormous personal sacrifices from which they personally derive nothing except loss, but they’re making that in the interests of the greater good.

For a long time with climate change, we’ve made a mistake of talking about this solely in terms of something which is economic. What are the economic costs, and what are the economic benefits? And we still do this. But of course, really, the motivation for why we want to act on this is that we want to defend a world we care about and a world we love, and we want to do so for ourselves and for the people who are yet to come.

VEDANTAM: So, George, there obviously is one domain in life where you can see people constantly placing these sacred values above their selfish self-interest. You know, I’m thinking here about the many, many religions we have in the world that get people to do all kinds of things that an economist would say is not in their rational self-interest.

People give up food. People give up water. People, you know, suffer enormous personal privations. People sometimes choose chastity for life – I mean, huge costs that people are willing to bear. And they’re not doing it because someone says, at the end of the year, I’m going to give you an extra 200 bucks in your paycheck or an extra $2,000 in your paycheck. They’re doing it because they believe these are sacred values that are not negotiable.

MARSHALL: No, well, and not just economists would find those behaviors strange, but Professor Kahneman or kind of pure cognitive psychology might as well because these are people who are struggling with and – but also believe passionately in things which are in the long-term extremely uncertain and require personal cost. And yet people do so.

It’s very important to stress that, you know, when we try and when we talk about climate change and religion, that there’s absolutely no sense at all that climate change is or can or should ever be like a religion. It’s not. It’s grounded in science. But we can also learn, I think, a great deal from religions about how to approach these issues, these uncertain issues, and how to create, I think, a community of shared belief and shared conviction that something is important.

VEDANTAM: Right. I mean, if you look at sort of human history with sort of the broad view, you know, you don’t actually have to be a religious person to acknowledge that religion has played a very, very important role in the lives of millions of people over thousands of years.

And if it’s done so, then a scientific approach would say, there is something about the nature of religious belief or the practice of religion that harnesses what our brains can accommodate, that they harness our yearning to be part of a tribe, our yearning to be connected to deeper and grander values than ourselves, our yearning in some ways to do things for our fellow person in a way that might not be tangible in the here and now but might actually pay off as you say not just for future generations but even in the hereafter.

MARSHALL: Well, and the faiths that dominate, the half a dozen faiths which are the strongest ones in the world, are the ones that have been best at doing that. There’s a big mistake with climate change: because it comes from science, we assume it just somehow soaks into us.

It’s very clear that just hitting people over the head with more and more and more data and graphs isn’t working. On my Internet feed – I’m on all of the main scientific feeds – there is a new paper every day that says that not only is it bad, but it’s worse than we thought, and it’s extremely, extremely serious – so serious, actually, that we’re finding it very hard even to find the words to describe it. That doesn’t move people. In fact, actually, it tends to push them away.

However, if we can understand that there are other things which bind us together, I think that we can find yet new language. I think it’s also very important to recognize that the divides that are on climate change are social, not scientific. They’re social and political – the single biggest determinant of whether you accept it or you don’t accept it is your political values.

And that suggests that the solutions to this are not scientific, and maybe not even psychological. They’re cultural. We have to find ways of saying, sure, you know, we are going to disagree on things politically, but we have things in common that we all care about that are going to have to bring us together.

VEDANTAM: George Marshall is the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.” George, thank you for joining me today on HIDDEN BRAIN.

MARSHALL: You’re very welcome. I enjoyed it. Thank you.

VEDANTAM: The HIDDEN BRAIN podcast is produced by Kara McGuirk-Alison, Maggie Penman and Max Nesterak. Special thanks this week to Daniel Schuken (ph). To continue the conversation about human behavior and climate change, join us on Facebook and Twitter.

If you liked this episode, consider giving us a review on iTunes or wherever you listen to your podcasts so others can find us. I’m Shankar Vedantam, and this is NPR.


Greater than the sum of our parts: The evolution of collective intelligence (EurekaAlert!)

News Release 15-Jun-2021

University of Cambridge

Research News

The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.

New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution, entitled ‘Complementary Cognition’, which suggests that in adapting to dramatic environmental and climatic variability our ancestors evolved to specialise in different, but complementary, ways of thinking.

Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level, but instead of underlying physical adaptation, it may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that this evolved in concert with specialisation in human cognition.”

The theory of complementary cognition proposes that our species cooperatively adapts and evolves culturally through a system of collective cognitive search, operating alongside genetic search, which enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can be interpreted as a ‘search’ process), and cognitive search, which enables behavioural adaptation.

Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”

Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.

The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.

Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”

Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”

At the same time, this may also provide insights into understanding the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies and cooperatively adapting would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. But in periods of greater stability and abundance when adaptive knowledge did not become obsolete at such a rate, it would have instead accumulated, and as such Complementary Cognition may also be a key factor in explaining cumulative cultural evolution.

Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.

Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”

“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”

Human Brain Limit of ‘150 Friends’ Doesn’t Check Out, New Study Claims (Science Alert)

Peter Dockrill – 5 MAY 2021


It’s called Dunbar’s number: an influential and oft-repeated theory suggesting the average person can only maintain about 150 stable social relationships with other people.

Proposed by British anthropologist and evolutionary psychologist Robin Dunbar in the early 1990s, Dunbar’s number, extrapolated from research into primate brain sizes and their social groups, has since become a ubiquitous part of the discourse on human social networks.

But just how legitimate is the science behind Dunbar’s number anyway? According to a new analysis by researchers from Stockholm University in Sweden, Dunbar’s famous figure doesn’t add up.

“The theoretical foundation of Dunbar’s number is shaky,” says zoologist and cultural evolution researcher Patrik Lindenfors.

“Other primates’ brains do not handle information exactly as human brains do, and primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are.”

Dunbar’s number was originally predicated on the idea that the volume of the neocortex in primate brains functions as a constraint on the size of the social groups they circulate amongst.

“It is suggested that the number of neocortical neurons limits the organism’s information-processing capacity and that this then limits the number of relationships that an individual can monitor simultaneously,” Dunbar explained in his foundational 1992 study.

“When a group’s size exceeds this limit, it becomes unstable and begins to fragment. This then places an upper limit on the size of groups which any given species can maintain as cohesive social units through time.”

Dunbar began extrapolating the theory to human networks in 1993, and in the decades since has authored and co-authored copious related research output examining the behavioral and cognitive mechanisms underpinning sociality in both humans and other primates.

But as to the original question of whether neocortex size serves as a valid constraint on group size beyond non-human primates, Lindenfors and his team aren’t so sure.

While a number of studies have offered support for Dunbar’s ideas, the new study debunks the claim that neocortex size in primates is equally pertinent to human socialization parameters.

“It is not possible to make an estimate for humans with any precision using available methods and data,” says evolutionary biologist Andreas Wartel.

In their study, the researchers used modern statistical methods, including Bayesian and generalized least-squares (GLS) analyses, to take another look at the relationship between group size and brain/neocortex size in primates, drawing on updated primate-brain datasets.

The results suggested that stable human group sizes might ultimately be much smaller than 150 individuals – one analysis put the average limit at around 42 individuals, while another produced estimates ranging from 70 to 107.

Ultimately, however, the enormous amount of imprecision in the statistics suggests that any method like this – trying to compute an average number of stable relationships for any human individual based on brain-volume considerations – is unreliable at best.
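To make that imprecision concrete, here is a minimal sketch in Python of the kind of extrapolation at issue. It is not the study’s code or data: it fits a log-log regression of group size on neocortex ratio to synthetic, primate-like numbers and then asks for a 95% prediction interval at a human-like neocortex ratio of roughly 4.1, the value Dunbar used in his original extrapolation. Even with modest scatter, the single point estimate comes with a prediction interval wide enough to make any one headline number look arbitrary.

```python
# Minimal sketch (synthetic data, not the study's dataset): why extrapolating
# a human "group size" from a primate brain regression is so imprecise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical primate data: neocortex ratio and log group size.
neocortex_ratio = rng.uniform(1.0, 3.5, size=40)
log_group_size = 0.5 + 1.5 * np.log(neocortex_ratio) + rng.normal(0, 0.6, size=40)

X = sm.add_constant(np.log(neocortex_ratio))
fit = sm.OLS(log_group_size, X).fit()

# Extrapolate to a human-like neocortex ratio (~4.1 in Dunbar's original paper).
x_human = sm.add_constant(np.log([4.1]), has_constant="add")
pred = fit.get_prediction(x_human).summary_frame(alpha=0.05)

point = np.exp(pred["mean"].iloc[0])
low, high = np.exp(pred["obs_ci_lower"].iloc[0]), np.exp(pred["obs_ci_upper"].iloc[0])
print(f"point estimate ~{point:.0f}, 95% prediction interval ~{low:.0f} to {high:.0f}")
```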

“Specifying any one number is futile,” the researchers write in their study. “A cognitive limit on human group size cannot be derived in this manner.”

Despite the mainstream attention Dunbar’s number enjoys, the researchers say the majority of primate social evolution research focuses on socio-ecological factors, including foraging and predation, infanticide, and sexual selection – not so much calculations dependent on brain or neocortex volume.

Further, the researchers argue that Dunbar’s number ignores other significant differences in brain physiology between human and non-human primate brains – including that humans develop cultural mechanisms and social structures that can counter socially limiting cognitive factors that might otherwise apply to non-human primates.

“Ecological research on primate sociality, the uniqueness of human thinking, and empirical observations all indicate that there is no hard cognitive limit on human sociality,” the team explains.

“It is our hope, though perhaps futile, that this study will put an end to the use of ‘Dunbar’s number’ within science and in popular media.”

The findings are reported in Biology Letters.

How Facebook got addicted to spreading misinformation (MIT Tech Review)

technologyreview.com

Karen Hao, March 11, 2021


Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
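As an illustration only – with hypothetical users, features, and click data rather than anything from Facebook – the sketch below trains a tiny click-prediction model of the kind described above and uses it to decide whether to show an ad to a new user. The point is simply that the trained model automates a decision by reproducing whatever correlations sit in the click logs.

```python
# Minimal sketch, on made-up data: learn click correlations, then serve ads
# by predicted click probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical impressions: [age, is_female, liked_yoga_pages] per row.
X = rng.integers(0, 2, size=(1000, 3)).astype(float)
X[:, 0] = rng.integers(18, 65, size=1000)            # age column
# Simulated ground truth: yoga-page fans click the leggings ad more often.
click_prob = 0.05 + 0.25 * X[:, 2] + 0.05 * X[:, 1]
y = rng.random(1000) < click_prob                     # did the user click?

model = LogisticRegression(max_iter=1000).fit(X, y)

# Serving decision for one new user: show the ad if predicted CTR is high enough.
new_user = np.array([[29, 1, 1]])                     # 29-year-old woman who likes yoga pages
predicted_ctr = model.predict_proba(new_user)[0, 1]
print(f"predicted click-through rate: {predicted_ctr:.2%}")
```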

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

“That’s how you know what’s on his mind. I was always, for a couple of years, a few steps from Mark’s desk.”

Joaquin Quiñonero Candela

In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
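For concreteness, here is a minimal sketch of how an L6/7-style metric can be computed from raw login records. The events and the choice of denominator (users active at all in the window) are assumptions for the example, not a description of Facebook’s actual pipeline.

```python
# Minimal sketch: fraction of users who logged in on at least 6 of the
# previous 7 days, computed from (user_id, login_date) events.
from datetime import date, timedelta

login_events = [
    ("alice", date(2024, 3, 1)), ("alice", date(2024, 3, 2)),
    ("alice", date(2024, 3, 3)), ("alice", date(2024, 3, 4)),
    ("alice", date(2024, 3, 5)), ("alice", date(2024, 3, 6)),
    ("bob",   date(2024, 3, 2)), ("bob",   date(2024, 3, 5)),
]

def l6_over_7(events, as_of):
    """Share of users (among those active in the window) with logins on >= 6 of the 7 days before `as_of`."""
    window = {as_of - timedelta(days=d) for d in range(1, 8)}
    days_per_user = {}
    for user, day in events:
        if day in window:
            days_per_user.setdefault(user, set()).add(day)
    if not days_per_user:
        return 0.0
    qualifying = sum(1 for days in days_per_user.values() if len(days) >= 6)
    return qualifying / len(days_per_user)

print(l6_over_7(login_events, as_of=date(2024, 3, 7)))  # 0.5 - only alice qualifies
```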

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
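The gating step Gade describes boils down to a comparison and a threshold. The sketch below is purely illustrative – the engagement metric, the 1 percent tolerance, and the data structure are all invented for the example, not taken from Facebook:

```python
# Illustrative gating logic: ship a candidate ranking model only if the
# engagement drop in the test group stays within a tolerated threshold.
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    control_engagement: float   # e.g. mean likes + comments + shares per user per day
    test_engagement: float

def ship_decision(result: ExperimentResult, max_drop: float = 0.01) -> str:
    """'discard' if test engagement fell by more than `max_drop` (relative), else deploy."""
    relative_change = (result.test_engagement - result.control_engagement) / result.control_engagement
    return "discard" if relative_change < -max_drop else "deploy and keep monitoring"

# A model that costs 3% engagement is rejected under a 1% tolerance.
print(ship_decision(ExperimentResult(control_engagement=12.0, test_engagement=11.64)))
```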

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

“The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?”

A former AI researcher who joined in 2018

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.


Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.
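In its simplest form, “measuring the bias in AI models” means comparing a model’s error rates across user groups on held-out data. The sketch below is an illustration of that idea only, with invented examples and group labels; it is not the internal tool mentioned above.

```python
# Illustrative bias check: compare a classifier's false-positive rate across groups.
from collections import defaultdict

# Hypothetical held-out examples: (group, true_label, predicted_label)
examples = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1),
]

def false_positive_rates(rows):
    """Per group: fraction of true negatives that the model predicted positive."""
    negatives, false_pos = defaultdict(int), defaultdict(int)
    for group, truth, pred in rows:
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

rates = false_positive_rates(examples)
print(rates)                                      # {'group_a': 0.5, 'group_b': 1.0}
print(max(rates.values()) - min(rates.values()))  # the gap is one crude bias measure
```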

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
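A rough sketch of what that proposal amounts to in code follows. Everything here is hypothetical – including the stand-in extremity scorer – since the idea was only ever discussed: comments scoring above a threshold are collapsed by default but stay one click away, rather than being deleted.

```python
# Illustrative only: collapse (don't delete) comments that a sentiment/extremity
# model scores above a threshold, leaving a "show anyway" option for the reader.
from typing import Callable, Dict, List

def render_comments(comments: List[str],
                    extremity_score: Callable[[str], float],
                    hide_threshold: float = 0.8) -> List[Dict]:
    """Mark comments whose score exceeds the threshold as hidden by default."""
    rendered = []
    for text in comments:
        rendered.append({
            "text": text,
            "hidden_by_default": extremity_score(text) >= hide_threshold,  # reader can still reveal it
        })
    return rendered

# Toy stand-in scorer: treats all-caps shouting as "extreme".
toy_score = lambda text: 0.9 if text.isupper() else 0.1
print(render_comments(["I respectfully disagree.", "THEY ARE ALL TRAITORS"], toy_score))
```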

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

A chart titled "natural engagement pattern" that shows allowed content on the X axis, engagement on the Y axis, and an exponential increase in engagement as content nears the policy line for prohibited content.

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

A chart titled "adjusted to discourage borderline content" that shows the same chart but the curve inverted to reach no engagement when it reaches the policy line.

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
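
The article describes Fairness Flow only at a high level, but the kind of per-group measurement it performs can be sketched in a few lines. The group labels, tolerance, and accuracy floor below are illustrative assumptions, not Facebook's actual parameters:

from collections import defaultdict

def per_group_accuracy(examples):
    # examples: iterable of (group, prediction, true_label) triples,
    # e.g. a speech model's output tagged by the speaker's accent.
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, label in examples:
        total[group] += 1
        correct[group] += int(prediction == label)
    return {group: correct[group] / total[group] for group in total}

def equal_accuracy(accuracy_by_group, tolerance=0.02):
    # Definition 1: every group's accuracy lies within a narrow band.
    values = accuracy_by_group.values()
    return max(values) - min(values) <= tolerance

def minimum_accuracy(accuracy_by_group, floor=0.90):
    # Definition 2: every group clears a minimum bar, even if gaps remain.
    return all(accuracy >= floor for accuracy in accuracy_by_group.values())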

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
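
A toy calculation (the numbers are invented purely to show the arithmetic) makes the conflict concrete: if one group genuinely posts more misinformation, forcing the detector to affect both groups equally means leaving part of that misinformation untouched.

posts = {"group_a": 1000, "group_b": 1000}         # posts reviewed per group
misinfo_rate = {"group_a": 0.12, "group_b": 0.06}  # hypothetical base rates

# A detector that flags misinformation wherever it appears will flag
# roughly in proportion to each group's base rate:
flags = {g: posts[g] * misinfo_rate[g] for g in posts}  # {'group_a': 120.0, 'group_b': 60.0}

# Requiring "equal impact" instead caps every group at the lowest flag
# count, so 60 genuinely misleading posts in group_a simply go unflagged:
cap = min(flags.values())
equalized = {g: min(flags[g], cap) for g in flags}
missed = sum(flags[g] - equalized[g] for g in flags)
print(equalized, missed)  # {'group_a': 60.0, 'group_b': 60.0} 60.0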


This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

People with extremist views less able to do complex mental tasks, research suggests (The Guardian)

theguardian.com

Natalie Grover, 22 Feb 2021


Cambridge University team say their findings could be used to spot people at risk from radicalisation
A key finding of the psychologists was that people with extremist attitudes tended to think about the world in a black and white way. Photograph: designer491/Getty Images/iStockphoto

Our brains hold clues for the ideologies we choose to live by, according to research, which has suggested that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.

Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpts ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.

The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.

The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participants’ perception and learning, and their ability to engage in complex and strategic mental processing.

Overall, the researchers found that ideological attitudes mirrored cognitive decision-making, according to the study published in the journal Philosophical Transactions of the Royal Society B.

A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.

She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”

Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.

“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.

In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.

“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”

The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.

The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.

“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”

Hoarding and herding during the COVID-19 pandemic (Science Daily)

The coronavirus pandemic has triggered some interesting and unusual changes in our buying behavior

Date: September 10, 2020

Source: University of Technology Sydney

Summary: Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper.

Rushing to stock up on toilet paper before it vanished from the supermarket aisle, stashing cash under the mattress, purchasing a puppy or perhaps planting a vegetable patch — the COVID-19 pandemic has triggered some interesting and unusual changes in our behavior.

Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper published in the Journal of Behavioral Economics for Policy.

‘Hoarding in the age of COVID-19’ by behavioral economist Professor Michelle Baddeley, Deputy Dean of Research at the University of Technology Sydney (UTS) Business School, examines a range of cross-disciplinary explanations for hoarding and other behavior changes observed during the pandemic.

“Understanding these economic, social and psychological responses to COVID-19 can help governments and policymakers adapt their policies to limit negative impacts, and nudge us towards better health and economic outcomes,” says Professor Baddeley.

Governments around the world have implemented behavioral insights units to help guide public policy, and influence public decision-making and compliance.

Hoarding behavior, where people collect or accumulate things such as money or food in excess of their immediate needs, can lead to shortages, or in the case of hoarding cash, have negative impacts on the economy.

“In economics, hoarding is often explored in the context of savings. When consumer confidence is down, spending drops and households increase their savings if they can, because they expect bad times ahead,” explains Professor Baddeley.

“Fear and anxiety also have an impact on financial markets. The VIX ‘fear’ index of financial market volatility saw a dramatic 564% increase between November 2019 and March 2020, as investors rushed to move their money into ‘safe haven’ investments such as bonds.”

While shifts in savings and investments in the face of a pandemic might make economic sense, the hoarding of toilet paper, which also occurred across the globe, is more difficult to explain in traditional economic terms, says Professor Baddeley.

Behavioural economics reveals that our decisions are not always rational or in our long term interest, and can be influenced by a wide range of psychological factors and unconscious biases, particularly in times of uncertainty.

“Evolved instincts dominate in stressful situations, as a response to panic and anxiety. During times of stress and deprivation, not only people but also many animals show a propensity to hoard.”

Another instinct that can come to the fore, particularly in times of stress, is the desire to follow the herd, says Professor Baddeley, whose book ‘Copycats and Contrarians’ explores the concept of herding in greater detail.

“Our propensity to follow others is complex. Some of our reasons for herding are well-reasoned. Herding can be a type of heuristic: a decision-making short-cut that saves us time and cognitive effort,” she says.

“When other people’s choices might be a useful source of information, we use a herding heuristic and follow them because we believe they have good reasons for their actions. We might choose to eat at a busy restaurant because we assume the other diners know it is a good place to eat.

“However numerous experiments from social psychology also show that we can be blindly susceptible to the influence of others. So when we see others rushing to the shops to buy toilet paper, we fear missing out and follow the herd. It then becomes a self-fulfilling prophecy.”

Behavioral economics also highlights the importance of social conventions and norms in our decision-making processes, and this is where rules can serve an important purpose, says Professor Baddeley.

“Most people are generally law abiding but they might not wear a mask if they think it makes them look like a bit of a nerd, or overanxious. If there is a rule saying you have to wear a mask, this gives people guidance and clarity, and it stops them worrying about what others think.

“So the normative power of rules is very important. Behavioral insights and nudges can then support these rules and policies, to help governments and business prepare for second waves, future pandemics or other global crises.”


Story Source:

Materials provided by University of Technology Sydney. Original written by Leilah Schubert. Note: Content may be edited for style and length.


Journal Reference:

  1. Michelle Baddeley. Hoarding in the age of COVID-19. Journal of Behavioral Economics for Policy, 2020; 4(S): 69-75 [abstract]

The remarkable ways animals understand numbers (BBC Future)

bbc.com

Andreas Nieder, September 7, 2020


For some species there is strength and safety in numbers (Credit: Press Association)

Humans as a species are adept at using numbers, but our mathematical ability is something we share with a surprising array of other creatures.

One of the key findings over the past decades is that our number faculty is deeply rooted in our biological ancestry, and not based on our ability to use language. Considering the multitude of situations in which we humans use numerical information, life without numbers is inconceivable.

But what was the benefit of numerical competence for our ancestors, before they became Homo sapiens? Why would animals crunch numbers in the first place?

It turns out that processing numbers offers a significant benefit for survival, which is why this behavioural trait is present in many animal populations. Several studies examining animals in their ecological environments suggest that representing numbers enhances an animal’s ability to exploit food sources, hunt prey, avoid predation, navigate its habitat, and persist in social interactions.

Before numerically competent animals evolved on the planet, single-celled microscopic bacteria – the oldest living organisms on Earth – already exploited quantitative information. The way bacteria make a living is through their consumption of nutrients from their environment. Mostly, they grow and divide themselves to multiply. However, in recent years, microbiologists have discovered they also have a social life and are able to sense the presence or absence of other bacteria. In other words, they can sense the number of bacteria.

Take, for example, the marine bacterium Vibrio fischeri. It has a special property that allows it to produce light through a process called bioluminescence, similar to how fireflies give off light. If these bacteria are in dilute water solutions (where they are essentially alone), they make no light. But when they grow to a certain cell number, they all produce light simultaneously. Therefore, Vibrio fischeri can distinguish when they are alone and when they are together.


Sometimes the numbers don’t add up when predators are trying to work out which prey to target (Credit: Alamy)

It turns out they do this using a chemical language. They secrete communication molecules, and the concentration of these molecules in the water increases in proportion to the cell number. And when this molecule hits a certain amount, called a “quorum”, it tells the other bacteria how many neighbours there are, and all the bacteria glow.

This behaviour is called “quorum sensing” – the bacteria vote with signalling molecules, the vote gets counted, and if a certain threshold (the quorum) is reached, every bacterium responds. This behaviour is not just an anomaly of Vibrio fischeri – all bacteria use this sort of quorum sensing to communicate their cell number in an indirect way via signalling molecules.
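
The quorum-sensing logic is simple enough to sketch in a few lines; the constants below are arbitrary illustrations, not measured values for Vibrio fischeri:

def population_glows(cell_count: int,
                     signal_per_cell: float = 1.0,
                     quorum: float = 1000.0) -> bool:
    # Every cell secretes the same amount of signalling molecule, so the
    # shared concentration is proportional to the number of cells; no cell
    # ever counts its neighbours directly.
    concentration = cell_count * signal_per_cell
    return concentration >= quorum

for n in (10, 500, 1000, 5000):
    print(n, "glow" if population_glows(n) else "dark")
# 10 dark, 500 dark, 1000 glow, 5000 glow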

Remarkably, quorum sensing is not confined to bacteria – animals use it to get around, too. Japanese ants (Myrmecina nipponica), for example, decide to move their colony to a new location if they sense a quorum. In this form of consensus decision making, ants start to transport their brood together with the entire colony to a new site only if a defined number of ants are present at the destination site. Only then, they decide, is it safe to move the colony.

Numerical cognition also plays a vital role when it comes to both navigation and developing efficient foraging strategies. In 2008, biologists Marie Dacke and Mandyam Srinivasan performed an elegant and thoroughly controlled experiment in which they found that bees are able to estimate the number of landmarks in a flight tunnel to reach a food source – even when the spatial layout is changed. Honeybees rely on landmarks to measure the distance of a food source to the hive. Assessing numbers is vital to their survival.

When it comes to optimal foraging, “going for more” is a good rule of thumb in most cases, and seems obvious when you think about it, but sometimes the opposite strategy is favourable. The field mouse loves live ants, but ants are dangerous prey because they bite when threatened. When a field mouse is placed into an arena with two groups of ants of different sizes, it surprisingly “goes for less”. In one study, mice that could choose between five versus 15, five versus 30, and 10 versus 30 ants always preferred the smaller quantity of ants. The field mice seem to pick the smaller ant group in order to ensure comfortable hunting and to avoid getting bitten frequently.

Numerical cues play a significant role when it comes to hunting prey in groups, as well. The probability, for example, that wolves capture elk or bison varies with the group size of a hunting party. Wolves often hunt large prey, such as elk and bison, but large prey can kick, gore, and stomp wolves to death. Therefore, there is incentive to “hold back” and let others go in for the kill, particularly in larger hunting parties. As a consequence, wolves have an optimal group size for hunting different prey. For elks, capture success levels off at two to six wolves. However, for bison, the most formidable prey, nine to 13 wolves are the best guarantor of success. Therefore, for wolves, there is “strength in numbers” during hunting, but only up to a certain number that is dependent on the toughness of their prey.

Animals that are more or less defenceless often seek shelter among large groups of social companions – the strength-in-numbers survival strategy hardly needs explaining. But hiding out in large groups is not the only anti-predation strategy involving numerical competence.

In 2005, a team of biologists at the University of Washington found that black-capped chickadees in North America developed a surprising way to announce the presence and dangerousness of a predator. Like many other animals, chickadees produce alarm calls when they detect a potential predator, such as a hawk, to warn their fellow chickadees. For stationary predators, these little songbirds use their namesake “chick-a-dee” alarm call. It has been shown that the number of “dee” notes at the end of this alarm call indicates the danger level of a predator.


Chickadees produce different numbers of “dee” notes at the end of their call depending on danger they have spotted (Credit: Getty Images)

A call such as “chick-a-dee-dee” with only two “dee” notes may indicate a rather harmless great grey owl. Great grey owls are too big to manoeuvre and follow the agile chickadees in woodland, so they aren’t a serious threat. In contrast, manoeuvring between trees is no problem for the small pygmy owl, which is why it is one of the most dangerous predators for these small birds. When chickadees see a pygmy owl, they increase the number of “dee” notes and call “chick-a-dee-dee-dee-dee.” Here, the number of sounds serves as an active anti-predation strategy.

Groups and group size also matter if resources cannot be defended by individuals alone – and the ability to assess the number of individuals in one’s own group relative to the opponent party is of clear adaptive value.

Several mammalian species have been investigated in the wild, and the common finding is that numerical advantage determines the outcome of such fights. In a pioneering study, zoologist Karen McComb and co-workers at the University of Sussex investigated the spontaneous behaviour of female lions at the Serengeti National Park when facing intruders. The authors exploited the fact that wild animals respond to vocalisations played through a speaker as though real individuals were present. If the playback sounds like a foreign lion that poses a threat, the lionesses would aggressively approach the speaker as the source of the enemy. In this acoustic playback study, the authors mimicked hostile intrusion by playing the roaring of unfamiliar lionesses to residents.

Two conditions were presented to subjects: either the recordings of single female lions roaring, or of groups of three females roaring together. The researchers were curious to see if the number of attackers and the number of defenders would have an impact on the defender’s strategy. Interestingly, a single defending female was very hesitant to approach the playbacks of a single or three intruders. However, three defenders readily approached the roaring of a single intruder, but not the roaring of three intruders together.

Obviously, the risk of getting hurt by entering a fight against three opponents was forbidding. Only if the number of the residents was five or more did the lionesses approach the roars of three intruders. In other words, lionesses decide to approach intruders aggressively only if they outnumber the latter – another clear example of an animal’s ability to take quantitative information into account.

Our closest cousins in the animal kingdom, the chimpanzees, show a very similar pattern of behaviour. Using a similar playback approach, Michael Wilson and colleagues from Harvard University found that the chimpanzees behaved like military strategists. They intuitively follow equations used by military forces to calculate the relative strengths of opponent parties. In particular, chimpanzees follow predictions made in Lanchester’s “square law” model of combat. This model predicts that, in contests with multiple individuals on each side, chimpanzees in this population should be willing to enter a contest only if they outnumber the opposing side by a factor of at least 1.5. And that is precisely what wild chimps do.
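
Lanchester’s square law itself is easy to state: with equally effective fighters, a side’s combat power grows with the square of its numbers, so the larger side prevails with roughly sqrt(A² − D²) of its members surviving. The short sketch below, with invented numbers, shows why a modest numerical edge pays off disproportionately; the specific 1.5 threshold for chimpanzees is the published model’s prediction, not something derived here.

import math

def surviving_attackers(attackers: int, defenders: int) -> float:
    # Lanchester's square law with equal per-individual effectiveness:
    # if the attackers outnumber the defenders, they win with
    # sqrt(attackers^2 - defenders^2) of their side still standing.
    if attackers <= defenders:
        return 0.0
    return math.sqrt(attackers**2 - defenders**2)

print(round(surviving_attackers(11, 10), 1))  # 4.6  -> a 1.1:1 edge is costly
print(round(surviving_attackers(15, 10), 1))  # 11.2 -> a 1.5:1 edge preserves most of the party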


Lionesses judge how many intruders they may be facing before approaching them (Credit: Alamy)

Staying alive – from a biological stance – is a means to an end, and the aim is the transmission of genes. In mealworm beetles (Tenebrio molitor), many males mate with many females, and competition is intense. Therefore, a male beetle will always go for more females in order to maximise his mating opportunities. After mating, males even guard females for some time to prevent further mating acts from other males. The more rivals a male has encountered before mating, the longer he will guard the female after mating.

It is obvious that such behaviour plays an important role in reproduction and therefore has a high adaptive value. Being able to estimate quantity has improved males’ sexual competitiveness. This may in turn be a driving force for more sophisticated cognitive quantity estimation throughout evolution.

One may think that everything is won by successful copulation. But that is far from the truth for some animals, for whom the real prize is fertilising an egg. Once the individual male mating partners have accomplished their part in the play, the sperm continues to compete for the fertilisation of the egg. Since reproduction is of paramount importance in biology, sperm competition causes a variety of adaptations at the behavioural level.

In both insects and vertebrates, the males’ ability to estimate the magnitude of competition determines the size and composition of the ejaculate. In the pseudoscorpion, Cordylochernes scorpioides, for example, it is common that several males copulate with a single female. Obviously, the first male has the best chances of fertilising this female’s egg, whereas the following males face slimmer and slimmer chances of fathering offspring. However, the production of sperm is costly, so the allocation of sperm is weighed considering the chances of fertilising an egg.

Males smell the number of competitor males that have copulated with a female and adjust by progressively decreasing sperm allocation as the number of different male olfactory cues increases from zero to three.

Some bird species, meanwhile, have invented a whole arsenal of trickery to get rid of the burden of parenthood and let others do the job. Breeding a clutch and raising young are costly endeavours, after all. They become brood parasites by laying their eggs in other birds’ nests and letting the host do all the hard work of incubating eggs and feeding hatchlings. Naturally, the potential hosts are not pleased and do everything to avoid being exploited. And one of the defence strategies the potential host has at its disposal is the usage of numerical cues.

American coots, for example, sneak eggs into their neighbours’ nests and hope to trick them into raising the chicks. Of course, their neighbours try to avoid being exploited. A study in the coots’ natural habitat suggests that potential coot hosts can count their own eggs, which helps them to reject parasitic eggs. They typically lay an average-sized clutch of their own eggs, and later reject any surplus parasitic egg. Coots therefore seem to assess the number of their own eggs and ignore any others.

An even more sophisticated type of brood parasitism is found in cowbirds, a songbird species that lives in North America. In this species, females also deposit their eggs in the nests of a variety of host species, from birds as small as kinglets to those as large as meadowlarks, and they have to be smart in order to guarantee that their future young have a bright future.

Cowbird eggs hatch after exactly 12 days of incubation; if incubation is only 11 days, the chicks do not hatch and are lost. It is therefore not an accident that the incubation times for the eggs of the most common hosts range from 11 to 16 days, with an average of 12 days. Host birds usually lay one egg per day – once one day elapses with no egg added by the host to the nest, the host has begun incubation. This means the chicks start to develop in the eggs, and the clock begins ticking. For a cowbird female, it is therefore not only important to find a suitable host, but also to precisely time their egg laying appropriately. If the cowbird lays her egg too early in the host nest, she risks her egg being discovered and destroyed. But if she lays her egg too late, incubation time will have expired before her cowbird chick can hatch.


Female cowbirds perform some incredible mental arithmetic to know when to lay their eggs in the nest of a host bird (Credit: Alamy)

Clever experiments by David J White and Grace Freed-Brown from the University of Pennsylvania suggest that cowbird females carefully monitor the host’s clutch to synchronise their parasitism with a potential host’s incubation. The cowbird females watch out for host nests in which the number of eggs has increased since her first visit. This guarantees that the host is still in the laying process and incubation has not yet started. In addition, the cowbird is looking out for nests that contain exactly one additional egg per number of days that have elapsed since her initial visit.

For instance, if the cowbird female visited a nest on the first day and found one host egg in the nest, she will only deposit her own egg if the host nest contains three eggs on the third day. If the nest contains fewer additional eggs than the number of days that have passed since the last visit, she knows that incubation has already started and it is useless for her to lay her own egg. It is incredibly cognitively demanding, since the female cowbird needs to visit a nest over multiple days, remember the clutch size from one day to the next, evaluate the change in the number of eggs in the nest from a past visit to the present, assess the number of days that have passed, and then compare these values to make a decision to lay her egg or not.
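
Stripped of the biology, the decision rule the researchers describe is a small piece of arithmetic. The sketch below is just that rule under the stated assumptions (the host lays one egg per day and begins incubating once a day passes with no new egg); the function name and structure are illustrative.

def should_lay_egg(eggs_on_first_visit: int,
                   eggs_today: int,
                   days_since_first_visit: int) -> bool:
    # Lay only if the host has added exactly one egg per elapsed day,
    # meaning the clutch is still growing and incubation has not started.
    return eggs_today - eggs_on_first_visit == days_since_first_visit

# The article's example: one egg seen on day one, so lay on day three
# only if the nest now holds three eggs (two days elapsed, two eggs added).
print(should_lay_egg(1, 3, 2))  # True
print(should_lay_egg(1, 2, 2))  # False -> incubation has likely begun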

But this is not all. Cowbird mothers also have sinister reinforcement strategies. They keep watch on the nests where they’ve laid their eggs. In an attempt to protect their egg, the cowbirds act like mafia gangsters. If the cowbird finds that her egg has been destroyed or removed from the host’s nest, she retaliates by destroying the host bird’s eggs, pecking holes in them or carrying them out of the nest and dropping them on the ground. The host birds had better raise the cowbird nestling, or else they will pay dearly. For the host parents, it may therefore be worth going through all the trouble of raising a foster chick, from an adaptive point of view.

The cowbird is an astounding example of how far evolution has driven some species to stay in the business of passing on their genes. The existing selection pressures, whether imposed by the inanimate environment or by other animals, force populations of species to maintain or increase adaptive traits caused by specific genes. If assessing numbers helps in this struggle to survive and reproduce, it surely is appreciated and relied on.

This explains why numerical competence is so widespread in the animal kingdom: it evolved either because it was discovered by a previous common ancestor and passed on to all descendants, or because it was invented across different branches of the animal tree of life.

Irrespective of its evolutionary origin, one thing is certain – numerical competence is most certainly an adaptive trait.

* This article originally appeared in The MIT Press Reader, and is republished under a Creative Commons licence. Andreas Nieder is Professor of Animal Physiology and Director of the Institute of Neurobiology at the University of Tübingen and the author of A Brain for Numbers, from which this article is adapted.

Exponential growth bias: The numerical error behind Covid-19 (BBC/Future)

A basic mathematical calculation error has fuelled the spread of coronavirus (Credit: Reuters)

Original article

By David Robson – 12th August 2020

A simple mathematical mistake may explain why many people underestimate the dangers of coronavirus, shunning social distancing, masks and hand-washing.

Imagine you are offered a deal with your bank, where your money doubles every three days. If you invest just $1 today, roughly how long will it take for you to become a millionaire?

Would it be a year? Six months? 100 days?

The precise answer is 60 days from your initial investment, when your balance would be exactly $1,048,576. Within a further 30 days, you’d have earnt more than a billion. And by the end of the year, you’d have more than $1,000,000,000,000,000,000,000,000,000,000,000,000 – an “undecillion” dollars.
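
The arithmetic behind those figures is a straightforward doubling series: after d days the balance is 2^(d/3) dollars, which a few lines can verify.

def balance(days: int) -> int:
    # $1 doubling every three days.
    return 2 ** (days // 3)

print(balance(60))   # 1,048,576     -> a millionaire on day 60
print(balance(90))   # 1,073,741,824 -> past a billion a month later
print(balance(363))  # ~2.7e36       -> more than an undecillion by year's end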

If your estimates were way out, you are not alone. Many people consistently underestimate how fast the value increases – a mistake known as the “exponential growth bias” – and while it may seem abstract, it may have had profound consequences for people’s behaviour this year.

A spate of studies has shown that people who are susceptible to the exponential growth bias are less concerned about Covid-19’s spread, and less likely to endorse measures like social distancing, hand washing or mask wearing. In other words, this simple mathematical error could be costing lives – meaning that the correction of the bias should be a priority as we attempt to flatten curves and avoid second waves of the pandemic around the world.

To understand the origins of this particular bias, we first need to consider different kinds of growth. The most familiar is “linear”. If your garden produces three apples every day, you have six after two days, nine after three days, and so on.

Exponential growth, by contrast, accelerates over time. Perhaps the simplest example is population growth; the more people you have reproducing, the faster the population grows. Or if you have a weed in your pond that triples each day, the number of plants may start out low – just three on day two, and nine on day three – but it soon escalates (see diagram, below).


Many people assume that coronavirus spreads in a linear fashion, but unchecked it’s exponential (Credit: Nigel Hawtin)

Our tendency to overlook exponential growth has been known for millennia. According to an Indian legend, the brahmin Sissa ibn Dahir was offered a prize for inventing an early version of chess. He asked for one grain of wheat to be placed on the first square on the board, two for the second square, four for the third square, doubling each time up to the 64th square. The king apparently laughed at the humility of ibn Dahir’s request – until his treasurers reported that it would outstrip all the food in the land (18,446,744,073,709,551,615 grains in total).

It was only in the late 2000s that scientists started to study the bias formally, with research showing that most people – like Sissa ibn Dahir’s king – intuitively assume that most growth is linear, leading them to vastly underestimate the speed of exponential increase.

These initial studies were primarily concerned with the consequences for our bank balance. Most savings accounts offer compound interest, for example, where you accrue additional interest on the interest you have already earned. This is a classic example of exponential growth, and it means that even low interest rates pay off handsomely over time. If you have a 5% interest rate, then £1,000 invested today will be worth £1,050 next year, and £1,102.50 the year after… which adds up to more than £7,000 in 40 years’ time. Yet most people don’t recognise how much more bang for their buck they will receive if they start investing early, so they leave themselves short for their retirement.
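
Those figures follow from the standard compound-interest formula, balance = principal × (1 + rate)^years, which is easy to check:

def compound(principal: float, rate: float, years: int) -> float:
    # Interest is earned on previously earned interest, hence the exponent.
    return principal * (1 + rate) ** years

print(round(compound(1000, 0.05, 1), 2))   # 1050.0
print(round(compound(1000, 0.05, 2), 2))   # 1102.5
print(round(compound(1000, 0.05, 40), 2))  # 7039.99 -> "more than £7,000"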


If the number of grains on a chess board doubled for each square, by the 64th square the board would ‘hold’ more than 18 quintillion grains in total (Credit: Getty Images)

Besides reducing their savings, the bias also renders people more vulnerable to unfavourable loans, where debt escalates over time. According to one study from 2008, the bias increases someone’s debt-to-income ratio from an average of 23% to an average of 54%.

Surprisingly, a higher level of education does not prevent people from making these errors. Even mathematically trained science students can be vulnerable, says Daniela Sele, who researches economic decision-making at the Swiss Federal Institute of Technology in Zurich. “It does help somewhat, but it doesn’t preclude the bias,” she says.

This may be because they are relying on their intuition rather than deliberative thinking, so that even if they have learned about things like compound interest, they forget to apply them. To make matters worse, most people will confidently report understanding exponential growth but then still fall for the bias when asked to estimate things like compound interest.

As I explored in my book The Intelligence Trap, intelligent and educated people often have a “bias blind spot”, believing themselves to be less susceptible to error than others – and the exponential growth bias appears to sit dead in the centre of that blind spot.


It was only this year – at the start of the Covid-19 pandemic – that researchers began to consider whether the bias might also influence our understanding of infectious diseases.

According to various epidemiological studies, without intervention the number of new Covid-19 cases doubles every three to four days, which was the reason that so many scientists advised rapid lockdowns to prevent the pandemic from spiralling out of control.

In March, Joris Lammers at the University of Bremen in Germany joined forces with Jan Crusius and Anne Gast at the University of Cologne to roll out online surveys questioning people about the potential spread of the disease. Their results showed that the exponential growth bias was prevalent in people’s understanding of the virus’s spread, with most people vastly underestimating the rate of increase. More importantly, the team found that those beliefs were directly linked to the participants’ views on the best ways to contain the spread. The worse their estimates, the less likely they were to understand the need for social distancing: the exponential growth bias had made them complacent about the official advice.


The charts that politicians show often fail to communicate exponential growth effectively (Credit: Reuters)

This chimes with other findings by Ritwik Banerjee and Priyama Majumda at the Indian Institute of Management in Bangalore, and Joydeep Bhattacharya at Iowa State University. In their study (currently under peer-review), they found susceptibility to the exponential growth bias can predict reduced compliance with the World Health Organization’s recommendations – including mask wearing, handwashing, the use of sanitisers and self-isolation.

The researchers speculate that some of the graphical representations found in the media may have been counter-productive. It’s common for the number of infections to be presented on a “logarithmic scale”, in which the figures on the y-axis increase by a power of 10 (so the gap between 1 and 10 is the same as the gap between 10 and 100, or 100 and 1000).

While this makes it easier to plot different regions with low and high growth rates, it means that exponential growth looks more linear than it really is, which could reinforce the exponential growth bias. “To expect people to use the logarithmic scale to extrapolate the growth path of a disease is to demand a very high level of cognitive ability,” the authors told me in an email. In their view, simple numerical tables may actually be more powerful.


The good news is that people’s views are malleable. When Lammers and colleagues reminded the participants of the exponential growth bias, and asked them to calculate the growth in regular steps over a two week period, people hugely improved their estimates of the disease’s spread – and this, in turn, changed their views on social distancing. Sele, meanwhile, has recently shown that small changes in framing can matter. Emphasising the short amount of time that it will take to reach a large number of cases, for instance – and the time that would be gained by social distancing measures – improves people’s understanding of accelerating growth, rather than simply stating the percentage increase each day.

Lammers believes that the exponential nature of the virus needs to be made more salient in coverage of the pandemic. “I think this study shows how media and government should report on a pandemic in such a situation. Not only report the numbers of today and growth over the past week, but also explain what will happen in the next days, week, month, if the same accelerating growth persists,” he says.

He is confident that even a small effort to correct this bias could bring huge benefits. In the US, where the pandemic has hit hardest, it took only a few months for the virus to infect more than five million people, he says. “If we could have overcome the exponential growth bias and had convinced all Americans of this risk back in March, I am sure 99% would have embraced all possible distancing measures.”

David Robson is the author of The Intelligence Trap: Why Smart People Do Dumb Things (WW Norton/Hodder & Stoughton), which examines the psychology of irrational thinking and the best ways to make wiser decisions.

The Dunning-Kruger effect, or why the ignorant think they are experts (Universo Racionalista)

[The author’s irony seems to indicate that he did not understand the subject very well. There are inconsistent sentences, such as “the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world”, for example. RT]

By Julio Batista – Feb 20, 2020

Image via Pxhere.

Original article (in Portuguese)

Translated by Julio Batista
Original by Alexandru Micu at ZME Science

O efeito Dunning-Kruger é um viés cognitivo que foi descrito pela primeira vez no trabalho de David Dunning e Justin Kruger no (agora famoso) estudo de 1999 Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments.

O estudo nasceu baseado em um caso criminal de um rapaz chamado McArthur Wheeler que, em plena luz do dia de 19 de abril de 1995, decidiu roubar dois bancos em Pittsburg, Estados Unidos. Wheeler portava uma arma, mas não uma máscara. Câmeras de vigilância o registraram em flagrante, e a polícia divulgou sua foto nas notícias locais, recebendo várias denúncias de onde ele estava quase que imediatamente.

A graph showing the Dunning-Kruger effect. Image adapted from Wikimedia.

When they came to arrest him, Mr. Wheeler was visibly confused.

“But I was covered in juice,” he said before the officers took him away.

There is no such thing as a “foolproof method”

At some point in his life, Wheeler had learned from someone that lemon juice could be used as ‘invisible ink’. If something was written on a piece of paper with lemon juice, you would see nothing – unless you heated the juice, which would make the scribbles visible. So, naturally, he covered his face in lemon juice and went off to rob a bank, confident that his identity would stay hidden from the cameras as long as he didn’t get close to any source of heat.

Still, we should give the man some credit: Wheeler didn’t gamble blindly. He actually tested his theory by taking a selfie with a Polaroid camera (there is a scientist inside all of us). For one reason or another – perhaps because the film was faulty, we don’t know exactly why – the camera returned a blank image.

The story made the rounds of the news worldwide, everyone had a good laugh, and Mr. Wheeler was taken to jail. The police concluded that he was not insane and was not on drugs; he genuinely believed his plan would work. “During his interaction with the police, he was incredulous on how his ignorance had failed him,” wrote Anupum Pant for Awesci.

David Dunning was working as a psychologist at Cornell University at the time, and the bizarre story caught his attention. With the help of Justin Kruger, one of his graduate students, he set out to understand how Mr. Wheeler could have been so confident in a plan that was so clearly stupid. The theory they developed is that almost all of us rate our abilities in certain areas as above average, and that most of us probably judge our own skills to be far better than they objectively are – an “illusion of confidence” that underpins the Dunning-Kruger effect.

We are all clueless

“Mind the gap”… between how you see yourself and how you really are. Image via Pxfuel.

“If you’re incompetent, you can’t know you’re incompetent,” Dunning wrote in his book Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself.

“The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.”

In the 1999 study (the first conducted on the topic), the pair asked Cornell students a series of questions on grammar, logic and humour (used to measure the students’ actual abilities) and then asked each of them to estimate the overall score they would achieve and how their score would compare with those of the other participants. They found that the lowest-scoring students consistently and substantially overestimated their own abilities. Students in the bottom quartile (the lowest 25% by score) thought they had outperformed, on average, two-thirds of the other students (that is, that they ranked in the top 33% by score).
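For readers who want the percentile arithmetic spelled out, here is a toy illustration in Python with invented scores (not the Cornell data): it computes each person’s actual percentile rank, which is the quantity the bottom-quartile students overestimated so badly.

```python
# Toy scores for ten test-takers (made up purely for illustration).
scores = [12, 25, 31, 38, 45, 52, 60, 71, 83, 95]

def percentile_rank(score, all_scores):
    """Percentage of participants scoring strictly below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

for score in scores:
    print(f"score {score:3d} -> {percentile_rank(score, scores):5.1f}th percentile")

# A score of 25 lands at the 10th percentile (bottom quartile); the students
# described above nonetheless placed themselves above roughly two-thirds of
# their peers.
```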

A related study by the same authors at a sport-shooting club showed similar results. Dunning and Kruger used a similar methodology, quizzing hobbyists about gun safety and asking them to estimate how well they had done on the test. Those who answered the fewest questions correctly also wildly overestimated their mastery of firearm knowledge.

Nor is it specific to technical skills; it affects every sphere of human life alike. One study found that 80% of drivers rate themselves as above average – a claim that cannot survive the arithmetic, since no more than half of drivers can be better than the median driver. We tend to judge our relative popularity in the same way.
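A small numeric aside, with invented driving-skill scores, on why the impossibility really concerns the median rather than the mean: in a skewed distribution most people can indeed sit above the mean, but never more than half can sit above the median.

```python
# Invented skill scores: a few very bad drivers drag the mean down.
skills = [10, 15, 20, 70, 72, 75, 78, 80, 82, 85]

avg = sum(skills) / len(skills)
ordered = sorted(skills)
med = (ordered[4] + ordered[5]) / 2  # median of 10 values

above_mean = sum(s > avg for s in skills)
above_median = sum(s > med for s in skills)

print(f"mean = {avg:.1f}, median = {med:.1f}")
print(f"{above_mean}/10 drivers are above the mean")      # 7/10 – possible
print(f"{above_median}/10 drivers are above the median")  # at most 5/10
```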

Nor is it limited to people with low or non-existent skills in a given subject – it works on practically all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it is a bias in estimating our own ability that stems from our limited perspective. When we have a poor or non-existent grasp of a topic, we literally know too little to recognise how little we know. Those who do possess the knowledge or the skills, however, have a much better idea of how they compare with the people around them. But they also assume that if a task is clear and simple for them, it must be just as clear and simple for everyone else.

A person in the first group and a person in the second are equally likely to use their own experience as a baseline, and both tend to take it for granted that everyone else is close to that baseline. Both suffer from the “illusion of confidence” – in one case the confidence is placed in themselves, in the other it is placed in everyone else.

But maybe we are not all equally clueless

To err is human. To persist confidently in error, though, is hilarious.

Dunning and Kruger did seem to find a way out of the effect they helped document. Although we all appear equally prone to deluding ourselves, there is an important difference between those who are confident but incapable and those who are capable but lack confidence: how they handle and absorb feedback on their own behaviour.

Mr. Wheeler did try to test his theory. But when he looked at a blank Polaroid of a photo he had just taken – a strong signal that something in his theory had gone wrong – he saw no cause for concern; the only explanation he accepted was that his plan worked. He later got feedback from the police, and even that did not dent his certainty; he remained “incredulous on how his ignorance had failed him”, even when he had absolute confirmation (by being in jail) that it had.

In their research, Dunning and Kruger found that good students predicted their performance on future exams better when they were given accurate feedback about the score they had just achieved and about their relative ranking in the class. The worst-performing students did not change their expectations even after clear, repeated feedback that they were doing badly. They simply insisted that their assumptions were correct.

Joking aside, the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world. If anything, it serves as a caution against assuming we are always right, and it highlights the importance of keeping an open mind and a critical view of our own abilities.

But if you are afraid of being incompetent, look at how feedback affects your view of your own work, knowledge and skills, and how they compare with those of the people around you. If you really are incompetent, you won’t change your mind and the whole exercise will be basically a waste of time – but don’t worry, someone will tell you that you are incompetent.

And you won’t believe them.

Conspiracy theories: how belief is rooted in evolution – not ignorance (The Conversation)

December 13, 2019 9.33am EST – original article

Mikael Klintman PhD, Professor, Lund University

Despite creative efforts to tackle it, belief in conspiracy theories, alternative facts and fake news shows no sign of abating. This is clearly a huge problem, as seen when it comes to climate change, vaccines and expertise in general – with anti-scientific attitudes increasingly influencing politics.

So why can’t we stop such views from spreading? My opinion is that we have failed to understand their root causes, often assuming it is down to ignorance. But new research, published in my book, Knowledge Resistance: How We Avoid Insight from Others, shows that the capacity to ignore valid facts has most likely had adaptive value throughout human evolution. Therefore, this capacity is in our genes today. Ultimately, realising this is our best bet to tackle the problem.

So far, public intellectuals have made roughly two core arguments about our post-truth world. The physician Hans Rosling and the psychologist Steven Pinker argue it has come about because of deficits in facts and reasoned thinking – and can therefore be sufficiently tackled with education.

Meanwhile, Nobel Prize winner Richard Thaler and other behavioural economists have shown how the mere provision of more and better facts often leads already polarised groups to become even more polarised in their beliefs.

Tyler Merbler/Flickr, CC BY-SA

The conclusion of Thaler is that humans are deeply irrational, operating with harmful biases. The best way to tackle it is therefore nudging – tricking our irrational brains – for instance by changing measles vaccination from an opt-in to a less burdensome opt-out choice.

Such arguments have often resonated well with frustrated climate scientists, public health experts and agri-scientists (complaining about GMO-opposers). Still, their solutions clearly remain insufficient for dealing with a fact-resisting, polarised society.

Evolutionary pressures

In my comprehensive study, I interviewed numerous eminent academics at the University of Oxford, the London School of Economics and King’s College London about their views. They were experts in the social, economic and evolutionary sciences. I analysed their comments in the context of the latest findings on topics ranging from the origin of humanity, climate change and vaccination to religion and gender differences.

It became evident that much of knowledge resistance is better understood as a manifestation of social rationality. Essentially, humans are social animals; fitting into a group is what’s most important to us. Often, objective knowledge-seeking can help strengthen group bonding – such as when you prepare a well-researched action plan for your colleagues at work.

But when knowledge and group bonding don’t converge, we often prioritise fitting in over pursuing the most valid knowledge. In one large experiment, it turned out that both liberals and conservatives actively avoided having conversations with people on the other side about drug policy, the death penalty and gun ownership. This was the case even when they were offered the chance to win money if they discussed the issues with the other group. Avoiding the insights of opposing groups helped people dodge having to criticise the views of their own community.

Similarly, if your community strongly opposes what an overwhelming part of science concludes about vaccination or climate change, you often unconsciously prioritise avoiding getting into conflicts about it.

This is further backed up by research showing that the climate deniers who score highest on scientific literacy tests are more confident than the average denier that climate change isn’t happening – despite the evidence showing that it is. And those among the climate-concerned who score highest on the same tests are more confident than the average in that group that climate change is happening.

This logic of prioritising the means that get us accepted and secured in a group we respect is deep. Those among the earliest humans who weren’t prepared to share the beliefs of their community ran the risk of being distrusted and even excluded.

And social exclusion dramatically increased the threat to survival – leaving the excluded vulnerable to being killed by other groups or by animals, or to having no one to cooperate with. These early humans therefore had much lower chances of reproducing. It seems fair to conclude, then, that a readiness to resist knowledge and facts is an evolutionary, genetic adaptation of humans to the socially challenging life of hunter-gatherer societies.

Today, we are part of many groups and internet networks, to be sure, and can in some sense “shop around” for new alliances if our old groups don’t like us. Still, humanity today shares the same binary mindset and strong drive to avoid being socially excluded as our ancestors who only knew about a few groups. The groups we are part of also help shape our identity, which can make it hard to change groups. Individuals who change groups and opinions constantly may also be less trusted, even among their new peers.

In my research, I show how this matters when it comes to dealing with fact resistance. Ultimately, we need to take social aspects into account when communicating facts and arguments with various groups. This could be through using role models, new ways of framing problems, new rules and routines in our organisations and new types of scientific narratives that resonate with the intuitions and interests of more groups than our own.

There are no quick fixes, of course. But if climate change were reframed from the liberal/leftist moral perspective of the need for global fairness to conservative perspectives of respect for the authority of the fatherland, the sacredness of God’s creation and the individual’s right not to have their life project jeopardised by climate change, it might resonate better with conservatives.

If we take social factors into account, this would help us create new and more powerful ways to fight belief in conspiracy theories and fake news. I hope my approach will stimulate joint efforts of moving beyond disputes disguised as controversies over facts and into conversations about what often matters more deeply to us as social beings.

Do you feel it in your gut? Your body has a second brain inside your belly (UOL Saúde)

30/05/2017 04h00

There is a second brain inside your belly. Getty Images/iStockphoto

You know that brain of yours up in your head? It isn’t as singular as we imagine, and it gets a great deal of help from a partner in controlling our emotions, our mood and our behaviour. That’s because the human body has what many call a “second brain” – and in a rather special place: our belly.

The “second brain”, as it is informally known, sits along the roughly nine metres of your intestine and gathers together millions of neurons. In fact, it is part of something with a slightly more complicated name: the enteric nervous system.

Getty Images

Inside our gut there are between 200 and 600 million neurons

Functions even the brain would doubt

One of the main reasons it is considered a brain is the large and complex network of neurons in this system. To give you an idea, there are between 200 million and 600 million neurons there, according to researchers at the University of Melbourne, in Australia – neurons that work in concert with the main brain.

“It’s as if we had the brain of a cat in our belly. It has 20 different types of neurons, the same diversity found in our big brain, where we have 100 billion neurons.”

Heribert Watzke, food scientist, during a TED talk

This brain has several functions, which it performs autonomously and in coordination with the big brain. It used to be assumed that the larger brain sent signals down to command this other brain, but it is actually the other way around: the brain in our gut sends signals up a great neuronal “highway” to the head, which can accept its suggestions or not.

“The brain up top can interfere with these signals, modifying or inhibiting them. There are hunger signals that our empty stomach sends to the brain. There are signals telling us to stop eating when we are full. If the hunger signal is ignored, it can lead to anorexia, for example. More common is to keep eating even after the signals from our stomach say ‘ok, stop, we’ve transferred enough energy’,” Watzke adds.

The number of neurons is startling, but it makes sense if we consider the dangers that come with eating. Like the skin, the gut has to stop potentially dangerous invaders of our organism, such as bacteria and viruses, on the spot.

This second brain can trigger diarrhoea or alert its “superior”, which may decide to trigger vomiting. It is teamwork, and vitally important work at that.

iStock

Far beyond digestion

Of course, one of its main functions has to do with digestion and excretion – as if the bigger brain didn’t want to “get its hands dirty”, right? It even controls muscle contractions, the release of chemicals and so on. The second brain is not used for things like thought, religion, philosophy or poetry, but it is tied to our mood.

The enteric nervous system helps us to “feel” our inner world and its contents. According to Scientific American, it is likely that a good part of our emotions are influenced by the neurons in our gut.

Ever heard the expression “butterflies in the stomach”? That sensation is one example, a response to psychological stress.

That is why some research is even attempting to treat depression by acting on the gut’s neurons. The enteric nervous system holds 95% of our serotonin (a substance known as one of those responsible for happiness). It may even play a role in autism.

There are also reports of other diseases that may be connected to this second brain. A 2010 study in Nature suggested that changes in how the system works could help prevent osteoporosis.

Getty Images

Life in the gut

One of the “second brain’s” main functions is defending our body, since it is largely responsible for controlling our antibodies. A 2016 study supported by Fapesp showed how neurons communicate with the gut’s immune cells. There is even a “conversation” with microbes, since the nervous system helps dictate which of them may inhabit the intestine.

Research suggests that the second brain’s importance really is enormous. In one study, newborn rats whose stomachs were exposed to an irritating chemical turned out to be more depressive and anxious than other rats, with the symptoms persisting long after the physical damage had healed. The same did not happen with other kinds of injury, such as skin irritation.

With all this in mind, I’m sure you’ll look at your innards a little differently now, right? Think about it: the next time you’re stressed or sad and reach for that greasy comfort food, it may not be only your head’s fault.