Traditionally, policymakers design public policies around a rational economic agent: a person capable of weighing every decision and maximizing utility in their own self-interest. In doing so, they ignore the powerful psychological and social influences that shape human behaviour, and they overlook the fact that people are fallible, inconsistent and emotional: we struggle with self-control, we procrastinate, we prefer the status quo, and we are social beings. It is on behalf of this "not so rational" agent that the behavioural sciences step in to complement the traditional way of making policy.
For example: we are approaching the two-year mark since the World Health Organization declared Covid-19 a pandemic on 11 March 2020. These have been challenging years for governments, companies and individuals. Although 2021 showed signs of recovery, a long, hard road remains before we return even to pre-pandemic conditions, and not only in health: economies must be rebalanced, productivity increased, jobs recovered, learning gaps closed, the business environment improved, climate change confronted, and so on. Obviously, none of this is a simple task for governments and organizations. Could we face these challenges differently, adapting the way public policies are made so that they become more efficient and cost-effective, increasing their impact and reach?
The answer is yes. The success of public policies depends, in part, on decision-making and on behaviour change. Focusing more closely on people and on the context in which they decide is therefore increasingly imperative. It is important to consider how people relate to one another and to institutions, how they respond to policies, and to know well the environment in which they live.
The behavioural approach is scientific, combining concepts from psychology, economics, anthropology, sociology and neuroscience. Context-driven and evidence-based, it joins theory and practice across many sectors. Its application can range from a simple change to the decision-making environment (choice architecture), to a "nudge" that steers individuals toward the decision that is best for them while preserving freedom of choice, to broader interventions aimed at changing habits. Beyond that, it can be key to confronting policy challenges such as school dropout, domestic and gender-based violence, tax compliance, corruption, natural disasters and climate change, among others.
The use of behavioural insights in public policy is no longer a novelty. More than a decade has passed since the 2008 publication of Nudge ("Nudge: Improving Decisions About Health, Wealth, and Happiness"), the book that spectacularly propelled the field. Concepts from psychology, already widely discussed and accepted for decades, were brought into the context of economic decision-making, and behavioural economics consolidated itself as a discipline.
Keeping pace with the topic's expansion and relevance, the World Bank launched the 2015 World Development Report: Mind, Society, and Behavior. In 2016 it created its own behavioural unit, eMBeD (the Mind, Behavior, and Development Unit), and it has since promoted the systematic use of behavioural insights in development policies and projects, supporting many countries in solving problems quickly and at scale.
In Brazil, we have worked on training policymakers to use behavioural insights; contributed to research, such as the Pesquisa sobre Ética e Corrupção no Serviço Público Federal (Survey on Ethics and Corruption in the Federal Public Service; World Bank and CGU); and provided technical support in identifying evidence, for example to inform solutions for increasing savings among low-income populations. Our specialists have also prepared behavioural diagnostics to understand why customers fail to pay their bills on time or decline to connect to the sewage system. We have run experiments with behavioural messages to encourage digital payment methods and on-time bill payment in the water and sanitation sector. The latter showed positive results, with the potential to increase revenue at low cost: messages highlighting consequences and reciprocity, for example, increased both on-time payments and the total amount paid. For every thousand customers who received the SMS with behavioural insights, six to 11 additional customers paid their bills. For 2022, activities are planned, as part of a development project, that will use behavioural insights to reduce the dumping of waste into drainage systems and to increase the conscientious use of public spaces.
The behavioural sciences are not the solution to the great global challenges, but their potential to complement the design of public policies deserves emphasis. It falls to policymakers to seize this moment of greater maturity in the field to expand their knowledge. It is also worth riding the rising wave of complementary fields, such as design and data science, to centre attention on the individual and on the decision context and, transparently and on the basis of evidence, to influence choices and promote behaviour change, increasing the impact of public policies so as not only to restore pre-Covid conditions but to further improve the lives and well-being of all, especially the poorest and most vulnerable.
This column was written in collaboration with my World Bank colleagues Juliana Neves Soares Brescianini, operations analyst, and Luis A. Andrés, program leader for the Infrastructure sector.
Untrained, captive orangutans complete major steps in making and using stone tools
Date: February 16, 2022
Source: PLOS
Summary: Untrained, captive orangutans can complete two major steps in the sequence of stone tool use: striking rocks together and cutting using a sharp stone, according to a new study.
Untrained, captive orangutans can complete two major steps in the sequence of stone tool use: striking rocks together and cutting using a sharp stone, according to a study by Alba Motes-Rodrigo at the University of Tübingen in Germany and colleagues, publishing February 16 in the open-access journal PLOS ONE.
The researchers tested tool making and use in two captive male orangutans (Pongo pygmaeus) at Kristiansand Zoo in Norway. Neither had previously been trained or exposed to demonstrations of the target behaviors. Each orangutan was provided with a concrete hammer, a prepared stone core, and two baited puzzle boxes requiring them to cut through a rope or a silicone skin in order to access a food reward. Both orangutans spontaneously hit the hammer against the walls and floor of their enclosure, but neither directed strikes towards the stone core. In a second experiment, the orangutans were also given a human-made sharp flint flake, which one orangutan used to cut the silicone skin, solving the puzzle. This is the first demonstration of cutting behavior in untrained, unenculturated orangutans.
To then investigate whether apes could learn the remaining steps from observing others, the researchers demonstrated how to strike the core to create a flint flake to three female orangutans at Twycross Zoo in the UK. After these demonstrations, one female went on to use the hammer to hit the core, directing the blows towards the edge as demonstrated.
This study is the first to report spontaneous stone tool use without close direction in orangutans that have not been enculturated by humans. The authors say their observations suggest that two major prerequisites for the emergence of stone tool use — striking with stone hammers and recognizing sharp stones as cutting tools — may have existed in our last common ancestor with orangutans, 13 million years ago.
The authors add: “Our study is the first to report that untrained orangutans can spontaneously use sharp stones as cutting tools. We also found that they readily engage in lithic percussion and that this activity occasionally leads to the detachment of sharp stone pieces.”
Journal Reference:
Alba Motes-Rodrigo, Shannon P. McPherron, Will Archer, R. Adriana Hernandez-Aguilar, Claudio Tennie. Experimental investigation of orangutans’ lithic percussive and sharp stone tool behaviours. PLOS ONE, 2022; 17 (2): e0263343 DOI: 10.1371/journal.pone.0263343
Immersive virtual reality and real-time brain activity imaging showcase Drosophila’s capabilities of attention, working memory and awareness
Date: February 17, 2022
Source: University of California – San Diego
Summary: Common flies feature more advanced cognitive abilities than previously believed. Using a custom-built immersive virtual reality arena, neurogenetics and real-time brain activity imaging, researchers found attention, working memory and conscious awareness-like capabilities in fruit flies.
As they annoyingly buzz around a batch of bananas in our kitchens, fruit flies appear to have little in common with mammals. But as researchers study this model species, they are discovering increasing similarities between us and the minuscule fruit-loving insects.
In a new study, researchers at the University of California San Diego’s Kavli Institute for Brain and Mind (KIBM) have found that fruit flies (Drosophila melanogaster) have more advanced cognitive abilities than previously believed. Using a custom-built immersive virtual reality environment, neurogenetic manipulations and in vivo real-time brain-activity imaging, the scientists present new evidence Feb. 16 in the journal Nature of the remarkable links between the cognitive abilities of flies and mammals.
The multi-tiered approach of their investigations found attention, working memory and conscious awareness-like capabilities in fruit flies, cognitive abilities typically only tested in mammals. The researchers were able to watch the formation, distractibility and eventual fading of a memory trace in their tiny brains.
“Despite a lack of obvious anatomical similarity, this research speaks to our everyday cognitive functioning — what we pay attention to and how we do it,” said study senior author Ralph Greenspan, a professor in the UC San Diego Division of Biological Sciences and associate director of KIBM. “Since all brains evolved from a common ancestor, we can draw correspondences between fly and mammalian brain regions based on molecular characteristics and how we store our memories.”
To arrive at the heart of their new findings, the researchers created an immersive virtual reality environment to test the fly's behavior via visual stimulation and coupled the displayed imagery with an infra-red laser as an aversive heat stimulus. The near 360-degree panoramic arena allowed Drosophila to flap their wings freely while remaining tethered, and with the virtual reality constantly updating based on their wing movement (analyzed in real-time using high-speed machine-vision cameras), it gave the flies the illusion of flying freely in the world. This gave researchers the ability to train and test flies for conditioning tasks by allowing the insect to orient away from an image associated with the negative heat stimulus and towards a second image not associated with heat.
They tested two variants of conditioning. In the first, delay conditioning, flies were given visual stimulation overlapping in time with the heat, both ending together; in the second, trace conditioning, the heat was delivered 5 to 20 seconds after the visual stimulation had been shown and removed. The intervening time is considered the "trace" interval, during which the fly retains a "trace" of the visual stimulus in its brain, a feature indicative of attention, working memory and conscious awareness in mammals.
The researchers also imaged the brain to track calcium activity in real-time using a fluorescent molecule they genetically engineered into their brain cells. This allowed the researchers to record the formation and duration of the fly’s living memory since they saw the trace blinking on and off while being held in the fly’s short-term (working) memory. They also found that a distraction introduced during training — a gentle puff of air — made the visual memory fade more quickly, marking the first time researchers have been able to prove such distractedness in flies and implicating an attentional requirement in memory formation in Drosophila.
“This work demonstrates not only that flies are capable of this higher form of trace conditioning, and that the learning is distractible just like in mammals and humans, but the neural activity underlying these attentional and working memory processes in the fly show remarkable similarity to those in mammals,” said Dhruv Grover, a UC San Diego KIBM research faculty member and lead author of the new study. “This work demonstrates that fruit flies could serve as a powerful model for the study of higher cognitive functions. Simply put, the fly continues to amaze in how smart it really is.”
The scientists also identified the area of the fly’s brain where the memory formed and faded — an area known as the ellipsoid body of the fly’s central complex, a location that corresponds to the cerebral cortex in the human brain.
Further, the research team discovered that the neurochemical dopamine is required for such learning and higher cognitive functions. The data revealed that dopamine reactions increasingly occurred earlier in the learning process, eventually anticipating the coming heat stimulus.
The researchers are now investigating details of how attention is physiologically encoded in the brain. Grover believes the lessons learned from this model system are likely not only to directly inform our understanding of human cognition strategies and the neural disorders that disrupt them, but also to contribute to new engineering approaches that lead to performance breakthroughs in artificial intelligence designs.
The coauthors of the study include Dhruv Grover, Jen-Yung Chen, Jiayun Xie, Jinfang Li, Jean-Pierre Changeux and Ralph Greenspan (all affiliated with the UC San Diego Kavli Institute for Brain and Mind, and J.-P. Changeux also a member of the Collège de France).
Journal Reference:
Dhruv Grover, Jen-Yung Chen, Jiayun Xie, Jinfang Li, Jean-Pierre Changeux, Ralph J. Greenspan. Differential mechanisms underlie trace and delay conditioning in Drosophila. Nature, 2022; DOI: 10.1038/s41586-022-04433-6
Horse-and-human teams perform complex manoeuvres in competitions of all sorts. Together, we can gallop up to obstacles standing 8 feet (2.4 metres) high, leave the ground, and fly blind – neither party able to see over the top until after the leap has been initiated. Adopting a flatter trajectory with greater speed, horse and human sail over broad jumps up to 27 feet (more than 8 metres) long. We run as one at speeds of 44 miles per hour (nearly 70 km/h), the fastest velocity any land mammal carrying a rider can achieve. In freestyle dressage events, we dance in place to the rhythm of music, trot sideways across the centre of an arena with huge leg-crossing steps, and canter in pirouettes with the horse’s front feet circling her hindquarters. Galloping again, the best horse-and-human teams can slide 65 feet (nearly 20 metres) to a halt while resting all their combined weight on the horse’s hind legs. Endurance races over extremely rugged terrain test horses and riders in journeys that traverse up to 500 miles (805 km) of high-risk adventure.
Charlotte Dujardin on Valegro, a world-record dressage freestyle at London Olympia, 2014: an example of high-precision brain-to-brain communication between horse and rider. Every step the horse takes is determined in conjunction with many invisible cues from his human rider, using a feedback loop between predator brain and prey brain. Note the horse’s beautiful physical condition and complete willingness to perform these extremely difficult manoeuvres.
No one disputes the athleticism fuelling these triumphs, but few people comprehend the mutual cross-species interaction that is required to accomplish them. The average horse weighs 1,200 pounds (more than 540 kg), makes instantaneous movements, and can become hysterical in a heartbeat. Even the strongest human is unable to force a horse to do anything she doesn’t want to do. Nor do good riders allow the use of force in training our magnificent animals. Instead, we hold ourselves to the higher standard of motivating horses to cooperate freely with us in achieving the goals of elite sports as well as mundane chores. Under these conditions, the horse trained with kindness, expertise and encouragement is a willing, equal participant in the action.
That action is rooted in embodied perception and the brain. In mounted teams, horses, with prey brains, and humans, with predator brains, share largely invisible signals via mutual body language. These signals are received and transmitted through peripheral nerves leading to each party’s spinal cord. Upon arrival in each brain, they are interpreted, and a learned response is generated. It, too, is transmitted through the spinal cord and nerves. This collaborative neural action forms a feedback loop, allowing communication from brain to brain in real time. Such conversations allow horse and human to achieve their immediate goals in athletic performance and everyday life. In a very real sense, each species’ mind is extended beyond its own skin into the mind of another, with physical interaction becoming a kind of neural dance.
Horses in nature display certain behaviours that tempt observers to wonder whether competitive manoeuvres truly require mutual communication with human riders. For example, the feral horse occasionally hops over a stream to reach good food or scrambles up a slope of granite to escape predators. These manoeuvres might be thought the precursors to jumping or rugged trail riding. If so, we might imagine that the performance horse’s extreme athletic feats are innate, with the rider merely a passenger steering from above. If that were the case, little requirement would exist for real-time communication between horse and human brains.
In fact, though, the feral hop is nothing like the trained leap over a competition jump, usually commenced from short distances at high speed. Today’s Grand Prix jump course comprises about 15 obstacles set at sharp angles to each other, each more than 5 feet high and more than 6 feet wide (1.5 x 1.8 metres). The horse-and-human team must complete this course in 80 or 90 seconds, a time allowance that makes for acute turns, diagonal flight paths and high-speed exits. Comparing the wilderness hop with the show jump is like associating a flintstone with a nuclear bomb. Horses and riders undergo many years of daily training to achieve this level of performance, and their brains share neural impulses throughout each experience.
These examples originate in elite levels of horse sport, but the same sort of interaction occurs in pastures, arenas and on simple trails all over the world. Any horse-and-human team can develop deep bonds of mutual trust, and learn to communicate using body language, knowledge and empathy.
Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend
The critical component of the horse in nature, and her ability to learn how to interact so precisely with a human rider, is not her physical athleticism but her brain. The first precise magnetic resonance image of a horse’s brain appeared only in 2019, allowing veterinary neurologists far greater insight into the anatomy underlying equine mental function. As this new information is disseminated to horse trainers and riders for practical application, we see the beginnings of a revolution in brain-based horsemanship. Not only will this revolution drive competition to higher summits of success, and animal welfare to more humane levels of understanding, it will also motivate scientists to research the unique compatibility between prey and predator brains. Nowhere else in nature do we see such intense and intimate collaboration between two such disparate minds.
Three natural features of the equine brain are especially important when it comes to mind-melding with humans. First, the horse’s brain provides astounding touch detection. Receptor cells in the horse’s skin and muscles transduce – or convert – external pressure, temperature and body position to neural impulses that the horse’s brain can understand. They accomplish this with exquisite sensitivity: the average horse can detect less pressure against her skin than even a human fingertip can.
Second, horses in nature use body language as a primary medium of daily communication with each other. An alpha mare has only to flick an ear toward a subordinate to get him to move away from her food. A younger subordinate, untutored in the ear flick, receives stronger body language – two flattened ears and a bite that draws blood. The notion of animals in nature as kind, gentle creatures who never hurt each other is a myth.
Third, by nature, the equine brain is a learning machine. Untrammelled by the social and cognitive baggage that human brains carry, horses learn in a rapid, pure form that allows them to be taught the meanings of various human cues that shape equine behaviour in the moment. Taken together, the horse's exceptional touch sensitivity, natural reliance on body language, and purity of learning form the tripod of support for brain-to-brain communication that is so critical in extreme performance.
One of the reasons for budding scientific fascination with neural horse-and-human communication is the horse’s status as a prey animal. Their brains and bodies evolved to survive completely different pressures than our human physiologies. For example, horse eyes are set on either side of their head for a panoramic view of the world, and their horizontal pupils allow clear sight along the horizon but fuzzy vision above and below. Their eyes rotate to maintain clarity along the horizon when their heads lie sideways to reach grass in odd locations. Equine brains are also hardwired to stream commands directly from the perception of environmental danger to the motor cortex where instant evasion is carried out. All of these features evolved to allow the horse to survive predators.
Conversely, human brains evolved in part for the purpose of predation – hunting, chasing, planning… yes, even killing – with front-facing eyes, superb depth perception, and a prefrontal cortex for strategy and reason. Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend.
The fact that horses and humans can communicate neurally without the external mediation of language or equipment is critical to our ability to initiate the cellular dance between brains. Saddles and bridles are used for comfort and safety, but bareback and bridleless competitions prove they aren’t necessary for highly trained brain-to-brain communication. Scientific efforts to communicate with predators such as dogs and apes have often been hobbled by the use of artificial media including human speech, sign language or symbolic lexigram. By contrast, horses allow us to apply a medium of communication that is completely natural to their lives in the wild and in captivity.
The horse’s prey brain is designed to notice and evade predators. How ironic, and how riveting, then, that this prey brain is the only one today that shares neural communication with a predator brain. It offers humanity a rare view into a prey animal’s world, almost as if we were wolves riding elk or coyotes mind-melding with cottontail bunnies.
Highly trained horses and riders send and receive neural signals using subtle body language. For example, a rider can apply invisible pressure with her left inner calf muscle to move the horse laterally to the right. That pressure is felt on the horse’s side, in his skin and muscle, via proprioceptive receptor cells that detect body position and movement. Then the signal is transduced from mechanical pressure to electrochemical impulse, and conducted up peripheral nerves to the horse’s spinal cord. Finally, it reaches the somatosensory cortex, the region of the brain responsible for interpreting sensory information.
Riders can sometimes guess that an invisible object exists by detecting subtle equine reactions
This interpretation is dependent on the horse’s knowledge that a particular body signal – for example, inward pressure from a rider’s left calf – is associated with a specific equine behaviour. Horse trainers spend years teaching their mounts these associations. In the present example, the horse has learned that this particular amount of pressure, at this speed and location, under these circumstances, means ‘move sideways to the right’. If the horse is properly trained, his motor cortex causes exactly that movement to occur.
By means of our human motion and position sensors, the rider’s brain now senses that the horse has changed his path rightward. Depending on the manoeuvre our rider plans to complete, she will then execute invisible cues to extend or collect the horse’s stride as he approaches a jump that is now centred in his vision, plant his right hind leg and spin in a tight fast circle, push hard off his hindquarters to chase a cow, or any number of other movements. These cues are combined to form that mutual neural dance, occurring in real time, and dependent on natural body language alone.
The example of a horse moving a few steps rightward off the rider's left leg is extremely simplistic. When you imagine a horse and rider clearing a puissance wall of 7.5 feet (about 2.3 metres), think of the countless receptor cells transmitting bodily cues between both brains during approach, flight and exit. That is mutual brain-to-brain communication. Horse and human converse via body language to such an extreme degree that they are able to accomplish amazing acts of understanding and athleticism. Each of their minds has extended into the other's, sending and receiving signals as if one united brain were controlling both bodies.
Franke Sloothaak on Optiebeurs Golo, a world-record puissance jump at Chaudfontaine in Belgium, 1991. This horse-and-human team displays the gentle encouragement that brain-to-brain communication requires. The horse is in perfect condition and health. The rider offers soft, light hands, and rides in perfect balance with the horse. He carries no whip, never uses his spurs, and employs the gentlest type of bit – whose full acceptance is evidenced by the horse’s foamy mouth and flexible neck. The horse is calm but attentive before and after the leap, showing complete willingness to approach the wall without a whiff of coercion. The first thing the rider does upon landing is pat his equine teammate. He strokes or pats the horse another eight times in the next 30 seconds, a splendid example of true horsemanship.
Analysis of brain-to-brain communication between horses and humans elicits several new ideas worthy of scientific notice. Because our minds interact so well using neural networks, horses and humans might learn to borrow neural signals from the party whose brain offers the highest function. For example, horses have a 340-degree range of view when holding their heads still, compared with a paltry 90-degree range in humans. Therefore, horses can see many objects that are invisible to their riders. Yet riders can sometimes guess that an invisible object exists by detecting subtle equine reactions.
Specifically, neural signals from the horse’s eyes carry the shape of an object to his brain. Those signals are transferred to the rider’s brain by a well-established route: equine receptor cells in the retina lead to equine detector cells in the visual cortex, which elicits an equine motor reaction that is then sensed by the rider’s human body. From there, the horse’s neural signals are transmitted up the rider’s spinal cord to the rider’s brain, and a perceptual communication loop is born. The rider’s brain can now respond neurally to something it is incapable of seeing, by borrowing the horse’s superior range of vision.
These brain-to-brain transfers are mutual, so the learning equine brain should also be able to borrow the rider’s vision, with its superior depth perception and focal acuity. This kind of neural interaction results in a horse-and-human team that can sense far more together than either party can detect alone. In effect, they share effort by assigning labour to the party whose skills are superior at a given task.
There is another type of skillset that requires a particularly nuanced cellular dance: sharing attention and focus. Equine vigilance allowed horses to survive 56 million years of evolution – they had to notice slight movements in tall grasses or risk becoming some predator’s dinner. Consequently, today it’s difficult to slip even a tiny change past a horse, especially a young or inexperienced animal who has not yet been taught to ignore certain sights, sounds and smells.
By contrast, humans are much better at concentration than vigilance. The predator brain does not need to notice and react instantly to every stimulus in the environment. In fact, it would be hampered by prey vigilance. While reading this essay, your brain sorts away the sound of traffic past your window, the touch of clothing against your skin, the sight of the masthead that says ‘Aeon’ at the top of this page. Ignoring these distractions allows you to focus on the content of this essay.
Horses and humans frequently share their respective attentional capacities during a performance. A puissance horse galloping toward an enormous wall cannot waste vigilance by noticing the faces of each person in the audience. Likewise, the rider cannot afford to miss a loose dog that runs into the arena outside her narrow range of vision and focus. Each party helps the other through their primary strengths.
Such sharing becomes automatic with practice. With innumerable neural contacts over time, the human brain learns to heed signals sent by the equine brain that say, in effect: ‘Hey, what’s that over there?’ Likewise, the equine brain learns to sense human neural signals that counter: ‘Let’s focus on this gigantic wall right here.’ Each party sends these messages by body language and receives them by body awareness through two spinal cords, then interprets them inside two brains, millisecond by millisecond.
The rider’s physical cues are transmitted by neural activation from the horse’s surface receptors to the horse’s brain
Finally, it is conceivable that horse and rider can learn to share features of executive function – the human brain’s ability to set goals, plan steps to achieve them, assess alternatives, make decisions and evaluate outcomes. Executive function occurs in the prefrontal cortex, an area that does not exist in the equine brain. Horses are excellent at learning, remembering and communicating – but they do not assess, decide, evaluate or judge as humans do.
Shying is a prominent equine behaviour that might be mediated by human executive function in well-trained mounts. When a horse of average size shies away from an unexpected stimulus, riders are sitting on top of 1,200 pounds of muscle that suddenly leaps sideways off all four feet and lands five yards away. It’s a frightening experience, and often results in falls that lead to injury or even death. The horse’s brain causes this reaction automatically by direct connection between his sensory and motor cortices.
Though this possibility must still be studied by rigorous science, brain-to-brain communication suggests that horses might learn to borrow small glimmers of executive function through neural interaction with the human’s prefrontal cortex. Suppose that a horse shies from an umbrella that suddenly opens. By breathing steadily, relaxing her muscles, and flexing her body in rhythm with the horse’s gait, the rider calms the animal using body language. Her physical cues are transmitted by neural activation from his surface receptors to his brain. He responds with body language in which his muscles relax, his head lowers, and his frightened eyes return to their normal size. The rider feels these changes with her body, which transmits the horse’s neural signals to the rider’s brain.
From this point, it’s only a very short step – but an important one – to the transmission and reception of neural signals between the rider’s prefrontal cortex (which evaluates the unexpected umbrella) and the horse’s brain (which instigates the leap away from that umbrella). In practice, to reduce shying, horse trainers teach their young charges to slow their reactions and seek human guidance.
Brain-to-brain communication between horses and riders is an intricate neural dance. These two species, one prey and one predator, are living temporarily in each other’s brains, sharing neural information back and forth in real time without linguistic or mechanical mediation. It is a partnership like no other. Together, a horse-and-human team experiences a richer perceptual and attentional understanding of the world than either member can achieve alone. And, ironically, this extended interspecies mind operates well not because the two brains are similar to each other, but because they are so different.
Janet Jones applies brain research to training horses and riders. She has a PhD from the University of California, Los Angeles, and for 23 years taught the neuroscience of perception, language, memory, and thought. She trained horses at a large stable early in her career, and later ran a successful horse-training business of her own. Her most recent book, Horse Brain, Human Brain (2020), is currently being translated into seven languages.
It depends on whether you’re Republican or Democrat
Date: April 26, 2021
Source: Johns Hopkins University
Summary: With climate change looming, what must people hear to convince them to change their ways to stop harming the environment? A new study finds stories to be significantly more motivating than scientific facts — at least for some people.
With climate change looming, what must people hear to convince them to change their ways to stop harming the environment? A new Johns Hopkins University study finds stories to be significantly more motivating than scientific facts — at least for some people.
After hearing a compelling pollution-related story in which a man died, the average person paid more for green products than after having heard scientific facts about water pollution. But the average person in the study was a Democrat. Republicans paid less after hearing the story rather than the simple facts.
The findings, published this week in the journal One Earth, suggest message framing makes a real difference in people’s actions toward the environment. It also suggests there is no monolithic best way to motivate people and policymakers must work harder to tailor messages for specific audiences.
“Our findings suggest the power of storytelling may be more like preaching to the choir,” said co-author Paul J. Ferraro, an evidence-based environmental policy expert and the Bloomberg Distinguished Professor of Human Behavior and Public Policy at Johns Hopkins.
“For those who are not already leaning toward environmental action, stories might actually make things worse.”
Scientists have little scientific evidence to guide them on how best to communicate with the public about environmental threats. Increasingly, scientists have been encouraged to leave their factual comfort zones and tell more stories that connect with people personally and emotionally. But scientists are reluctant to tell such stories because, for example, no one can point to a deadly flood or a forest fire and conclusively say that the deaths were caused by climate change.
The question researchers hoped to answer with this study: Does storytelling really work to change people’s behavior? And if so, for whom does it work best?
“We said let’s do a horserace between a story and a more typical science-based message and see what actually matters for purchasing behavior,” Ferraro said.
Researchers conducted a field experiment involving just over 1,200 people at an agricultural event in Delaware. Everyone surveyed had lawns or gardens and lived in a watershed known to be polluted.
Through a random-price auction, researchers attempted to measure how much participants were willing to pay for products that reduce nutrient pollution. Before people could buy the products, they watched a video presenting either scientific facts or a story about nutrient pollution.
In the story group, participants viewed a true story about a local man’s death that had plausible but tenuous connections to nutrient pollution: he died after eating contaminated shellfish. In the scientific facts group, participants viewed an evidence-based description of the impacts of nutrient pollution on ecosystems and surrounding communities.
After watching the videos, all participants had a chance to purchase products costing less than $10 that could reduce storm water runoff: fertilizer, soil test kits, biochar and soaker hoses.
People who heard the story were on average willing to pay more than those who heard the straight science. But the results skewed greatly when broken down by political party. The story made liberals 17 percent more willing to buy the products, while making conservatives want to spend 14 percent less.
The deep behavioral divide along party lines surprised Ferraro, who typically sees little difference in behavior between Democrats and Republicans when it comes to matters such as energy conservation.
“We hope this study stimulates more work about how to communicate the urgency of climate change and other global environmental challenges,” said lead author Hilary Byerly, a postdoctoral associate at the University of Colorado. “Should the messages come from scientists? And what is it about this type of story that provokes environmental action from Democrats but turns off Republicans?”
This research was supported by contributions from the Penn Foundation, the US Department of Agriculture, The Nature Conservancy, and the National Science Foundation.
Journal Reference:
Hilary Byerly, Paul J. Ferraro, Tongzhe Li, Kent D. Messer, Collin Weigel. A story induces greater environmental contributions than scientific information among liberals but not conservatives. One Earth, 2021; 4 (4): 545 DOI: 10.1016/j.oneear.2021.03.004
From discs in the sky to faces in toast, learn to weigh evidence sceptically without becoming a closed-minded naysayer
by Stephen Law
Stephen Law is a philosopher and author. He is director of philosophy at the Department of Continuing Education at the University of Oxford, and editor of Think, the Royal Institute of Philosophy journal. He researches primarily in the fields of philosophy of religion, philosophy of mind, Ludwig Wittgenstein, and essentialism. His books for a popular audience include The Philosophy Gym (2003), The Complete Philosophy Files (2000) and Believing Bullshit (2011). He lives in Oxford.
Many people believe in extraordinary hidden beings, including demons, angels, spirits and gods. Plenty also believe in supernatural powers, including psychic abilities, faith healing and communication with the dead. Conspiracy theories are also popular, including that the Holocaust never happened and that the terrorist attacks on the United States of 11 September 2001 were an inside job. And, of course, many trust in alternative medicines such as homeopathy, the effectiveness of which seems to run contrary to our scientific understanding of how the world actually works.
Such beliefs are widely considered to be at the ‘weird’ end of the spectrum. But, of course, just because a belief involves something weird doesn’t mean it’s not true. As science keeps reminding us, reality often is weird. Quantum mechanics and black holes are very weird indeed. So, while ghosts might be weird, that’s no reason to dismiss belief in them out of hand.
I focus here on a particular kind of ‘weird’ belief: not only are these beliefs that concern the enticingly odd, they’re also beliefs that the general public finds particularly difficult to assess.
Almost everyone agrees that, when it comes to black holes, scientists are the relevant experts, and scientific investigation is the right way to go about establishing whether or not they exist. However, when it comes to ghosts, psychic powers or conspiracy theories, we often hold wildly divergent views not only about how reasonable such beliefs are, but also about what might count as strong evidence for or against them, and who the relevant authorities are.
Take homeopathy, for example. Is it reasonable to focus only on what scientists have to say? Shouldn’t we give at least as much weight to the testimony of the many people who claim to have benefitted from homeopathic treatment? While most scientists are sceptical about psychic abilities, what of the thousands of reports from people who claim to have received insights from psychics who could only have known what they did if they really do have some sort of psychic gift? To what extent can we even trust the supposed scientific ‘experts’? Might not the scientific community itself be part of a conspiracy to hide the truth about Area 51 in Nevada, Earth’s flatness or the 9/11 terrorist attacks being an inside job?
Most of us really struggle when it comes to assessing such ‘weird’ beliefs – myself included. Of course, we have our hunches about what’s most likely to be true. But when it comes to pinning down precisely why such beliefs are or aren’t reasonable, even the most intelligent and well educated of us can quickly find ourselves out of our depth. For example, while most would pooh-pooh belief in fairies, Arthur Conan Doyle, the creator of the quintessentially rational detective Sherlock Holmes, actually believed in them and wrote a book presenting what he thought was compelling evidence for their existence.
When it comes to weird beliefs, it’s important we avoid being closed-minded naysayers with our fingers in our ears, but it’s also crucial that we avoid being credulous fools. We want, as far as possible, to be reasonable.
I’m a philosopher who has spent a great deal of time thinking about the reasonableness of such ‘weird’ beliefs. Here I present five key pieces of advice that I hope will help you figure out for yourself what is and isn’t reasonable.
Let’s begin with an illustration of the kind of case that can so spectacularly divide opinion. In 1976, six workers reported a UFO over the site of a nuclear plant being constructed near the town of Apex, North Carolina. A security guard then reported a ‘strange object’. The police officer Ross Denson drove over to investigate and saw what he described as something ‘half the size of the Moon’ hanging over the plant. The police also took a call from local air traffic control about an unidentified blip on their radar.
The next night, the UFO appeared again. The deputy sheriff described ‘a large lighted object’. An auxiliary officer reported five lighted objects that appeared to be burning and about 20 times the size of a passing plane. The county magistrate described a rectangular football-field-sized object that looked like it was on fire.
Finally, the press got interested. Reporters from the Star newspaper drove over to investigate. They too saw the UFO. But when they tried to drive nearer, they discovered that, weirdly, no matter how fast they drove, they couldn’t get any closer.
This report, drawn from Philip J Klass's book UFOs: The Public Deceived (1983), is impressive: it involves multiple eyewitnesses, including police officers, journalists and even a magistrate. Their testimony is even backed up by hard evidence – that radar blip.
Surely, many would say, given all this evidence, it’s reasonable to believe there was at least something extraordinary floating over the site. Anyone who failed to believe at least that much would be excessively sceptical – one of those perpetual naysayers whose kneejerk reaction, no matter how strong the evidence, is always to pooh-pooh.
What's most likely to be true: that there really was something extraordinary hanging over the power plant, or that the various eyewitnesses had somehow been deceived? Before we answer, here's my first piece of advice.
Think it through
1. Expect unexplained false sightings and huge coincidences
Our UFO story isn’t over yet. When the Star’s two-man investigative team couldn’t get any closer to the mysterious object, they eventually pulled over. The photographer took out his long lens to take a look: ‘Yep … that’s the planet Venus all right.’ It was later confirmed beyond any reasonable doubt that what all the witnesses had seen was just a planet. But what about that radar blip? It was a coincidence, perhaps caused by a flock of birds or unusual weather.
What moral should we draw from this case? Not, of course, that because this UFO report turned out to have a mundane explanation, all such reports can be similarly dismissed. But notice that, had the reporters not discovered the truth, this story would likely have gone down in the annals of ufology as one of the great unexplained cases. The moral I draw is that UFO cases that have multiple eyewitnesses and even independent hard evidence (the radar blip) may well crop up occasionally anyway, even if there are no alien craft in our skies.
We tend significantly to underestimate how prone to illusion and deception we are when it comes to the wacky and weird. In particular, we have a strong tendency to overdetect agency – to think we are witnessing a person, an alien or some other sort of creature or being – where in truth there’s none.
Psychologists have developed theories to account for this tendency to overdetect agency, including that we have evolved what’s called a hyperactive agency detecting device. Had our ancestors missed an agent – a sabre-toothed tiger or a rival, say – that might well have reduced their chances of surviving and reproducing. Believing an agent is present when it’s not, on the other hand, is likely to be far less costly. Consequently, we’ve evolved to err on the side of overdetection – often seeing agency where there is none. For example, when we observe a movement or pattern we can’t understand, such as the retrograde motion of a planet in the night sky, we’re likely to think the movement is explained by some hidden agent working behind the scenes (that Mars is actually a god, say).
One example of our tendency to overdetect agency is pareidolia: our tendency to find patterns – and, in particular, faces – in random noise. Stare at passing clouds or into the embers of a fire, and it’s easy to interpret the randomly generated shapes we see as faces, often spooky ones, staring back.
And, of course, nature is occasionally going to throw up face-like patterns just by chance. One famous illustration was produced in 1976 by the Mars probe Viking Orbiter 1. As the probe passed over the Cydonia region, it photographed what appeared to be an enormous, reptilian-looking face 800 feet high and nearly 2 miles long. Some believe this 'face on Mars' was a relic of an ancient Martian civilisation, a bit like the Great Sphinx of Giza in Egypt. A book called The Monuments of Mars: A City on the Edge of Forever (1987) even speculated about this lost civilisation. However, later photos revealed the 'face' to be just a hill that looks face-like when lit a certain way. Take enough photos of Mars, and some will reveal face-like features just by chance.
The fact is, we should expect huge coincidences. Millions of pieces of bread are toasted each morning. One or two will exhibit face-like patterns just by chance, even without divine intervention. One such piece of toast that was said to show the face of the Virgin Mary (how do we know what she looked like?) was sold for $28,000. We think about so many people each day that eventually we’ll think about someone, the phone will ring, and it will be them. That’s to be expected, even if we’re not psychic. Yet many put down such coincidences to supernatural powers.
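The arithmetic behind 'expect huge coincidences' is worth making explicit. A minimal sketch, with purely illustrative numbers of my own: if each slice of toast independently has some tiny probability p of coming out face-like, and n slices are toasted, the expected number of 'miraculous' slices is

\[ \mathbb{E}[\text{face-like slices}] = np, \qquad \text{e.g. } p = 10^{-6},\; n = 10^{7} \;\Rightarrow\; np = 10. \]

Ten spooky slices every single day, with no supernatural help at all.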
2. Understand what strong evidence actually is
When is a claim strongly confirmed by a piece of evidence? The following principle appears correct (it captures part of what confirmation theorists call the Bayes factor; for more on Bayesian approaches to assessing evidence, see the link at the end):
Evidence confirms a claim to the extent that the evidence is more likely if the claim is true than if it’s false.
Here’s a simple illustration. Suppose I’m in the basement and can’t see outside. Jane walks in with a wet coat and umbrella and tells me it’s raining. That’s pretty strong evidence it’s raining. Why? Well, it is of course possible that Jane is playing a prank on me with her wet coat and brolly. But it’s far more likely she would appear with a wet coat and umbrella and tell me it’s raining if that’s true than if it’s false. In fact, given just this new evidence, it may well be reasonable for me to believe it’s raining.
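The principle can be put a little more formally. Here is a minimal sketch in standard Bayesian notation (the symbols are mine, not part of the original discussion): writing H for the claim and E for the evidence, the strength of the evidence is the likelihood ratio, or Bayes factor, which scales the prior odds into the posterior odds:

\[ \mathrm{BF} = \frac{P(E \mid H)}{P(E \mid \lnot H)}, \qquad \frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}. \]

In the rain example, Jane is far more likely to arrive wet, umbrella in hand, reporting rain if it is raining than if it is not – say 0.9 versus 0.01, a Bayes factor of 90 – which is why her testimony can make rain believable even from a modest prior.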
Here’s another example. Sometimes whales and dolphins are found with atavistic limbs – leg-like structures – where legs would be found on land mammals. These discoveries strongly confirm the theory that whales and dolphins evolved from earlier limbed, land-dwelling species. Why? Because, while atavistic limbs aren’t probable given the truth of that theory, they’re still far more probable than they would be if whales and dolphins weren’t the descendants of such limbed creatures.
The Mars face, on the other hand, provides an example of weak or non-existent evidence. Yes, if there was an ancient Martian civilisation, then we might discover what appeared to be a huge face built on the surface of the planet. However, given pareidolia and the likelihood of face-like features being thrown up by chance, it’s about as likely that we would find such face-like features anyway, even if there were no alien civilisation. That’s why such features fail to provide strong evidence for such a civilisation.
So now consider our report of the UFO hanging over the nuclear power construction site. Are several such cases involving multiple witnesses and backed up by some hard evidence (eg, a radar blip) good evidence that there are alien craft in our skies? No. We should expect such hard-to-explain reports anyway, whether or not we’re visited by aliens. In which case, such reports are not strong evidence of alien visitors.
Being sceptical about such reports of alien craft, ghosts or fairies is not knee-jerk, fingers-in-our-ears naysaying. It’s just recognising that, though we might not be able to explain the reports, they’re likely to crop up occasionally anyway, whether or not alien visitors, ghosts or fairies actually exist. Consequently, they fail to provide strong evidence for such beings.
3. Extraordinary claims require extraordinary evidence
It was the scientist Carl Sagan who in 1980 said: 'Extraordinary claims require extraordinary evidence.' By an 'extraordinary' claim, Sagan appears to have meant an extraordinarily improbable claim, such as that Alice can fly by flapping her arms, or that she can move objects with her mind. On Sagan's view, such claims require extraordinarily strong evidence before we should accept them – much stronger than the evidence required to support a far less improbable claim.
Suppose for example that Fred claims Alice visited him last night, sat on his sofa and drank a cup of tea. Ordinarily, we would just take Fred’s word for that. But suppose Fred adds that, during her visit, Alice flew around the room by flapping her arms. Of course, we’re not going to just take Fred’s word for that. It’s an extraordinary claim requiring extraordinary evidence.
If we’re starting from a very low base, probability-wise, then much more heavy lifting needs to be done by the evidence to raise the probability of the claim to a point where it might be reasonable to believe it. Clearly, Fred’s testimony about Alice flying around the room is not nearly strong enough.
Similarly, given the low prior probability of the claims that someone communicated with a dead relative, or has fairies living in their local wood, or has miraculously raised someone from the dead, or can move physical objects with their mind, we should similarly set the evidential bar much higher than we would for more mundane claims.
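To see how the arithmetic plays out, here is a worked example with illustrative numbers of my own: give the claim that Alice can fly a prior probability of one in a million, and suppose Fred's testimony is ten times more likely if the claim is true than if it is false – a respectable Bayes factor for everyday assertions.

\[ \frac{P(H \mid E)}{P(\lnot H \mid E)} = \underbrace{\frac{10^{-6}}{1 - 10^{-6}}}_{\text{prior odds}} \times \underbrace{10}_{\text{Bayes factor}} \approx 10^{-5}. \]

Despite the supporting testimony, the flying claim ends up at odds of roughly one in 100,000, whereas the same testimony applied to the mundane tea-drinking claim, starting from even prior odds, yields posterior odds of 10 to 1 in its favour. That asymmetry is all 'extraordinary claims require extraordinary evidence' amounts to.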
4. Beware accumulated anecdotes
Once we’ve formed an opinion, it can be tempting to notice only evidence that supports it and to ignore the rest. Psychologists call this tendency confirmation bias.
For example, suppose Simon claims a psychic ability to know the future. He can provide 100 examples of his predictions coming true, including one or two dramatic examples. In fact, Simon once predicted that a certain celebrity would die within 12 months, and they did!
Do these 100 examples provide us with strong evidence that Simon really does have some sort of psychic ability? Not if Simon actually made many thousands of predictions and most didn’t come true. Still, if we count only Simon’s ‘hits’ and ignore his ‘misses’, it’s easy to create the impression that he has some sort of ‘gift’.
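A quick calculation shows why the misses matter (the numbers here are hypothetical): if Simon made 5,000 predictions in all and 100 came true, his hit rate is

\[ \frac{100}{5000} = 2\%, \]

and the relevant comparison is with how often informed guessing would succeed, not with zero. One hundred hits sound impressive only while the denominator stays hidden.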
Confirmation bias can also create the false impression that a therapy is effective. A long list of anecdotes about patients whose condition improved after a faith healing session can seem impressive. People may say: ‘Look at all this evidence! Clearly this therapy has some benefits!’ But the truth is that such accumulated anecdotes are usually largely worthless as evidence.
It’s also worth remembering that such stories are in any case often dubious. For example, they can be generated by the power of suggestion: tell people that a treatment will improve their condition, and many will report that it has, even if the treatment actually offers no genuine medical benefit.
Impressive anecdotes can also be generated by means of a little creative interpretation. Many believe that the 16th-century seer Nostradamus predicted many important historical events, from the Great Fire of London to the assassination of John F Kennedy. However, because Nostradamus’s prophecies are so vague, nobody was able to use his writings to predict any of these events before they occurred. Rather, his texts were later creatively interpreted to fit what subsequently happened. But that sort of ‘fit’ can be achieved whether Nostradamus had extraordinary abilities or not. In which case, as we saw under point 2 above, the ‘fit’ is not strong evidence of such abilities.
5. Beware ‘But it fits!’
Often, when we’re presented with strong evidence that our belief is false, we can easily change our mind. Show me I’m mistaken in believing that the Matterhorn is near Chamonix, and I’ll just drop that belief.
However, abandoning a belief isn’t always so easy. That’s particularly the case for beliefs in which we have invested a great deal emotionally, socially and/or financially. When it comes to religious and political beliefs, for example, or beliefs about the character of our close relatives, we can find it extraordinarily difficult to change our minds. Psychologists refer to the discomfort we feel in such situations – when our beliefs or attitudes are in conflict – as cognitive dissonance.
Perhaps the most obvious strategy we can employ when a belief in which we have invested a great deal is threatened is to start explaining away the evidence.
Here’s an example. Dave believes dogs are spies from the planet Venus – that dogs are Venusian imposters on Earth sending secret reports back to Venus in preparation for their imminent invasion of our planet. Dave’s friends present him with a great deal of evidence that he’s mistaken. But, given a little ingenuity, Dave finds he can always explain away that evidence:
‘Dave, dogs can’t even speak – how can they communicate with Venus?’
‘They can speak, they just hide their linguistic ability from us.’
‘But Dave, dogs don’t have transmitters by which they could relay their messages to Venus – we’ve searched their baskets: nothing there!’
‘Their transmitters are hidden in their brain!’
‘But we’ve X-rayed this dog’s brain – no transmitter!’
‘The transmitters are made from organic material indistinguishable from ordinary brain stuff.’
‘But we can’t detect any signals coming from dogs’ heads.’
‘This is advanced alien technology – beyond our ability to detect it!’
‘Look Dave, Venus can’t support dog life – it’s incredibly hot and swathed in clouds of acid.’
‘The dogs live in deep underground bunkers to protect them. Why do you think they want to leave Venus?!’
You can see how this conversation might continue ad infinitum. No matter how much evidence is presented to Dave, it’s always possible for him to cook up another explanation. And so he can continue to insist his belief is logically consistent with the evidence.
But, of course, despite the possibility of his endlessly explaining away any and all counterevidence, Dave’s belief is absurd. It’s certainly not confirmed by the available evidence about dogs. In fact, it’s powerfully disconfirmed.
The moral is: showing that your theory can be made to ‘fit’ – be consistent with – the evidence is not the same thing as showing your theory is confirmed by the evidence. However, those who hold weird beliefs often muddle consistency and confirmation.
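The distinction can be made precise with Bayes’ rule in odds form: evidence confirms a theory only insofar as it is more likely if the theory is true than if it is false. A minimal sketch, with illustrative probabilities:

```python
def posterior_odds(prior_odds: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Mere consistency: the evidence is barely more likely if the theory is
# true than if it is false, so belief should barely move.
print(posterior_odds(prior_odds=0.01, p_e_given_h=0.50, p_e_given_not_h=0.49))  # ~0.0102

# Strong confirmation: the evidence is far more likely if the theory is true.
print(posterior_odds(prior_odds=0.01, p_e_given_h=0.50, p_e_given_not_h=0.01))  # 0.5
```

Dave’s explained-away evidence at best keeps the evidence compatible with his theory; it does nothing to make the evidence more likely under his theory than under its denial, which is what confirmation requires.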
Take young-Earth creationists, for example. They believe in the literal truth of the Biblical account of creation: that the entire Universe is under 10,000 years old, with all species being created as described in the Book of Genesis.
Polls indicate that a third or more of US citizens believe that the Universe is less than 10,000 years old. Of course, there’s a mountain of evidence against the belief. However, its proponents are adept at explaining away that evidence.
Take the fossil record embedded in sedimentary layers revealing that today’s species evolved from earlier species over many millions of years. Many young-Earth creationists explain away this record as a result of the Biblical flood, which they suppose drowned and then buried living things in huge mud deposits. The particular ordering of the fossils is supposedly accounted for by different ecological zones being submerged one after the other, starting with simple marine life. Take a look at the Answers in Genesis website developed by the Bible literalist Ken Ham, and you’ll discover how a great deal of other evidence for evolution and a billions-of-years-old Universe is similarly explained away. Ham believes that, by explaining away the evidence against young-Earth creationism in this way, he can show that his theory ‘fits’ – and so is scientifically confirmed by – that evidence:
Increasing numbers of scientists are realising that when you take the Bible as your basis and build your models of science and history upon it, all the evidence from the living animals and plants, the fossils, and the cultures fits. This confirms that the Bible really is the Word of God and can be trusted totally. [my italics]
According to Ham, young-Earth creationists and evolutionists do the same thing: they look for ways to make the evidence fit the theory to which they have already committed themselves:
Evolutionists have their own framework … into which they try to fit the data. [my italics]
But, of course, scientists haven’t just found ways of showing how the theory of evolution can be made consistent with the evidence. As we saw above, that theory really is strongly confirmed by the evidence.
Any theory, no matter how absurd, can, with sufficient ingenuity, be made to ‘fit’ the evidence: even Dave’s theory that dogs are Venusian spies. That’s not to say it’s reasonable or well confirmed.
Of course, it’s not always unreasonable to explain away evidence. Given overwhelming evidence that water boils at 100 degrees Celsius at 1 atmosphere, a single experiment that appeared to contradict that claim might reasonably be explained away as a result of some unidentified experimental error. But as we increasingly come to rely on explaining away evidence in order to try to convince ourselves of the reasonableness of our belief, we begin to drift into delusion.
Key points – How to think about weird things
Expect unexplained false sightings and huge coincidences. Reports of mysterious and extraordinary hidden agents – such as angels, demons, spirits and gods – are to be expected, whether or not such beings exist. Huge coincidences – such as a piece of toast looking very face-like – are also more or less inevitable.
Understand what strong evidence is. If the alleged evidence for a belief is scarcely more likely if the belief is true than if it’s false, then it’s not strong evidence.
Extraordinary claims require extraordinary evidence. If a claim is extraordinarily improbable – eg, the claim that Alice flew round the room by flapping her arms – much stronger evidence is required for reasonable belief than is required for belief in a more mundane claim, such as that Alice drank a cup of tea.
Beware accumulated anecdotes. A large number of reports of, say, people recovering after taking an alternative medicine or visiting a faith healer is not strong evidence that such treatments actually work.
Beware ‘But it fits!’ Any theory, no matter how ludicrous (even the theory that dogs are spies from Venus), can, with sufficient ingenuity, always be made logically consistent with the evidence. That’s not to say it’s confirmed by the evidence.
Why it matters
Sometimes, belief in weird things is pretty harmless. What does it matter if Mary believes there are fairies at the bottom of her garden, or Joe thinks his dead aunty visits him occasionally? What does it matter if Sally is a closed-minded naysayer when it comes to belief in psychic powers? However, many of these beliefs have serious consequences.
Clearly, people can be exploited. Grieving parents contact spiritualists who offer to put them in contact with their dead children. Peddlers of alternative medicine and faith healing charge exorbitant fees for their ‘cures’ for terminal illnesses. If some alternative medicines really work, casually dismissing them out of hand and refusing to properly consider the evidence could also cost lives.
Lives have certainly been lost. Many have died who might have been saved because they believed they should reject conventional medicine and opted for ineffective alternatives.
Huge amounts of money are often also at stake when it comes to weird beliefs. Psychic reading and astrology are huge businesses with turnovers of billions of dollars per year. Often, it’s the most desperate who will turn to such businesses for advice. Are they, in reality, throwing their money away?
Many ‘weird’ beliefs also have huge social and political implications. The former US president Ronald Reagan and his wife Nancy were reported to have consulted an astrologer before making any major political decision. Conspiracy theories such as QAnon and the Sandy Hook hoax shape our current political landscape and feed extremist political thinking. Mainstream religions are often committed to miracles and gods.
In short, when it comes to belief in weird things, the stakes can be very high indeed. It matters that we don’t delude ourselves into thinking we’re being reasonable when we’re not.
Links & books
The Atlantic article ‘The Cognitive Biases Tricking Your Brain’ (2018) by Ben Yagoda provides a great introduction to thinking that can lead us astray, including confirmation bias.
The UK-based magazine The Skeptic provides some high-quality free articles on belief in weird things. Well worth a subscription.
The Skeptical Inquirer magazine in the US is also excellent, and provides some free content.
The RationalWiki portal provides many excellent articles on pseudoscience.
The British mathematician Norman Fenton, professor of risk information management at Queen Mary University of London, provides a brief online introduction to Bayesian approaches to assessing evidence.
My book Believing Bullshit: How Not to Get Sucked into an Intellectual Black Hole (2011) identifies eight tricks of the trade that can turn flaky ideas into psychological flytraps – and how to avoid them.
The textbook How to Think About Weird Things: Critical Thinking for a New Age (2019, 8th ed) by the philosophers Theodore Schick and Lewis Vaughn offers step-by-step advice on sorting through reasons, evaluating evidence and judging the veracity of a claim.
The book Critical Thinking (2017) by Tom Chatfield offers a toolkit for what he calls ‘being reasonable in an unreasonable world’.
Three new books lay bare the weirdness of how our brains process the world around us.
Eventually, vision scientists figured out what was happening with “the dress” – the photo that went viral in 2015 because some people saw it as blue and black while others saw white and gold. It wasn’t our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress.
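As a crude arithmetic sketch of that “subtraction” (the pixel and illuminant values below are invented, and real colour constancy is a far richer inference):

```python
# Toy illustration of "discounting the illuminant". All numbers invented.
pixel = (120, 110, 160)  # one ambiguous, bluish pixel from the photo (R, G, B)

def discount(pixel, illuminant):
    # Subtract the assumed illumination, flooring at zero.
    return tuple(max(c - i, 0) for c, i in zip(pixel, illuminant))

# Assume warm, yellowish direct light: the residue looks blue/black.
print(discount(pixel, (90, 80, 10)))   # (30, 30, 150) - a bluish surface
# Assume cool, bluish shadow light: the residue looks white/gold.
print(discount(pixel, (10, 10, 120)))  # (110, 100, 40) - a warm, goldish surface
```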
Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input. In Being You, Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the “inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies.” He contends that “experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body.”
Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as “controlled hallucinations.” The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge.
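A toy version of that update loop – a single number standing in for the brain’s model, with an invented learning rate in place of the brain’s far more elaborate machinery:

```python
# Minimal sketch of prediction-error updating: the model's guess is
# nudged toward the senses in proportion to how wrong it was.
def update(prediction: float, sensory_input: float, learning_rate: float = 0.3) -> float:
    error = sensory_input - prediction          # prediction error
    return prediction + learning_rate * error   # revise the model

belief = 0.0  # the brain's current best guess about some quantity
for observation in [1.0, 1.0, 0.9, 1.1, 1.0]:
    belief = update(belief, observation)
    print(round(belief, 3))
# The "controlled hallucination" converges on what the senses report.
```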
“Chairs aren’t red,” Seth writes, “just as they aren’t ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light.”
Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.”
Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. An artificial-intelligence-powered virtual-reality setup that Seth and his colleagues created re-creates the mind’s psychedelic powers in silicon, producing a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the Sussex University campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.”
Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick.
Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses, Susan Barry, an emeritus professor of neurobiology at Mount Holyoke College, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience:
At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy.
Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry:
The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. All the while, when I move, the stairs are skewing and changing.
Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to.
The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition, sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.”
As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly changing, it’s surprising that we can recognize anything at all.”
In What Makes Us Smart, Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.”
“The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes.
He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. “Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another,” he writes. If he’s correct, it’s impossible for even the most introspective and insightful among us to fully grasp what’s going on inside our own head. As Damji wrote in a letter to Barry:
When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow.
Matthew Hutson is a contributing writer at The New Yorker and a freelance science and tech writer.
In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?
In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’
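One way to picture a second-order representation is as a nested data structure: a belief whose content is another agent’s belief, which may itself be false. A minimal sketch (the names and fields are invented):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Belief:
    holder: str
    content: Union[str, "Belief"]  # a plain fact, or another agent's belief

# "I think that my colleague thinks that I wasn't paying attention."
nested = Belief("me", Belief("colleague", "I was not paying attention"))
print(nested)
# Nothing in the structure guarantees truth: the colleague may be
# thinking no such thing, which is exactly what makes it second-order.
```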
Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:
Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?
Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.
Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.
Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.
Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.
A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.
This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’
An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.
The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’
Many philosophers since Ryle have regarded the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:
Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.
But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Apollo at Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’
Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.
Adelson’s checkerboard. Courtesy Wikipedia
Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.
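That ventriloquist’s pull is often modelled as precision-weighted cue combination, in the spirit of Marc Ernst and Martin Banks’s classic psychophysics: each cue is weighted by its reliability (its inverse variance), so precise vision dominates imprecise hearing. A toy sketch with invented numbers:

```python
# Combine two noisy location estimates by inverse-variance weighting.
def combine(mu_a: float, var_a: float, mu_b: float, var_b: float) -> float:
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w_a * mu_a + (1 - w_a) * mu_b

# Vision places the voice at the puppet's mouth (0 cm) with high precision;
# hearing vaguely places it at the ventriloquist (50 cm away).
print(combine(mu_a=0.0, var_a=1.0, mu_b=50.0, var_b=100.0))  # ~0.5 cm
# The combined estimate sits almost exactly on the puppet.
```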
These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.
Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.
We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.
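One crude proxy for this kind of insight is the gap between average confidence on correct and on incorrect trials; laboratory studies use more refined measures, such as meta-d′. A sketch with invented data:

```python
# Does confidence track accuracy? Compare mean confidence when right vs wrong.
correct    = [1, 1, 0, 1, 0, 1, 1, 0]                   # 1 = correct trial
confidence = [0.9, 0.8, 0.4, 0.7, 0.5, 0.9, 0.6, 0.3]   # rating per trial

def mean(xs):
    return sum(xs) / len(xs)

conf_right = mean([c for c, ok in zip(confidence, correct) if ok])
conf_wrong = mean([c for c, ok in zip(confidence, correct) if not ok])
print(round(conf_right - conf_wrong, 3))  # larger gap = better metacognitive insight
```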
This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.
This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.
Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it begs the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?
Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that the rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.
A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.
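A standard toy model of a grid cell’s firing sums three plane waves whose orientations are 60 degrees apart; the sum peaks on a hexagonal lattice. A sketch, with an arbitrary spatial scale:

```python
import math

def grid_activity(x: float, y: float, scale: float = 1.0) -> float:
    # Sum three cosine gratings at 0, 60 and 120 degrees; the peaks of the
    # sum tile the (x, y) plane in a hexagonal grid.
    total = 0.0
    for angle_deg in (0, 60, 120):
        theta = math.radians(angle_deg)
        total += math.cos(scale * (x * math.cos(theta) + y * math.sin(theta)))
    return total  # ranges from -1.5 to 3.0; the maxima are the grid's vertices

print(round(grid_activity(0.0, 0.0), 2))  # 3.0 - the centre of a firing field
print(round(grid_activity(2.0, 1.0), 2))  # somewhere off-peak
```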
There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.
So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.
Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.
At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice-versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.
Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.
Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.
Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have prized the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.
A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.
Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.
There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:
O wad some Power the giftie gie us
To see oursels as ithers see us!
It wad frae mony a blunder free us…

(Oh, would some Power give us the gift
To see ourselves as others see us!
It would from many a blunder free us…)
SHANKAR VEDANTAM, HOST: This is HIDDEN BRAIN. I’m Shankar Vedantam. Last year, my family and I took a vacation to Alaska. This was a much-needed, long-planned break. The best part, I got to walk on the top of a glacier.
(SOUNDBITE OF FOOTSTEPS)
VEDANTAM: The pale blue ice was translucent. Sharp ridges opened up into crevices dozens of feet deep. Every geological feature, every hill, every valley was sculpted in ice. It was a sunny day, and I spotted a small stream of melted water. I got on the ground and drank some. I wondered how long this water had remained frozen.
The little stream is not the only ice that’s melting in Alaska. The Mendenhall Glacier, one of the chief tourist attractions in Juneau, has retreated over one and a half miles in the last half-century. Today, you can only see a small sliver of the glacier’s tongue from a lookout. I caught up with John Neary, a forest service official, who tries to explain to visitors the scale of the changes that they’re witnessing.
JOHN NEARY: I would say that right now, we’re looking at a glacier that’s filling up. Out of our 180-degree view we have, we’re looking at maybe 10 or 15 degrees of it, whereas if we stood in this same place 100 years ago, it would have filled up about 160 degrees of our view.
VEDANTAM: You are kidding, 160 degrees of our view.
NEARY: Exactly. That’s the reality of how big this was, and it’s been retreating up this valley at about 40 or 50 feet a year, most recently 400 feet a year. And even more dramatically recently is the thinning and the narrowing as it’s just sort of collapsed in on itself in the bottom of this valley. Instead of dominating much of the valley and being able to see white as a large portion of the landscape, it’s now becoming this little ribbon that’s at the bottom.
VEDANTAM: John is a quiet, soft-spoken man. In recent years, as he’s watched the glacier literally recede before his eyes, he started to speak up, not just about what’s happening but what it means.
But as I was chatting with John, a visitor came up to talk to him. The man said he used to serve in the Air Force and had last seen the Mendenhall Glacier a quarter-century ago. There was a look in the man’s eyes. It was a combination of awe and horror. How could this have happened, the man asked John. Why is this happening?
NEARY: In many ways, people don’t want to grasp the reality. It’s a scary reality to try to grasp. And so what they naturally want to do is assume, well, this has always happened. It will happen in the future, and we’ll survive, won’t we? They want an assurance from me. But I don’t give it to them. I don’t think it’s my job to give them that assurance.
I think they need to grasp the reality of the fact that we are entering into a time when, yes, glacial advance and retreat has happened 25 different times to North America over its long life but never at the rate and the scale that we see now. And the very quick rapidity of it means that species probably won’t be able to adapt the way that they have in the past over a longer period of time.
VEDANTAM: To be clear, the Mendenhall Glacier’s retreat in and of itself is not proof of climate change. That evidence comes from a range of scientific measurements and calculations. But the glacier is a visible symbol of the changes that scientists are documenting.
It’s interesting I think when we – people think about climate change, it tends to be an abstract issue most of the time for most people, that you’re standing in front of this magnificent glacier right now and to actually see it receding makes it feel real and visceral in a way that it just isn’t when I’m living in Washington, D.C.
NEARY: No, I agree. I think that for too many people, the issue is some Micronesian island that’s having an extra inch of water this year on their shorelines or it’s some polar bears far up in the Arctic that they’re really not connected with.
But when they realize, they come here and they’re on this nice day like we’re experiencing right now with the warm sun and they start to think about this glacier melting and why it’s receding, why it’s disappearing, why it doesn’t look like that photo just 30 years ago up in the visitor’s center, it becomes real for them, and they have to start to grapple with the issues behind it.
(SOUNDBITE OF MUSIC)
VEDANTAM: I could see tourists turning these questions over in their minds as they watch the glacier. So even though I had not planned to do any reporting, I started interviewing people using the only device I had available, my phone.
DALE SINGER: I just think it’s a shame that we are losing something pretty precious and pretty different in the world.
VEDANTAM: This is Dale Singer (ph). She and her family came to Alaska on a cruise to celebrate a couple of family birthdays. This was her second trip to Mendenhall.
She came about nine years ago, but the weather was so foggy, she couldn’t get a good look. She felt compelled to come back. I asked Dale why she thought the glacier was retreating.
SINGER: Global warming, whether we like to admit it or not, it’s our fault. Or something we’re doing is affecting climate change.
VEDANTAM: Others are not so sure. For some of Dale’s fellow passengers on her cruise, this is a touchy topic.
SINGER: Somebody just said they went to a lecture and – on the ship, and the lecturer did not use the word global warming nor climate change because he didn’t want to offend passengers. So there are still people who refuse to admit it.
(SOUNDBITE OF MUSIC)
VEDANTAM: As I was standing next to John, one man carefully came up and listened to his account of the science of climate change. When John was done talking, the man told him that he wouldn’t trust scientists as far as he could throw them. Climate change was all about politics, he said.
I asked the man for an interview, but he declined. He said his company had contracts with the federal government. And if bureaucrats in the Obama administration heard his skeptical views on climate change, those contracts might mysteriously disappear. I caught up with another tourist. I asked Michael Bull (ph) if he believed climate change was real.
MICHAEL BULL: No, I think there’s global climate change, but I question whether it’s all due to human interaction with the Earth. Yes, you can’t deny that the climate is changing.
VEDANTAM: Yeah.
BULL: But the causation of that I’m not sold on as being our fault.
VEDANTAM: Michael was worried his tour bus might leave without him, so he answered my question about whether the glacier’s retreat was cause for alarm standing next to the idling bus.
BULL: So what’s the bad part of the glacier receding? And, you know, from what John said to me, if it’s the rate at which – and the Earth can’t adapt, that makes sense to me. But I think the final story is yet to be written.
VEDANTAM: Yeah.
BULL: I think Mother Earth pushes back. So I don’t think we’re going to destroy her because I think she’ll take care of us before we take care of her.
(SOUNDBITE OF MUSIC)
VEDANTAM: Nugget Falls is a beautiful waterfall that empties into Mendenhall Lake. When John first came to Alaska in 1982, the waterfall was adjacent to the glacier. Today, there’s a gap of three-quarters of a mile between the waterfall and the glacier.
SUE SCHULTZ: The glacier has receded unbelievably. It’s quite shocking.
VEDANTAM: This is Sue Schultz. She said she lived in Juneau back in the 1980s. This was her first time back in 28 years. What did it look like 28 years ago?
SCHULTZ: The bare rock that you see to the left as you face the glacier was glacier. And we used to hike on the other side of it. And you could take a trail right onto the glacier.
VEDANTAM: And what about this way? I understand the glacier actually came significantly over to this side…
SCHULTZ: Yes.
VEDANTAM: …Close to Nugget Falls.
SCHULTZ: Yes, it – that’s true. It was really close. In fact, the lake was a lot smaller, obviously (laughter). I mean, yeah, it’s quite incredible.
VEDANTAM: And so what’s your reaction when you see it?
SCHULTZ: Global warming, we need to pay attention.
(SOUNDBITE OF MUSIC)
TERRY LAMBERT: Even if it all melts, it’s not going to be the end of the world, so I’m not worried.
VEDANTAM: Terry Lambert is a tourist from Southern California. He’s never visited Mendenhall before. He thinks the melting glacier is just part of nature’s plan.
LAMBERT: Well, it’s just like earthquakes and floods and hurricanes. They’re all just all part of what’s going on. You can’t control it. You can’t change it. And I personally don’t think it’s something that man’s doing that’s making that melt.
VEDANTAM: I mentioned to Terry some of the possible consequences of climate change on various species. There could be changes. Species could – some species could be advantaged. Some species could be disadvantaged.
The ecosystem is changing. You’re going to have flooding. You’re going to have weather events, right? There could be consequences that affect you and I.
LAMBERT: Yes, but like I say, it’s so far in the future I’m not worried about it.
VEDANTAM: I realized at that moment that the debate over climate change is no longer really about science unless the science you’re talking about is the study of human behavior.
I asked John why he thought so many people were unwilling to accept the scientific consensus that climate change was having real consequences.
NEARY: The inability to do anything about it themselves – because it’s threatening to think about giving up your car, giving up your oil heater in your house or giving up, you know, many of the things that you’ve become accustomed to. They seem very threatening to them.
And, you know, really, I’ve looked at some of the brain science, actually, and talked to folks at NASA and Earth and Sky, and they’ve actually talked about how when that fear becomes overriding for people, they use a part of their brain that’s the very primitive part that has to react.
It has to instantly come to a conclusion so that it can lead to an action, whereas what we need to think about is get rid of that fear and start thinking logically. Start thinking creatively. Allow a different part of the brain to kick in and really think how we as humans can reverse this trend that we’ve caused.
VEDANTAM: Coming up, we explore why the human brain might not be well-designed to grapple with the threat of climate change and what we can do about it. Stay with us.
(SOUNDBITE OF MUSIC)
VEDANTAM: This is HIDDEN BRAIN. I’m Shankar Vedantam. While visiting the Mendenhall Glacier with my family last year, I started thinking more and more about the intersection between climate change and human behavior.
When I got back to Washington, D.C., I called George Marshall. He’s an environmentalist who, like John Neary, tries to educate people about global climate change.
GEORGE MARSHALL: I am the founder of Climate Outreach, and I’m the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.”
VEDANTAM: As the book’s title suggests, George believes that the biggest roadblock in the battle against climate change may lie inside the human brain. I called George at his home in Wales.
(SOUNDBITE OF MUSIC)
VEDANTAM: You’ve spent some time talking with Daniel Kahneman, the famous psychologist who won the Nobel Prize in economics. And he actually presented a very pessimistic view about whether we would actually come to terms with the threat of climate change.
MARSHALL: He said to me that we are as humans very poor at dealing with issues further in the future. We tend to be very focused on the short term. We tend to ‘discount’ – that would be the economic term – to reduce the value of things happening in the future the further away they are.
He says we’re very cost averse. So that’s to say when there is a reward, we respond strongly. But when there’s a cost, we prefer to push it away just as, you know, I myself would try and leave until the very last minute, you know, filling in my tax return. I mean, it’s just, I don’t want to deal with these things. And he says, well, we’re reluctant to deal with uncertainty.
If things aren’t certain – or we perceive them not to be certain – we just say, well, come back and tell me when they’re certain. What he said to me was, in his view, that climate change is the worst possible combination because it’s not only in the future but it’s also in the future and uncertain, and it’s in the future, uncertain and involving costs.
And his own experiments – and he’s done many, many of these over the years – show that in this combination, we have a very strong tendency just to push things to one side. And I think this in some ways explains how so many people if you ask them will say, yes, I regard climate change to be a threat.
But if you go and ask them – and this happens every year in surveys – what are the most important issues, strangely, almost everybody seems to forget about climate change. So when we focus on it, we know it’s there, but we can somehow push it away.
VEDANTAM: You tell an amusing story in your book about some colleagues who were worried about a cellphone tower being erected in their neighborhood…
MARSHALL: (Laughter).
VEDANTAM: …And the very, very different reaction of these colleagues to the cellphone tower then to it’s sort of the amorphous threat of climate change.
MARSHALL: They were my neighbors, my entire community. I was living at that time in Oxford, which – as many of your listeners know – is a university town. So it would be like living in, you know, Harvard or Berkeley or somewhere where most of the people were in various ways involved in the university, highly educated. A mobile phone mast was being set up right alongside a school playground – enormous outcry. Everybody mobilized.
Down at the local church hall, they were all going to stop it. People were even going to lay themselves down in front of the bulldozers to prevent it, because it was here, it was now, and there was an enemy – this external mobile phone company that was going to come and put up this mast. It brings in what psychologists would call a dread fear – the absolute fear of radiation.
Now, the science, if we go back to the core science, says that this mobile phone mast is, as far as we could possibly say, harmless. You know, the amount of radiation of any kind you get off a single mobile phone mast has never been found to have the slightest impact on anyone. But they were very mobilized. At the same time, when I tried to get those same people to come to a meeting about climate change, none of them would come. It simply didn’t have those qualities.
VEDANTAM: You have a very revealing anecdote in your book about the economist Thomas Schelling, who was once in a major traffic jam.
MARSHALL: So Schelling – again, a Nobel Prize-winning economist – is stuck in this traffic jam, wondering what’s going on. The traffic is moving very, very, very slowly, they’re creeping along and creeping along, and half an hour along the road, they finally realize what has happened.
There’s a mattress lying right in the middle of the middle lane of the road. What he notices – and he does the same – is that when they reach the mattress, people simply drive past it and keep going. In other words, the thing that had caused them to be delayed was not something that anyone was prepared to stop and remove from the road.
They just leave the mattress there and keep driving past. Because, in a way, why would they remove the mattress from the road when they have already paid the price of getting there? They’ve already had the delay. It’s something where the benefit goes to other people. The argument being that, of course, it’s very hard to get people to do things, especially when they are motivated largely by personal rewards.
VEDANTAM: It’s interesting that the same narrative affects the way we talk about climate change internationally. There are many countries who now say, look, you know, I’ve already paid the price. I’m paying the price right now for the actions of other people for the, you know, things that other people have or have not done.
I’m bearing that cost, and you’re asking me now to get out of my car, pull the mattress off the road to bear an additional cost. And the only people who will benefit from that are people who are not me. The collective problems in the end have personal consequences.
MARSHALL: I have to say that the way one talks about this also shows how interpretation is biased by your own politics or your own worldview. This has been labeled for a long time the tragedy of the commons – the idea being that people will, if it’s in their own self-interest, destroy the very thing that sustains them, because it’s not in their personal interest to act if they don’t see other people doing the same. And in a way, it’s understandable.
But of course, that depends on a view of the world where you see people as being motivated entirely by their own personal rewards. We also know that people are motivated by their sense of identity and their sense of belonging. And we know very well – not least in times of major conflict or war – that people are prepared to make enormous personal sacrifices from which they personally derive nothing except loss, and they make them in the interests of the greater good.
For a long time with climate change, we’ve made the mistake of talking about this solely in economic terms. What are the economic costs, and what are the economic benefits? And we still do this. But of course, really, the motivation for acting on this is that we want to defend a world we care about and love, and we want to do so for ourselves and for the people who are to come.
VEDANTAM: So, George, there obviously is one domain in life where you can see people constantly placing these sacred values above their selfish self-interest. You know, I’m thinking here about the many, many religions we have in the world that get people to do all kinds of things that an economist would say is not in their rational self-interest.
People give up food. People give up water. People suffer enormous personal privations. People sometimes choose chastity for life – I mean, huge costs that people are willing to bear. And they’re not doing it because someone says, at the end of the year, I’m going to give you an extra 200 bucks in your paycheck or an extra $2,000 in your paycheck. They’re doing it because they believe these are sacred values that are not negotiable.
MARSHALL: Well, it’s not just economists who would find those behaviors strange; Professor Kahneman, or pure cognitive psychology, might as well, because these are people who believe passionately in things which are long-term, extremely uncertain and require personal cost. And yet people do so.
It’s very important to stress, you know, when we talk about climate change and religion, that there’s absolutely no sense at all that climate change is, or can, or should ever be like a religion. It’s not. It’s grounded in science. But we can also learn
I think a great deal from religions about how to approach these issues, these uncertain issues and how to create I think a community of shared belief and shared conviction that something is important.
VEDANTAM: Right. I mean, if you look at sort of human history with sort of the broad view, you know, you don’t actually have to be a religious person to acknowledge that religion has played a very, very important role in the lives of millions of people over thousands of years.
And if it’s done so, then a scientific approach would say there is something about the nature of religious belief or the practice of religion that works with what our brains can accommodate: it harnesses our yearning to be part of a tribe, our yearning to be connected to deeper and grander values than ourselves, our yearning in some ways to do things for our fellow person in a way that might not be tangible in the here and now but might actually pay off, as you say, not just for future generations but even in the hereafter.
MARSHALL: Well, and the faiths that dominate – the half a dozen faiths which are the strongest ones in the world – are the ones that have been best at doing that. There’s a big mistake with climate change: because it comes from science, we assume it just somehow soaks into us.
It’s very clear that just hitting people over the head with more and more data and graphs isn’t working. On my Internet feed – I’m on all of the main scientific feeds – there is a new paper every day that says that not only is it bad, but it’s worse than we thought, and it’s extremely, extremely serious – so serious, actually, that we’re finding it very hard even to find the words to describe it. That doesn’t move people. In fact, it tends to push them away.
However, if we can understand that there are other things which bind us together, I think that we can find new language. I think it’s also very important to recognize that the divides on climate change are social, not scientific. They’re social and political – the single biggest determinant of whether you accept it or not is your political values.
And that suggests that the solutions to this are not scientific, and maybe not even psychological. They’re cultural. We have to find ways of saying, sure, you know, we are going to disagree on things politically, but we have things in common that we all care about, and those are going to have to bring us together.
VEDANTAM: George Marshall is the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.” George, thank you for joining me today on HIDDEN BRAIN.
MARSHALL: You’re very welcome. I enjoyed it. Thank you.
VEDANTAM: The HIDDEN BRAIN podcast is produced by Kara McGuirk-Alison, Maggie Penman and Max Nesterak. Special thanks this week to Daniel Schuken (ph). To continue the conversation about human behavior and climate change, join us on Facebook and Twitter.
If you liked this episode, consider giving us a review on iTunes or wherever you listen to your podcasts so others can find us. I’m Shankar Vedantam, and this is NPR.
The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.
New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution entitled ‘Complementary Cognition’, which suggests that in adapting to dramatic environmental and climatic variability our ancestors evolved to specialise in different, but complementary, ways of thinking.
Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level, but instead of underlying physical adaptation, may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that it evolved in concert with specialisation in human cognition.”
The theory of complementary cognition proposes that our species adapts cooperatively and evolves culturally through a system of collective cognitive search, which enables behavioural adaptation, operating alongside genetic search, which enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can itself be interpreted as a ‘search’ process).
Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”
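To make the “search” framing concrete, here is a minimal illustrative sketch – not from the paper, with all names and parameters invented – of a collective search in which some agents specialise in exploring risky new variants while others exploit and refine the best-known solution:

```python
import random

def fitness(solution):
    # Toy objective: how well a candidate "solution" suits the environment.
    return -abs(solution - 42.0)

def collective_search(n_agents=20, n_generations=50, explorer_fraction=0.5):
    """Toy model of cooperative search mixing exploration and exploitation."""
    pool = [random.uniform(0, 100) for _ in range(n_agents)]
    for _ in range(n_generations):
        best = max(pool, key=fitness)  # the group's accumulated knowledge
        new_pool = []
        for i in range(n_agents):
            if i < n_agents * explorer_fraction:
                # Explorers: large, risky deviations from the best-known solution.
                new_pool.append(best + random.gauss(0, 10.0))
            else:
                # Exploiters: small refinements of the best-known solution.
                new_pool.append(best + random.gauss(0, 0.5))
        pool = new_pool
    return max(pool, key=fitness)

print(collective_search())  # converges near the optimum of the toy objective
```

The point of the sketch is only that neither strategy alone searches as effectively as the two combined, which is the intuition behind specialisation in complementary strategies.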
Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.
The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.
Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”
Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”
At the same time, this may also provide insights into the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies and cooperative adaptation would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. But in periods of greater stability and abundance, when adaptive knowledge did not become obsolete at such a rate, it would instead have accumulated; as such, complementary cognition may also be a key factor in explaining cumulative cultural evolution.
Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.
Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”
“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”
It’s called Dunbar’s number: an influential and oft-repeated theory suggesting the average person can only maintain about 150 stable social relationships with other people.
Proposed by British anthropologist and evolutionary psychologist Robin Dunbar in the early 1990s, Dunbar’s number, extrapolated from research into primate brain sizes and their social groups, has since become a ubiquitous part of the discourse on human social networks.
But just how legitimate is the science behind Dunbar’s number anyway? According to a new analysis by researchers from Stockholm University in Sweden, Dunbar’s famous figure doesn’t add up.
“The theoretical foundation of Dunbar’s number is shaky,” says zoologist and cultural evolution researcher Patrik Lindenfors.
“Other primates’ brains do not handle information exactly as human brains do, and primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are.”
Dunbar’s number was originally predicated on the idea that the volume of the neocortex in primate brains functions as a constraint on the size of the social groups they circulate amongst.
“It is suggested that the number of neocortical neurons limits the organism’s information-processing capacity and that this then limits the number of relationships that an individual can monitor simultaneously,” Dunbar explained in his foundational 1992 study.
“When a group’s size exceeds this limit, it becomes unstable and begins to fragment. This then places an upper limit on the size of groups which any given species can maintain as cohesive social units through time.”
But as to the original question of whether neocortex size serves as a valid constraint on group size beyond non-human primates, Lindenfors and his team aren’t so sure.
While a number of studies have offered support for Dunbar’s ideas, the new study challenges the claim that neocortex size in primates says anything precise about the limits of human group size.
“It is not possible to make an estimate for humans with any precision using available methods and data,” says evolutionary biologist Andreas Wartel.
In their study, the researchers used modern statistical methods including Bayesian and generalized least-squares (GLS) analyses to take another look at the relationship between group size and brain/neocortex sizes in primate brains, with the advantage of updated datasets on primate brains.
The results suggested that stable human group sizes might ultimately be much smaller than 150 individuals – one analysis put the average limit at up to 42 individuals, while another estimated a range of 70 to 107.
Ultimately, however, the enormous imprecision in the statistics suggests that any method like this – trying to compute an average number of stable relationships for any human individual based on brain volume – is unreliable at best.
“Specifying any one number is futile,” the researchers write in their study. “A cognitive limit on human group size cannot be derived in this manner.”
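To illustrate why such extrapolations carry such wide uncertainty, here is a rough sketch that fits a simple log-log regression of group size on neocortex ratio and computes a prediction interval at a human-scale ratio. The data points are invented, ordinary least squares stands in for the study’s Bayesian and phylogenetic GLS methods, and the human neocortex ratio of about 4.1 follows Dunbar’s original figure:

```python
import numpy as np
from scipy import stats

# Invented primate data (illustrative only, not the study's dataset):
neocortex_ratio = np.array([1.5, 1.8, 2.1, 2.4, 2.7, 3.0, 3.2])
group_size      = np.array([5,   9,   14,  22,  30,  45,  55])

x, y = np.log(neocortex_ratio), np.log(group_size)
n = len(x)
slope, intercept = np.polyfit(x, y, 1)

# Residual variance and standard error for a *new* observation
# (prediction interval, which widens sharply when extrapolating).
resid = y - (intercept + slope * x)
s2 = resid @ resid / (n - 2)
x_h = np.log(4.1)  # roughly human-scale neocortex ratio (assumed value)
se_pred = np.sqrt(s2 * (1 + 1/n + (x_h - x.mean())**2 / ((x - x.mean())**2).sum()))

t = stats.t.ppf(0.975, df=n - 2)
center = intercept + slope * x_h
lo, hi = np.exp(center - t * se_pred), np.exp(center + t * se_pred)
print(f"point estimate ~{np.exp(center):.0f}, 95% prediction interval ~{lo:.0f}-{hi:.0f}")
```

Because the human value lies well outside the range of the primate data, the interval on the back-transformed scale spans a huge range – the same qualitative problem the Stockholm team reports.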
Despite the mainstream attention Dunbar’s number enjoys, the researchers say the majority of primate social evolution research focuses on socio-ecological factors, including foraging and predation, infanticide, and sexual selection – not so much calculations dependent on brain or neocortex volume.
Further, the researchers argue that Dunbar’s number ignores other significant differences in brain physiology between human and non-human primate brains – including that humans develop cultural mechanisms and social structures that can counter socially limiting cognitive factors that might otherwise apply to non-human primates.
“Ecological research on primate sociality, the uniqueness of human thinking, and empirical observations all indicate that there is no hard cognitive limit on human sociality,” the team explains.
“It is our hope, though perhaps futile, that this study will put an end to the use of ‘Dunbar’s number’ within science and in popular media.”
Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.
It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.
As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”
The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.
In the ensuing months, Mark Zuckerberg began making his own apologies. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.
Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.
Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.
Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”
In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.
He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.
I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?
Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.
But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”
In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.
Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.
His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.
Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.
Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
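As a minimal sketch of the training step described here – with entirely synthetic data and a generic open-source model standing in for Facebook’s actual stack – the yoga-leggings example might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic ad-click log: features are [is_woman, age, liked_yoga_pages].
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, 1000),    # is_woman
    rng.integers(18, 65, 1000),  # age
    rng.integers(0, 2, 1000),    # liked_yoga_pages
])
# Toy "ground truth": women who liked yoga pages click far more often.
p_click = 0.05 + 0.3 * (X[:, 0] * X[:, 2])
y = rng.random(1000) < p_click

# "Training" = learning the correlations in the click data.
model = LogisticRegression(max_iter=1000).fit(X, y)

# The resulting model automates future decisions: serve the ad or not.
print(model.predict_proba([[1, 30, 1]])[0, 1])  # high predicted click probability
print(model.predict_proba([[0, 30, 0]])[0, 1])  # low predicted click probability
```

In practice, as the article notes, engineers generate many such models with slight variations and keep whichever performs best.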
Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.
Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.
News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
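Conceptually, the feed-ranking step reduces to scoring each candidate post with a trained engagement model and sorting. A toy sketch, with invented names and precomputed scores rather than any real system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g. a model's P(like or share) for this user

def rank_feed(candidates):
    """Order candidate posts so the highest predicted engagement appears first."""
    return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("cute dog photo", 0.62),
    Post("local news story", 0.08),
    Post("friend's vacation album", 0.31),
])
print([p.text for p in feed])  # the dog post ranks first for a dog-loving user
```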
Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.
They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)
In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.
Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
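The L6/7 metric itself is simple to state precisely: the share of users who logged in on at least six of the previous seven days. A minimal sketch of the computation, with a hypothetical data layout:

```python
from datetime import date, timedelta

def l6_7(login_dates_by_user, today):
    """Fraction of users who logged in on >= 6 of the previous 7 days."""
    window = {today - timedelta(days=d) for d in range(1, 8)}
    active = sum(
        1 for dates in login_dates_by_user.values()
        if len(window & set(dates)) >= 6
    )
    return active / len(login_dates_by_user)

logins = {
    "alice": [date(2021, 3, d) for d in range(1, 8)],  # logged in all 7 days
    "bob":   [date(2021, 3, 1), date(2021, 3, 4)],     # only 2 days
}
print(l6_7(logins, today=date(2021, 3, 8)))  # 0.5
```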
Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”
With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.
If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
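In effect, the deployment rule described here is a one-directional gate on engagement metrics. A simplified sketch of that logic, with a hypothetical tolerance and function names:

```python
def deployment_decision(control_engagement, treatment_engagement, tolerance=0.01):
    """Ship a new ranking model only if it doesn't cost too much engagement.

    `tolerance` is an invented allowed relative drop. Note the asymmetry:
    harms other than lost engagement never enter the decision.
    """
    relative_change = (treatment_engagement - control_engagement) / control_engagement
    if relative_change < -tolerance:
        return "discard"          # model reduces engagement too much
    return "deploy_and_monitor"   # otherwise ship it and watch the metrics

print(deployment_decision(control_engagement=100.0, treatment_engagement=97.0))  # discard
```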
But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”
While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.
In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.
Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)
But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.
One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”
Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.
That former employee, meanwhile, no longer lets his daughter use Facebook.
Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.
It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.
At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.
Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.
Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.
“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”
At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.
The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
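A minimal sketch of how such a default-hide rule might look, with an extremity score standing in for the output of a sentiment or stance model; the names and the threshold are invented:

```python
def render_comment(comment_text, extremity_score, threshold=0.9):
    """Hide, but don't delete, comments the model scores as extreme.

    `extremity_score` stands in for a sentiment-analysis model's output;
    the 0.9 threshold is an arbitrary illustrative choice.
    """
    if extremity_score >= threshold:
        return {"visible_by_default": False,
                "placeholder": "Comment hidden - click to view",
                "text": comment_text}
    return {"visible_by_default": True, "text": comment_text}

print(render_comment("A measured disagreement.", extremity_score=0.2))
print(render_comment("An extreme rant.", extremity_score=0.95))
```

The design keeps the comment available on request, which is what distinguishes the proposal from deletion: it limits reach without removing speech.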
And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.
Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.
On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.
For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.
Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.
On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.
It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.
This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.
(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)
But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.
Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.
But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.
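The two charts describe, in effect, replacing a ranking score that rises as content approaches the policy line with one that is penalized as it approaches it. A toy sketch of that inversion, with an invented scoring formula rather than anything Facebook has published:

```python
def final_rank_score(engagement_score, borderline_prob, penalty_strength=2.0):
    """Demote content a classifier scores as close to violating policy.

    Without the penalty, engagement-maximizing ranking rewards borderline
    content; the multiplier reverses that. `penalty_strength` is illustrative.
    """
    return engagement_score * (1.0 - borderline_prob) ** penalty_strength

# A highly engaging but near-violating post now ranks below a tamer one.
print(final_rank_score(engagement_score=0.9, borderline_prob=0.8))  # 0.036
print(final_rank_score(engagement_score=0.5, borderline_prob=0.1))  # 0.405
```

Note that the whole scheme depends on the `borderline_prob` classifier being accurate, which is exactly where the strategy runs into trouble, as the next paragraphs explain.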
The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.
Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.
Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”
But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.
A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.
“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”
When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.
“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”
Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.
Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
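The core measurement is straightforward to sketch: compute the model’s accuracy separately for each user group, then apply whichever fairness criterion was chosen. A minimal illustration with hypothetical data and an invented threshold (Fairness Flow itself is internal to Facebook):

```python
def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy for a model's predictions."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# E.g. a speech model's per-accent accuracy, checked against a minimum threshold:
acc = accuracy_by_group(
    predictions=[1, 1, 0, 0, 1, 0],
    labels=[1, 0, 0, 0, 1, 1],
    groups=["accent_a", "accent_a", "accent_a",
            "accent_b", "accent_b", "accent_b"],
)
print(acc)                                    # per-group accuracies
print({g: a >= 0.6 for g, a in acc.items()})  # "minimum threshold" definition
```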
But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.
This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.
In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.
All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
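The disagreement can be stated precisely. Under the documentation’s definition, flag counts should track each side’s actual misinformation base rate; under an “equal impact” reading, flag counts must be equalized regardless of base rates. A toy comparison, with all numbers invented:

```python
def flags(base_rate, n_posts, recall=0.8):
    """Flag misinformation wherever it occurs: flags track the base rate."""
    return base_rate * n_posts * recall

# Invented base rates: one side posts misinformation twice as often.
side_a = flags(base_rate=0.10, n_posts=1000)  # 80 flags
side_b = flags(base_rate=0.05, n_posts=1000)  # 40 flags

# "Equal impact" instead forces both sides down to the lower flag count,
# i.e. the model must ignore half of side A's detected misinformation.
equalized = min(side_a, side_b)
print(side_a, side_b, equalized)  # 80.0 40.0 40.0
```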
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
“I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
Ellery Roberts Biddle, editorial director of Ranking Digital Rights
This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.
Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.
Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.
Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”
The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.
In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.
I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”
Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.
I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”
Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.
I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.
“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”
Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.
Cambridge University team say their findings could be used to spot people at risk from radicalisation
A key finding of the psychologists was that people with extremist attitudes tended to think about the world in a black and white way. Photograph: designer491/Getty Images/iStockphoto
Our brains hold clues for the ideologies we choose to live by, according to research suggesting that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.
Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpts ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.
The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.
The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participants’ perception and learning, and their ability to engage in complex and strategic mental processing.
A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.
“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.
She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”
Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.
“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.
In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.
“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimulus that they encounter with caution.”
The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.
The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.
“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”
The coronavirus pandemic has triggered some interesting and unusual changes in our buying behavior
Date: September 10, 2020
Source: University of Technology Sydney
Summary: Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper.
Rushing to stock up on toilet paper before it vanished from the supermarket aisle, stashing cash under the mattress, purchasing a puppy or perhaps planting a vegetable patch — the COVID-19 pandemic has triggered some interesting and unusual changes in our behavior.
Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper published in the Journal of Behavioral Economics for Policy.
‘Hoarding in the age of COVID-19’ by behavioral economist Professor Michelle Baddeley, Deputy Dean of Research at the University of Technology Sydney (UTS) Business School, examines a range of cross-disciplinary explanations for hoarding and other behavior changes observed during the pandemic.
“Understanding these economic, social and psychological responses to COVID-19 can help governments and policymakers adapt their policies to limit negative impacts, and nudge us towards better health and economic outcomes,” says Professor Baddeley.
Governments around the world have set up behavioral insights units to help guide public policy, and to influence public decision-making and compliance.
Hoarding behavior, where people collect or accumulate things such as money or food in excess of their immediate needs, can lead to shortages, or in the case of hoarding cash, have negative impacts on the economy.
“In economics, hoarding is often explored in the context of savings. When consumer confidence is down, spending drops and households increase their savings if they can, because they expect bad times ahead,” explains Professor Baddeley.
“Fear and anxiety also have an impact on financial markets. The VIX ‘fear’ index of financial market volatility saw a dramatic 564% increase between November 2019 and March 2020, as investors rushed to move their money into ‘safe haven’ investments such as bonds.”
While shifts in savings and investments in the face of a pandemic might make economic sense, the hoarding of toilet paper, which also occurred across the globe, is more difficult to explain in traditional economic terms, says Professor Baddeley.
Behavioural economics reveals that our decisions are not always rational or in our long-term interest, and can be influenced by a wide range of psychological factors and unconscious biases, particularly in times of uncertainty.
“Evolved instincts dominate in stressful situations, as a response to panic and anxiety. During times of stress and deprivation, not only people but also many animals show a propensity to hoard.”
Another instinct that can come to the fore, particularly in times of stress, is the desire to follow the herd, says Professor Baddeley, whose book ‘Copycats and Contrarians’ explores the concept of herding in greater detail.
“Our propensity to follow others is complex. Some of our reasons for herding are well-reasoned. Herding can be a type of heuristic: a decision-making short-cut that saves us time and cognitive effort,” she says.
“When other people’s choices might be a useful source of information, we use a herding heuristic and follow them because we believe they have good reasons for their actions. We might choose to eat at a busy restaurant because we assume the other diners know it is a good place to eat.
“However numerous experiments from social psychology also show that we can be blindly susceptible to the influence of others. So when we see others rushing to the shops to buy toilet paper, we fear missing out and follow the herd. It then becomes a self-fulfilling prophecy.”
Behavioral economics also highlights the importance of social conventions and norms in our decision-making processes, and this is where rules can serve an important purpose, says Professor Baddeley.
“Most people are generally law abiding but they might not wear a mask if they think it makes them look like a bit of a nerd, or overanxious. If there is a rule saying you have to wear a mask, this gives people guidance and clarity, and it stops them worrying about what others think.
“So the normative power of rules is very important. Behavioral insights and nudges can then support these rules and policies, to help governments and business prepare for second waves, future pandemics or other global crises.”
Humans as a species are adept at using numbers, but our mathematical ability is something we share with a surprising array of other creatures.
One of the key findings over the past decades is that our number faculty is deeply rooted in our biological ancestry, and not based on our ability to use language. Considering the multitude of situations in which we humans use numerical information, life without numbers is inconceivable.
But what was the benefit of numerical competence for our ancestors, before they became Homo sapiens? Why would animals crunch numbers in the first place?
It turns out that processing numbers offers a significant benefit for survival, which is why this behavioural trait is present in many animal populations. Several studies examining animals in their ecological environments suggest that representing numbers enhances an animal’s ability to exploit food sources, hunt prey, avoid predation, navigate its habitat, and persist in social interactions.
Before numerically competent animals evolved on the planet, single-celled microscopic bacteria – the oldest living organisms on Earth – already exploited quantitative information. The way bacteria make a living is through their consumption of nutrients from their environment. Mostly, they grow and divide themselves to multiply. However, in recent years, microbiologists have discovered they also have a social life and are able to sense the presence or absence of other bacteria. In other words, they can sense the number of bacteria.
Take, for example, the marine bacterium Vibrio fischeri. It has a special property that allows it to produce light through a process called bioluminescence, similar to how fireflies give off light. If these bacteria are in dilute water solutions (where they are essentially alone), they make no light. But when they reach a certain cell number, all of them produce light simultaneously. Therefore, Vibrio fischeri can distinguish when they are alone and when they are together.
Sometimes the numbers don’t add up when predators are trying to work out which prey to target (Credit: Alamy)
It turns out they do this using a chemical language. They secrete communication molecules, and the concentration of these molecules in the water increases in proportion to the cell number. And when this molecule hits a certain amount, called a “quorum”, it tells the other bacteria how many neighbours there are, and all the bacteria glow.
This behaviour is called “quorum sensing” – the bacteria vote with signalling molecules, the vote gets counted, and if a certain threshold (the quorum) is reached, every bacterium responds. This behaviour is not just an anomaly of Vibrio fischeri – all bacteria use this sort of quorum sensing to communicate their cell number in an indirect way via signalling molecules.
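As a rough illustration, quorum sensing can be caricatured in a few lines of Python. The proportionality of signal to cell count follows the description above; the specific threshold value is an arbitrary assumption.

```python
def quorum_reached(cell_count, molecules_per_cell=1.0, quorum=100.0):
    """Return True (glow) once the pooled signal crosses the quorum.

    Assumes signal concentration grows in direct proportion to the
    number of cells; the threshold value is illustrative.
    """
    concentration = cell_count * molecules_per_cell
    return concentration >= quorum

for n in (10, 50, 100, 500):
    print(f"{n:>4} cells -> glow: {quorum_reached(n)}")
```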
Remarkably, quorum sensing is not confined to bacteria – animals use it to get around, too. Japanese ants (Myrmecina nipponica), for example, decide to move their colony to a new location if they sense a quorum. In this form of consensus decision making, ants start to transport their brood together with the entire colony to a new site only if a defined number of ants are present at the destination site. Only then, they decide, is it safe to move the colony.
Numerical cognition also plays a vital role when it comes to both navigation and developing efficient foraging strategies. In 2008, biologists Marie Dacke and Mandyam Srinivasan performed an elegant and thoroughly controlled experiment in which they found that bees are able to estimate the number of landmarks in a flight tunnel to reach a food source – even when the spatial layout is changed. Honeybees rely on landmarks to measure the distance of a food source to the hive. Assessing numbers is vital to their survival.
When it comes to optimal foraging, “going for more” is a good rule of thumb in most cases, and seems obvious when you think about it, but sometimes the opposite strategy is favourable. The field mouse loves live ants, but ants are dangerous prey because they bite when threatened. When a field mouse is placed into an arena together with two ant groups of different quantities, it surprisingly “goes for less”. In one study, mice that could choose between five versus 15, five versus 30, and 10 versus 30 ants always preferred the smaller quantity of ants. The field mice seem to pick the smaller ant group in order to ensure comfortable hunting and to avoid getting bitten frequently.
Numerical cues play a significant role when it comes to hunting prey in groups, as well. The probability, for example, that wolves capture elk or bison varies with the group size of a hunting party. Wolves often hunt large prey, such as elk and bison, but large prey can kick, gore, and stomp wolves to death. Therefore, there is incentive to “hold back” and let others go in for the kill, particularly in larger hunting parties. As a consequence, wolves have an optimal group size for hunting different prey. For elks, capture success levels off at two to six wolves. However, for bison, the most formidable prey, nine to 13 wolves are the best guarantor of success. Therefore, for wolves, there is “strength in numbers” during hunting, but only up to a certain number that is dependent on the toughness of their prey.
Animals that are more or less defenceless often seek shelter among large groups of social companions – the strength-in-numbers survival strategy hardly needs explaining. But hiding out in large groups is not the only anti-predation strategy involving numerical competence.
In 2005, a team of biologists at the University of Washington found that black-capped chickadees developed a surprising way to announce the presence and dangerousness of a predator. Like many other animals, chickadees produce alarm calls when they detect a potential predator, such as a hawk, to warn their fellow chickadees. For stationary predators, these little songbirds use their namesake “chick-a-dee” alarm call. It has been shown that the number of “dee” notes at the end of this alarm call indicates the danger level of a predator.
Chickadees produce different numbers of “dee” notes at the end of their call depending on the danger they have spotted (Credit: Getty Images)
A call such as “chick-a-dee-dee” with only two “dee” notes may indicate a rather harmless great grey owl. Great grey owls are too big to manoeuvre and follow the agile chickadees in woodland, so they aren’t a serious threat. In contrast, manoeuvring between trees is no problem for the small pygmy owl, which is why it is one of the most dangerous predators for these small birds. When chickadees see a pygmy owl, they increase the number of “dee” notes and call “chick-a-dee-dee-dee-dee.” Here, the number of sounds serves as an active anti-predation strategy.
Groups and group size also matter if resources cannot be defended by individuals alone – and the ability to assess the number of individuals in one’s own group relative to the opponent party is of clear adaptive value.
Several mammalian species have been investigated in the wild, and the common finding is that numerical advantage determines the outcome of such fights. In a pioneering study, zoologist Karen McComb and co-workers at the University of Sussex investigated the spontaneous behaviour of female lions at the Serengeti National Park when facing intruders. The authors exploited the fact that wild animals respond to vocalisations played through a speaker as though real individuals were present. If the playback sounds like a foreign lion that poses a threat, the lionesses would aggressively approach the speaker as the source of the enemy. In this acoustic playback study, the authors mimicked hostile intrusion by playing the roaring of unfamiliar lionesses to residents.
Two conditions were presented to subjects: either the recordings of single female lions roaring, or of groups of three females roaring together. The researchers were curious to see if the number of attackers and the number of defenders would have an impact on the defender’s strategy. Interestingly, a single defending female was very hesitant to approach the playbacks of a single or three intruders. However, three defenders readily approached the roaring of a single intruder, but not the roaring of three intruders together.
Obviously, the risk of getting hurt when entering a fight with three opponents was daunting. Only if the number of residents was five or more did the lionesses approach the roars of three intruders. In other words, lionesses decide to approach intruders aggressively only if they outnumber the latter – another clear example of an animal’s ability to take quantitative information into account.
Our closest cousins in the animal kingdom, the chimpanzees, show a very similar pattern of behaviour. Using a similar playback approach, Michael Wilson and colleagues from Harvard University found that the chimpanzees behaved like military strategists. They intuitively follow equations used by military forces to calculate the relative strengths of opponent parties. In particular, chimpanzees follow predictions made in Lanchester’s “square law” model of combat. This model predicts that, in contests with multiple individuals on each side, chimpanzees in this population should be willing to enter a contest only if they outnumber the opposing side by a factor of at least 1.5. And that is precisely what wild chimps do.
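Written out, the reported 1.5 threshold is simply the square-law condition on group strengths. Assuming equal individual fighting ability (a simplification of the model), a party of $m$ attackers facing $n$ defenders should engage only when

\[
m^{2} \;\ge\; 2.25\,n^{2} \quad\Longleftrightarrow\quad m \;\ge\; 1.5\,n ,
\]

since under Lanchester's square law a side's effective strength scales with the square of its numbers.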
Lionesses judge how many intruders they may be facing before approaching them (Credit: Alamy)
Staying alive – from a biological stance – is a means to an end, and the aim is the transmission of genes. In mealworm beetles (Tenebrio molitor), many males mate with many females, and competition is intense. Therefore, a male beetle will always go for more females in order to maximise his mating opportunities. After mating, males even guard females for some time to prevent further mating acts from other males. The more rivals a male has encountered before mating, the longer he will guard the female after mating.
It is obvious that such behaviour plays an important role in reproduction and therefore has a high adaptive value. Being able to estimate quantity has improved males’ sexual competitiveness. This may in turn be a driving force for more sophisticated cognitive quantity estimation throughout evolution.
One may think that everything is won by successful copulation. But that is far from the truth for some animals, for whom the real prize is fertilising an egg. Once the individual male mating partners have accomplished their part in the play, the sperm continues to compete for the fertilisation of the egg. Since reproduction is of paramount importance in biology, sperm competition causes a variety of adaptations at the behavioural level.
In both insects and vertebrates, the males’ ability to estimate the magnitude of competition determines the size and composition of the ejaculate. In the pseudoscorpion, Cordylochernes scorpioides, for example, it is common that several males copulate with a single female. Obviously, the first male has the best chances of fertilising this female’s egg, whereas the following males face slimmer and slimmer chances of fathering offspring. However, the production of sperm is costly, so the allocation of sperm is weighed considering the chances of fertilising an egg.
Males smell the number of competitor males that have copulated with a female and adjust by progressively decreasing sperm allocation as the number of different male olfactory cues increases from zero to three.
Some bird species, meanwhile, have invented a whole arsenal of trickery to get rid of the burden of parenthood and let others do the job. Breeding a clutch and raising young are costly endeavours, after all. They become brood parasites by laying their eggs in other birds’ nests and letting the host do all the hard work of incubating eggs and feeding hatchlings. Naturally, the potential hosts are not pleased and do everything to avoid being exploited. And one of the defence strategies the potential host has at its disposal is the usage of numerical cues.
American coots, for example, sneak eggs into their neighbours’ nests and hope to trick them into raising the chicks. Of course, their neighbours try to avoid being exploited. A study in the coots’ natural habitat suggests that potential coot hosts can count their own eggs, which helps them to reject parasitic eggs. They typically lay an average-sized clutch of their own eggs, and later reject any surplus parasitic egg. Coots therefore seem to assess the number of their own eggs and ignore any others.
An even more sophisticated type of brood parasitism is found in cowbirds, a songbird species that lives in North America. In this species, females also deposit their eggs in the nests of a variety of host species, from birds as small as kinglets to those as large as meadowlarks, and they have to be smart in order to guarantee that their future young have a bright future.
Cowbird eggs hatch after exactly 12 days of incubation; if incubation is only 11 days, the chicks do not hatch and are lost. It is therefore not an accident that the incubation times for the eggs of the most common hosts range from 11 to 16 days, with an average of 12 days. Host birds usually lay one egg per day – once one day elapses with no egg added by the host to the nest, the host has begun incubation. This means the chicks start to develop in the eggs, and the clock begins ticking. For a cowbird female, it is therefore not only important to find a suitable host, but also to precisely time their egg laying appropriately. If the cowbird lays her egg too early in the host nest, she risks her egg being discovered and destroyed. But if she lays her egg too late, incubation time will have expired before her cowbird chick can hatch.
Female cowbirds perform some incredible mental arithmetic to know when to lay their eggs in the nest of a host bird (Credit: Alamy)
Clever experiments by David J White and Grace Freed-Brown from the University of Pennsylvania suggest that cowbird females carefully monitor the host’s clutch to synchronise their parasitism with a potential host’s incubation. The cowbird females watch out for host nests in which the number of eggs has increased since her first visit. This guarantees that the host is still in the laying process and incubation has not yet started. In addition, the cowbird is looking out for nests that contain exactly one additional egg per number of days that have elapsed since her initial visit.
For instance, if the cowbird female visited a nest on the first day and found one host egg in the nest, she will only deposit her own egg if the host nest contains three eggs on the third day. If the nest contains fewer additional eggs than the number of days that have passed since the last visit, she knows that incubation has already started and it is useless for her to lay her own egg. It is incredibly cognitively demanding, since the female cowbird needs to visit a nest over multiple days, remember the clutch size from one day to the next, evaluate the change in the number of eggs in the nest from a past visit to the present, assess the number of days that have passed, and then compare these values to make a decision to lay her egg or not.
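The decision rule implied by these experiments is simple to state, even if it is cognitively demanding for a bird. Here is a hedged sketch in Python; the function and its arguments are hypothetical labels for the quantities described above.

```python
def should_lay_egg(eggs_on_first_visit, eggs_now, days_elapsed):
    """Lay only if the clutch grew by exactly one egg per elapsed day,
    i.e. the host is still laying and incubation has not begun."""
    return eggs_now == eggs_on_first_visit + days_elapsed

# The example from the text: one egg on day 1, three eggs on day 3.
print(should_lay_egg(eggs_on_first_visit=1, eggs_now=3, days_elapsed=2))  # True
# Fewer new eggs than elapsed days -> incubation has started; laying is useless.
print(should_lay_egg(eggs_on_first_visit=1, eggs_now=2, days_elapsed=2))  # False
```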
But this is not all. Cowbird mothers also have sinister reinforcement strategies. They keep watch on the nests where they’ve laid their eggs. In an attempt to protect their egg, the cowbirds act like mafia gangsters. If the cowbird finds that her egg has been destroyed or removed from the host’s nest, she retaliates by destroying the host bird’s eggs, pecking holes in them or carrying them out of the nest and dropping them on the ground. The host birds had better raise the cowbird nestling, or else they will pay dearly. For the host parents, it may therefore be worth going through all the trouble of raising a foster chick, from an adaptive point of view.
The cowbird is an astounding example of how far evolution has driven some species to stay in the business of passing on their genes. The existing selection pressures, whether imposed by the inanimate environment or by other animals, force populations of species to maintain or increase adaptive traits caused by specific genes. If assessing numbers helps in this struggle to survive and reproduce, it surely is appreciated and relied on.
This explains why numerical competence is so widespread in the animal kingdom: it evolved either because it was discovered by a common ancestor and passed on to all descendants, or because it was invented independently across different branches of the animal tree of life.
Irrespective of its evolutionary origin, one thing is certain – numerical competence is most certainly an adaptive trait.
* This article originally appeared in The MIT Press Reader, and is republished under a Creative Commons licence. Andreas Nieder is Professor of Animal Physiology and Director of the Institute of Neurobiology at the University of Tübingen and the author of A Brain for Numbers, from which this article is adapted.
A simple mathematical mistake may explain why many people underestimate the dangers of coronavirus, shunning social distancing, masks and hand-washing.
Imagine you are offered a deal with your bank, where your money doubles every three days. If you invest just $1 today, roughly how long will it take for you to become a millionaire?
Would it be a year? Six months? 100 days?
The precise answer is 60 days from your initial investment, when your balance would be exactly $1,048,576. Within a further 30 days, you’d have earnt more than a billion. And by the end of the year, you’d have more than $1,000,000,000,000,000,000,000,000,000,000,000,000 – an “undecillion” dollars.
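A few lines of Python confirm the arithmetic, under the stated assumption that the balance doubles every three days:

```python
# $1 doubling every 3 days: how long until the balance tops $1,000,000?
balance, day = 1, 0
while balance < 1_000_000:
    balance *= 2
    day += 3
print(day, balance)  # 60 1048576 -> exactly 2**20 dollars on day 60
```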
If your estimates were way out, you are not alone. Many people consistently underestimate how fast the value increases – a mistake known as the “exponential growth bias” – and while it may seem abstract, it may have had profound consequences for people’s behaviour this year.
A spate of studies has shown that people who are susceptible to the exponential growth bias are less concerned about Covid-19’s spread, and less likely to endorse measures like social distancing, hand washing or mask wearing. In other words, this simple mathematical error could be costing lives – meaning that the correction of the bias should be a priority as we attempt to flatten curves and avoid second waves of the pandemic around the world.
To understand the origins of this particular bias, we first need to consider different kinds of growth. The most familiar is “linear”. If your garden produces three apples every day, you have six after two days, nine after three days, and so on.
Exponential growth, by contrast, accelerates over time. Perhaps the simplest example is population growth; the more people you have reproducing, the faster the population grows. Or if you have a weed in your pond that triples each day, the number of plants may start out low – just three on day two, and nine on day three – but it soon escalates (see diagram, below).
Many people assume that coronavirus spreads in a linear fashion, but unchecked it’s exponential (Credit: Nigel Hawtin)
Our tendency to overlook exponential growth has been known for millennia. According to an Indian legend, the brahmin Sissa ibn Dahir was offered a prize for inventing an early version of chess. He asked for one grain of wheat to be placed on the first square on the board, two for the second square, four for the third square, doubling each time up to the 64th square. The king apparently laughed at the humility of ibn Dahir’s request – until his treasurers reported that it would outstrip all the food in the land (18,446,744,073,709,551,615 grains in total).
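The total in the legend is a geometric series, and the arithmetic checks out:

\[
\sum_{k=0}^{63} 2^{k} \;=\; 2^{64}-1 \;=\; 18{,}446{,}744{,}073{,}709{,}551{,}615 .
\]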
It was only in the late 2000s that scientists started to study the bias formally, with research showing that most people – like Sissa ibn Dahir’s king – intuitively assume that most growth is linear, leading them to vastly underestimate the speed of exponential increase.
These initial studies were primarily concerned with the consequences for our bank balance. Most savings accounts offer compound interest, for example, where you accrue additional interest on the interest you have already earned. This is a classic example of exponential growth, and it means that even low interest rates pay off handsomely over time. If you have a 5% interest rate, then £1,000 invested today will be worth £1,050 next year, and £1,102.50 the year after… which adds up to more than £7,000 in 40 years’ time. Yet most people don’t recognise how much more bang for their buck they will receive if they start investing early, so they leave themselves short for their retirement.
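The figures in that example follow directly from the compound-interest formula, balance after $t$ years $= 1000 \times 1.05^{t}$:

\[
1000 \times 1.05 = 1050, \qquad
1000 \times 1.05^{2} = 1102.50, \qquad
1000 \times 1.05^{40} \approx 7040 .
\]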
If the number of grains on a chess board doubled for each square, the 64th would ‘hold’ 18 quintillion (Credit: Getty Images)
Surprisingly, a higher level of education does not prevent people from making these errors. Even mathematically trained science students can be vulnerable, says Daniela Sele, who researches economic decision making at the Swiss Federal Institute of Technology in Zurich. “It does help somewhat, but it doesn’t preclude the bias,” she says.
As I explored in my book The Intelligence Trap, intelligent and educated people often have a “bias blind spot”, believing themselves to be less susceptible to error than others – and the exponential growth bias appears to fall dead-centre within it.
Most people will confidently report understanding exponential growth but then still fall for the bias
It was only this year – at the start of the Covid-19 pandemic – that researchers began to consider whether the bias might also influence our understanding of infectious diseases.
According to various epidemiological studies, without intervention the number of new Covid-19 cases doubles every three to four days, which was the reason that so many scientists advised rapid lockdowns to prevent the pandemic from spiralling out of control.
In March, Joris Lammers at the University of Bremen in Germany joined forces with Jan Crusius and Anne Gast at the University of Cologne to roll out online surveys questioning people about the potential spread of the disease. Their results showed that the exponential growth bias was prevalent in people’s understanding of the virus’s spread, with most people vastly underestimating the rate of increase. More importantly, the team found that those beliefs were directly linked to the participants’ views on the best ways to contain the spread. The worse their estimates, the less likely they were to understand the need for social distancing: the exponential growth bias had made them complacent about the official advice.
The charts that politicians show often fail to communicate exponential growth effectively (Credit: Reuters)
This chimes with other findings by Ritwik Banerjee and Priyama Majumdar at the Indian Institute of Management in Bangalore, and Joydeep Bhattacharya at Iowa State University. In their study (currently under peer review), they found susceptibility to the exponential growth bias can predict reduced compliance with the World Health Organization’s recommendations – including mask wearing, handwashing, the use of sanitisers and self-isolation.
The researchers speculate that some of the graphical representations found in the media may have been counter-productive. It’s common for the number of infections to be presented on a “logarithmic scale”, in which the figures on the y-axis increase by a power of 10 (so the gap between 1 and 10 is the same as the gap between 10 and 100, or 100 and 1000).
While this makes it easier to plot different regions with low and high growth rates, it means that exponential growth looks more linear than it really is, which could reinforce the exponential growth bias. “To expect people to use the logarithmic scale to extrapolate the growth path of a disease is to demand a very high level of cognitive ability,” the authors told me in an email. In their view, simple numerical tables may actually be more powerful.
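The visual difference is easy to reproduce. The sketch below assumes matplotlib and an illustrative series that doubles every three days, and plots the same data on a linear and a logarithmic axis; on the latter, exponential growth appears as a straight line.

```python
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(0, 61)
cases = 100 * 2 ** (days / 3)          # doubling every three days

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))
ax_lin.plot(days, cases)
ax_lin.set_title("Linear axis: growth looks explosive")
ax_log.plot(days, cases)
ax_log.set_yscale("log")               # powers of 10 evenly spaced
ax_log.set_title("Log axis: the same data look straight")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("days")
    ax.set_ylabel("cases")
fig.tight_layout()
plt.show()
```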
Even a small effort to correct this bias could bring huge benefits
The good news is that people’s views are malleable. When Lammers and colleagues reminded the participants of the exponential growth bias, and asked them to calculate the growth in regular steps over a two week period, people hugely improved their estimates of the disease’s spread – and this, in turn, changed their views on social distancing. Sele, meanwhile, has recently shown that small changes in framing can matter. Emphasising the short amount of time that it will take to reach a large number of cases, for instance – and the time that would be gained by social distancing measures – improves people’s understanding of accelerating growth, rather than simply stating the percentage increase each day.
Lammers believes that the exponential nature of the virus needs to be made more salient in coverage of the pandemic. “I think this study shows how media and government should report on a pandemic in such a situation. Not only report the numbers of today and growth over the past week, but also explain what will happen in the next days, week, month, if the same accelerating growth persists,” he says.
He is confident that even a small effort to correct this bias could bring huge benefits. In the US, where the pandemic has hit hardest, it took only a few months for the virus to infect more than five million people, he says. “If we could have overcome the exponential growth bias and had convinced all Americans of this risk back in March, I am sure 99% would have embraced all possible distancing measures.”
[The author's irony seems to suggest that he did not fully understand the subject he is covering. There are inconsistent sentences, such as "the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world", for example. RT]
The study grew out of the criminal case of a man named McArthur Wheeler who, in broad daylight on 19 April 1995, decided to rob two banks in Pittsburgh, in the United States. Wheeler carried a gun, but no mask. Surveillance cameras caught him in the act, and the police released his picture on the local news, receiving tips on his whereabouts almost immediately.
A graph showing the Dunning-Kruger effect. Image adapted from Wikimedia.
When they went to arrest him, Mr. Wheeler was visibly confused.
"But I was covered in juice," he said, before the officers took him away.
There are no "foolproof methods"
At some point in his life, Wheeler had learned from someone that lemon juice could be used as an "invisible ink". If something was written on a piece of paper with lemon juice, you would see nothing – unless you heated the juice, which would make the scribbles visible. So, naturally, he covered his face in lemon juice and went off to rob a bank, confident that his identity would stay hidden from the cameras as long as he kept away from any source of heat.
Still, we should give the man some credit: Wheeler did not gamble blindly. He actually tested his theory by taking a selfie with a Polaroid camera (there is a scientist inside all of us). For one reason or another – perhaps because the film was defective, we do not know exactly why – the camera produced a blank image.
The news went around the world, everyone had a good laugh, and Mr. Wheeler was taken to jail. The police concluded that he was neither insane nor on drugs; he genuinely believed his plan would work. "During his interaction with the police, he was incredulous at how his ignorance had failed him," wrote Anupum Pant for Awesci.
David Dunning was working as a psychologist at Cornell University at the time, and the bizarre story caught his attention. With the help of Justin Kruger, one of his graduate students, he set out to understand how Mr. Wheeler could be so confident in a plan that was so plainly stupid. The theory they developed is that almost all of us rate our abilities in certain areas as above average, and that most people probably judge their own skills as far better than they objectively are – an "illusion of confidence" that underpins the Dunning-Kruger effect.
Let us all be clueless
"Mind the gap"… between how you see yourself and how you really are. Image via Pxfuel.
"The skills needed to produce a correct answer are exactly the skills needed to recognise what a correct answer is."
In the 1999 study (the first conducted on the topic), the pair asked Cornell students a series of questions on grammar, logic and humour (used to measure the students' actual abilities), and then asked each of them to estimate their overall score and how it would compare with the scores of the other participants. They found that the lowest-scoring students consistently and substantially overestimated their own abilities. Students in the bottom quartile (the lowest 25% by grade) thought, on average, that they had outperformed two-thirds of the other students (that is, that they ranked in the top 33% by score).
A related study by the same authors at a sport-shooting club showed similar results. Dunning and Kruger used a similar methodology, asking gun enthusiasts questions about firearm safety and having them estimate their own performance on the test. Those who answered the fewest questions correctly also wildly overestimated their command of firearms knowledge.
It is not specific to technical skills either; it affects every sphere of human existence alike. One study found that 80% of drivers rate themselves as above average, which is literally impossible, because that is not how averages work. We tend to judge our relative popularity the same way.
Nor is it limited to people with low or non-existent skills in a given subject – it operates on practically all of us. In their first study, Dunning and Kruger also found that students scoring in the top quartile (the highest 25%) routinely underestimated their own competence.
A fuller definition of the Dunning-Kruger effect would be that it is a bias in estimating our own ability that stems from our limited perspective. When we have a poor or non-existent understanding of a topic, we literally know too little to grasp how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where the people around them stand. But they also assume that if a task is clear and simple for them, it must be just as clear and simple for everyone else.
A person in the first group and a person in the second are equally likely to use their own experience as a baseline and to take it for granted that everyone else is close to that baseline. Both suffer an "illusion of confidence" – in one case, confidence in themselves; in the other, confidence in everyone else.
But maybe we are not all equally clueless
To err is human. To persist confidently in error is hilarious.
Dunning and Kruger did seem to find a way out of the effect they helped document. Although we all appear equally likely to delude ourselves, there is an important difference between those who are confident but incapable and those who are capable but lack confidence: how they handle and absorb feedback on their own behaviour.
Mr. Wheeler tried to verify his theory. Yet he looked at a blank Polaroid of a photo he had just taken – one of the clearest signs that something in his theory had gone wrong – and saw no cause for concern; the only explanation he accepted was that his plan worked. Later he received feedback from the police, but not even that could shake his certainty; he remained "incredulous at how his ignorance had failed him", even with absolute confirmation (sitting in jail) that it had.
In their research, Dunning and Kruger found that good students predicted their performance on future exams better when they were given accurate feedback on the score they had just achieved and on their relative ranking in the class. The worst performers would not adjust their expectations even after clear, repeated feedback that they were doing badly. They simply insisted that their assumptions were correct.
Joking aside, the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world. In fact, it serves as a caution against assuming we are always right, and it highlights the importance of keeping an open mind and a critical view of our own abilities.
But if you are afraid of being incompetent, check how feedback affects your view of your own work, knowledge and skills, and of how they compare with those of the people around you. If you really are incompetent, you will not change your mind and the whole process will basically be a waste of time – but don't worry: someone will tell you that you are incompetent.
So why can’t we stop such views from spreading? My opinion is that we have failed to understand their root causes, often assuming it is down to ignorance. But new research, published in my book, Knowledge Resistance: How We Avoid Insight from Others, shows that the capacity to ignore valid facts has most likely had adaptive value throughout human evolution. Therefore, this capacity is in our genes today. Ultimately, realising this is our best bet to tackle the problem.
So far, public intellectuals have made roughly two core arguments about our post-truth world. The physician Hans Rosling and the psychologist Steven Pinker argue it has come about due to deficits in facts and reasoned thinking – and can therefore be sufficiently tackled with education.
Meanwhile, Nobel Prize winner Richard Thaler and other behavioural economists have shown how the mere provision of more and better facts often leads already polarised groups to become even more polarised in their beliefs.
The conclusion of Thaler is that humans are deeply irrational, operating with harmful biases. The best way to tackle it is therefore nudging – tricking our irrational brains – for instance by changing measles vaccination from an opt-in to a less burdensome opt-out choice.
Such arguments have often resonated well with frustrated climate scientists, public health experts and agri-scientists (complaining about GMO-opposers). Still, their solutions clearly remain insufficient for dealing with a fact-resisting, polarised society.
Evolutionary pressures
In my comprehensive study, I interviewed numerous eminent academics at the University of Oxford, London School of Economics and King’s College London, about their views. They were experts on social, economic and evolutionary sciences. I analysed their comments in the context of the latest findings on topics ranging from the origin of humanity, climate change and vaccination to religion and gender differences.
It became evident that much of knowledge resistance is better understood as a manifestation of social rationality. Essentially, humans are social animals; fitting into a group is what’s most important to us. Often, objective knowledge-seeking can help strengthen group bonding – such as when you prepare a well-researched action plan for your colleagues at work.
But when knowledge and group bonding don’t converge, we often prioritise fitting in over pursuing the most valid knowledge. In one large experiment, it turned out that both liberals and conservatives actively avoided having conversations with people of the other side on issues of drug policy, the death penalty and gun ownership. This was the case even when they were offered a chance of winning money if they discussed the issues with the other group. Avoiding the insights from opposing groups helped people dodge having to criticise the views of their own community.
Similarly, if your community strongly opposes what an overwhelming part of science concludes about vaccination or climate change, you often unconsciously prioritise avoiding getting into conflicts about it.
This is further backed up by research showing that the climate deniers who score the highest on scientific literacy tests are more confident than the average denier that climate change isn’t happening – despite the evidence showing that it is. And those among the climate concerned who score the highest on the same tests are more confident than the average in that group that climate change is happening.
This logic of prioritising the means that get us accepted and secure in a group we respect runs deep. Those among the earliest humans who weren’t prepared to share the beliefs of their community ran the risk of being distrusted and even excluded.
And social exclusion posed an enormously increased threat to survival – it left the excluded vulnerable to being killed by other groups or animals, or to having no one to cooperate with. These early humans therefore had much lower chances of reproducing, so it seems fair to conclude that being prepared to resist knowledge and facts is an evolutionary, genetic adaptation of humans to the socially challenging life in hunter-gatherer societies.
Today, we are part of many groups and internet networks, to be sure, and can in some sense “shop around” for new alliances if our old groups don’t like us. Still, humanity today shares the same binary mindset and strong drive to avoid being socially excluded as our ancestors who only knew about a few groups. The groups we are part of also help shape our identity, which can make it hard to change groups. Individuals who change groups and opinions constantly may also be less trusted, even among their new peers.
In my research, I show how this matters when it comes to dealing with fact resistance. Ultimately, we need to take social aspects into account when communicating facts and arguments with various groups. This could be through using role models, new ways of framing problems, new rules and routines in our organisations and new types of scientific narratives that resonate with the intuitions and interests of more groups than our own.
There are no quick fixes, of course. But if climate change were reframed from the liberal/leftist moral perspective of the need for global fairness to conservative perspectives of respect for the authority of the fatherland, the sacredness of God’s creation and the individual’s right not to have their life project jeopardised by climate change, this might resonate better with conservatives.
If we take social factors into account, this would help us create new and more powerful ways to fight belief in conspiracy theories and fake news. I hope my approach will stimulate joint efforts of moving beyond disputes disguised as controversies over facts and into conversations about what often matters more deeply to us as social beings.
There is a second brain inside your belly. Getty Images/iStockphoto
You know that brain of yours up in your head? It is not as singular as we like to imagine, and it gets a great deal of help from a partner in controlling our emotions, our mood and our behaviour. That is because the human body has what many call a "second brain" – and in a rather special place: our belly.
The "second brain", as it is informally known, sits along the nine metres of your intestine and gathers millions of neurons. It is actually part of something with a slightly more complicated name: the enteric nervous system.
Inside our gut there are between 200 and 600 million neurons. Getty Images
Functions even the brain would doubt
One of the main reasons it is considered a brain is the large and complex network of neurons in this system. To give you an idea, there are between 200 million and 600 million neurons there, according to researchers at the University of Melbourne, in Australia, and they work in concert with the main brain.
"It is as if we had a cat's brain in our belly. It has 20 different types of neurons, the same diversity found in our big brain, which has 100 billion neurons."
Heribert Watzke, food scientist, in a TED talk
This brain's functions are many, and they occur autonomously and in integration with the big brain. It used to be thought that the larger brain sent signals down to command this other brain, but in fact it is the other way round: the brain in our gut sends signals up a great "highway" of neurons to the head, which can accept its suggestions or not.
"The upper brain can interfere with those signals, modifying or inhibiting them. There are hunger signals, which our empty stomach sends to the brain. There are signals telling us to stop eating when we are full. If the hunger signal is ignored, it can lead to the illness anorexia, for example. More common is to keep eating even after our stomach signals say 'OK, stop, we have transferred enough energy'," Watzke adds.
The number of neurons is startling, but it makes sense if we think about the hazards of eating. Like the skin, the gut has to stop potentially dangerous invaders of our organism, such as bacteria and viruses, on the spot.
This second brain can trigger diarrhoea or alert its "superior", which may decide to set off vomiting. It is teamwork, and of vital importance.
Far beyond digestion
Of course, one of its main functions has to do with our digestion and excretion – as if the bigger brain did not want to "get its hands dirty", right? It even controls muscle contractions, the release of chemical substances and the like. The second brain is not used for functions such as thought, religion, philosophy or poetry, but it is tied to our mood.
The enteric nervous system helps us "feel" our inner world and its contents. According to Scientific American, it is likely that a good share of our emotions are influenced by the neurons in our gut.
Ever heard the expression "butterflies in the stomach"? That sensation is one example of this, a response to psychological stress.
That is why some research is even attempting to treat depression by acting on the neurons of the gut. The enteric nervous system holds 95% of our serotonin (a substance known as one of those responsible for happiness). It may even play a role in autism.
There are also reports of other diseases that may be connected to this second brain. A 2010 study in Nature indicated that changes in how the system works could prevent osteoporosis.
Life in the gut
One of the main functions of the "second brain" is defending our body, since it is largely responsible for controlling our antibodies. A 2016 study supported by Fapesp showed how neurons communicate with the defence cells in the gut. There is even a "conversation" with microbes, since the nervous system helps dictate which of them may inhabit the intestine.
Research suggests the second brain's importance is truly enormous. In one study, newborn rats whose stomachs were exposed to an irritating chemical turned out to be more depressive and anxious than other rats, with the symptoms persisting long after the physical damage had healed. The same did not happen with other kinds of damage, such as skin irritation.
With all that in view, I am sure you will look at your viscera differently now, right? Think about it: the next time you are stressed or sad and reach for that rich comfort food, it may not be your head's fault alone.
People who empathize easily with others do not necessarily understand them well. To the contrary: Excessive empathy can even impair understanding as a new study conducted by psychologists from Würzburg and Leipzig has established.
Imagine your best friend tells you that his girlfriend has just proposed “staying friends.” Now you have to accomplish two things: Firstly, you have to grasp that this nice sounding proposition actually means that she wants to break up with him and secondly, you should feel with your friend and comfort him.
Whether empathy and understanding other people’s mental states (mentalising) — i.e. the ability to understand what others know, plan and want — are interrelated has recently been examined by the psychologists Anne Böckler, Philipp Kanske, Mathis Trautwein, Franca Parianen-Lesemann and Tania Singer.
Anne Böckler has been a junior professor at the University of Würzburg’s Institute of Psychology since October 2015. Previously, the post-doc had worked in the Department of Social Neurosciences at the Max Planck Institute of Human Cognitive and Brain Sciences in Leipzig where she conducted the study together with her co-workers. In the scientific journal Social Cognitive and Affective Neuroscience, the scientists present the results of their work.
“Successful social interaction is based on our ability to feel with others and to understand their thoughts and intentions,” Anne Böckler explains. She says that it had previously been unclear whether and to what extent these two skills are interrelated — that is, whether people who empathise easily with others are also capable of grasping their thoughts and intentions. According to the junior professor, the scientists also looked into the question of whether the neuronal networks responsible for these abilities interact.
Answers can be gleaned from the study conducted by Anne Böckler, Philipp Kanske and their colleagues at the Max Planck Institute in Leipzig within the scope of a large-scale study led by Tania Singer which included some 200 participants. The study enabled the scientists to prove that people who tend to be empathic do not necessarily understand other people well at a cognitive level. Hence, social skills seem to be based on multiple abilities that are rather independent of one another.
The study also delivered new insight as to how the different networks in the brain are orchestrated, revealing that networks crucial for empathy and cognitive perspective-taking interact with one another. In highly emotional moments — for example when somebody talks about the death of a close person — activation of the insula, which forms part of the empathy-relevant network, can have an inhibiting effect in some people on brain areas important for taking someone else’s perspective. And this in turn can cause excessive empathy to impair social understanding.
The participants in the study watched a number of video sequences in which the narrator was more or less emotional. Afterwards, they had to rate how they felt and how much compassion they felt for the person in the film. Then they had to answer questions about the video — for example, what the persons could have thought, known or intended. Having thus identified persons with a high level of empathy, the psychologists looked at how these individuals were represented among the participants who had performed well or poorly in the test of cognitive perspective-taking — and vice versa.
Using functional magnetic resonance imaging, the scientists observed which areas of the brain were active at what time.
The authors believe that the results of this study are important both for neuroscience and clinical applications. For example, they suggest that training aimed at improving social skills, the willingness to empathise and the ability to understand others at the cognitive level and take their perspective should be promoted selectively and separately of one another. The group in the Department of Social Neurosciences in Leipzig is currently working on exactly this topic within the scope of the ReSource project, namely how to specifically train different social skills.
Journal Reference:
Artyom Zinchenko, Philipp Kanske, Christian Obermeier, Erich Schröger, Sonja A. Kotz. Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict. Social Cognitive and Affective Neuroscience, 2015; 10 (11): 1577 DOI: 10.1093/scan/nsv050
Autism changed Henry Markram’s family. Now his Intense World theory could transform our understanding of the condition.
SOMETHING WAS WRONG with Kai Markram. At five days old, he seemed like an unusually alert baby, picking his head up and looking around long before his sisters had done. By the time he could walk, he was always in motion and required constant attention just to ensure his safety.
“He was super active, batteries running nonstop,” says his sister, Kali. And it wasn’t just boyish energy: When his parents tried to set limits, there were tantrums—not just the usual kicking and screaming, but biting and spitting, with a disproportionate and uncontrollable ferocity; and not just at age two, but at three, four, five and beyond. Kai was also socially odd: Sometimes he was withdrawn, but at other times he would dash up to strangers and hug them.
Things only got more bizarre over time. No one in the Markram family can forget the 1999 trip to India, when they joined a crowd gathered around a snake charmer. Without warning, Kai, who was five at the time, darted out and tapped the deadly cobra on its head.
Coping with such a child would be difficult for any parent, but it was especially frustrating for his father, one of the world’s leading neuroscientists. Henry Markram is the man behind Europe’s $1.3 billion Human Brain Project, a gargantuan research endeavor to build a supercomputer model of the brain. Markram knows as much about the inner workings of our brains as anyone on the planet, yet he felt powerless to tackle Kai’s problems.
“As a father and a neuroscientist, you realize that you just don’t know what to do,” he says. In fact, Kai’s behavior—which was eventually diagnosed as autism—has transformed his father’s career, and helped him build a radical new theory of autism: one that upends the conventional wisdom. And, ironically, his sideline may pay off long before his brain model is even completed.
IMAGINE BEING BORN into a world of bewildering, inescapable sensory overload, like a visitor from a much darker, calmer, quieter planet. Your mother’s eyes: a strobe light. Your father’s voice: a growling jackhammer. That cute little onesie everyone thinks is so soft? Sandpaper with diamond grit. And what about all that cooing and affection? A barrage of chaotic, indecipherable input, a cacophony of raw, unfilterable data.
Just to survive, you’d need to be excellent at detecting any pattern you could find in the frightful and oppressive noise. To stay sane, you’d have to control as much as possible, developing a rigid focus on detail, routine and repetition. Systems in which specific inputs produce predictable outputs would be far more attractive than human beings, with their mystifying and inconsistent demands and their haphazard behavior.
This, Markram and his wife, Kamila, argue, is what it’s like to be autistic.
They call it the “intense world” syndrome.
The behavior that results is not due to cognitive deficits—the prevailing view in autism research circles today—but the opposite, they say. Rather than being oblivious, autistic people take in too much and learn too fast. While they may appear bereft of emotion, the Markrams insist they are actually overwhelmed not only by their own emotions, but by the emotions of others.
Consequently, the brain architecture of autism is not just defined by its weaknesses, but also by its inherent strengths. The developmental disorder now believed to affect around 1 percent of the population is not characterized by lack of empathy, the Markrams claim. Social difficulties and odd behavior result from trying to cope with a world that’s just too much.
After years of research, the couple came up with their label for the theory during a visit to the remote area where Henry Markram was born, in the South African part of the Kalahari desert. He says “intense world” was Kamila’s phrase; she says she can’t recall who hit upon it. But he remembers sitting in the rust-colored dunes, watching the unusual swaying yellow grasses while contemplating what it must be like to be inescapably flooded by sensation and emotion.
That, he thought, is what Kai experiences. The more he investigated the idea of autism not as a deficit of memory, emotion and sensation, but an excess, the more he realized how much he himself had in common with his seemingly alien son.
HENRY MARKRAM IS TALL, with intense blue eyes, sandy hair and the air of unmistakable authority that goes with the job of running a large, ambitious, well-funded research project. It’s hard to see what he might have in common with a troubled, autistic child. He rises most days at 4 a.m. and works for a few hours in his family’s spacious apartment in Lausanne before heading to the institute, where the Human Brain Project is based. “He sleeps about four or five hours,” says Kamila. “That’s perfect for him.”
As a small child, Markram says, he “wanted to know everything.” But his first few years of high school were mostly spent “at the bottom of the F class.” A Latin teacher inspired him to pay more attention to his studies, and when a beloved uncle became profoundly depressed and died young—he was only in his 30s, but “just went downhill and gave up”—Markram turned a corner. He’d recently been given an assignment about brain chemistry, which got him thinking. “If chemicals and the structure of the brain can change and then I change, who am I? It’s a profound question. So I went to medical school and wanted to become a psychiatrist.”
Markram attended the University of Cape Town, but in his fourth year of medical school, he took a fellowship in Israel. “It was like heaven,” he says, “It was all the toys that I ever could dream of to investigate the brain.” He never returned to med school, and married his first wife, Anat, an Israeli, when he was 26. Soon, they had their first daughter, Linoy, now 24, then a second, Kali, now 23. Kai came four years afterwards.
During graduate research at the Weizmann Institute in Israel, Markram made his first important discovery, elucidating a key relationship between two neurotransmitters involved in learning, acetylcholine and glutamate. The work was important and impressive—especially so early in a scientist’s career—but it was what he did next that really made his name.
During a postdoc with Nobel laureate Bert Sakmann at Germany’s Max Planck Institute, Markram showed how brain cells “fire together, wire together.” That had been a basic tenet of neuroscience since the 1940s—but no one had been able to figure out how the process actually worked.
By studying the precise timing of electrical signaling between neurons, Markram demonstrated that firing in specific patterns increases the strength of the synapses linking cells, while missing the beat weakens them. This simple mechanism allows the brain to learn, forging connections both literally and figuratively between various experiences and sensations—and between cause and effect.
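The timing rule Markram demonstrated is nowadays usually formalized as spike-timing-dependent plasticity (STDP). As a rough illustration of the mechanism described above, and not of Markram’s actual experimental protocol, here is a minimal Python sketch of the standard pairwise STDP rule; the parameter values and the function name are illustrative assumptions.

```python
import math

# Illustrative STDP parameters (assumptions, not Markram's published values)
A_PLUS, A_MINUS = 0.01, 0.012     # maximum weight change per spike pair
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants, in milliseconds

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms).

    A presynaptic spike arriving just before the postsynaptic one
    strengthens the synapse ("fire together, wire together"); one
    arriving just after it weakens the synapse ("missing the beat").
    """
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# A spike 5 ms before the postsynaptic spike strengthens the synapse;
# the same spike 5 ms after it weakens the synapse.
print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # ~ +0.008
print(stdp_delta_w(t_pre=15.0, t_post=10.0))  # ~ -0.009
```

The exponential decay is what makes timing matter: spike pairs a few milliseconds apart change the synapse substantially, while pairs separated by tens of milliseconds barely register.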
Measuring these fine temporal distinctions was also a technical triumph. Sakmann won his 1991 Nobel for developing the required “patch clamp” technique, which measures the tiny changes in electrical activity inside nerve cells. To patch just one neuron, you first harvest a sliver of brain, about 1/3 of a millimeter thick and containing around 6 million neurons, typically from a freshly guillotined rat.
To keep the tissue alive, you bubble it in oxygen, and bathe the slice of brain in a laboratory substitute for cerebrospinal fluid. Under a microscope, using a minuscule glass pipette, you carefully pierce a single cell. The technique is similar to injecting a sperm into an egg for in vitro fertilization—except that neurons are hundreds of times smaller than eggs.
It requires steady hands and exquisite attention to detail. Markram’s ultimate innovation was to build a machine that could study 12 such carefully prepared cells simultaneously, measuring their electrical and chemical interactions. Researchers who have done it say you can sometimes go a whole day without getting one right—but Markram became a master.
Still, there was a problem. He seemed to go from one career peak to another—a Fulbright at the National Institutes of Health, tenure at Weizmann, publication in the most prestigious journals—but at the same time it was becoming clear that something was not right in his youngest child’s head. He studied the brain all day, but couldn’t figure out how to help Kai learn and cope. As he told a New York Times reporter earlier this year, “You know how powerless you feel. You have this child with autism and you, even as a neuroscientist, really don’t know what to do.”
AT FIRST, MARKRAM THOUGHT Kai had attention deficit/ hyperactivity disorder (ADHD): Once Kai could move, he never wanted to be still. “He was running around, very difficult to control,” Markram says. As Kai grew, however, he began melting down frequently, often for no apparent reason. “He became more particular, and he started to become less hyperactive but more behaviorally difficult,” Markram says. “Situations were very unpredictable. He would have tantrums. He would be very resistant to learning and to any kind of instruction.”
Preventing Kai from harming himself by running into the street or following other capricious impulses was a constant challenge. Even just trying to go to the movies became an ordeal: Kai would refuse to enter the cinema or hold his hands tightly over his ears.
However, Kai also loved to hug people, even strangers, which is one reason it took years to get a diagnosis. That warmth made many experts rule out autism. Only after multiple evaluations was Kai finally diagnosed with Asperger syndrome, a type of autism that includes social difficulties and repetitive behaviors, but not lack of speech or profound intellectual disability.
“We went all over the world and had him tested, and everybody had a different interpretation,” Markram says. As a scientist who prizes rigor, this infuriated him. He’d left medical school to pursue neuroscience because he disliked psychiatry’s vagueness. “I was very disappointed in how psychiatry operates,” he says.
Over time, trying to understand Kai became Markram’s obsession.
It drove what he calls his “impatience” to model the brain: He felt neuroscience was too piecemeal and could not progress without bringing more data together. “I wasn’t satisfied with understanding fragments of things in the brain; we have to understand everything,” he says. “Every molecule, every gene, every cell. You can’t leave anything out.”
This impatience also made him decide to study autism, beginning by reading every study and book he could get his hands on. At the time, in the 1990s, the condition was getting increased attention. The diagnosis had only been introduced into the psychiatric bible, then the DSM-III, in 1980. The 1988 Dustin Hoffman film Rain Man, about an autistic savant, brought the idea that autism was both a disability and a source of quirky intelligence into the popular imagination.
The dark days of the mid–20th century, when autism was thought to be caused by unloving “refrigerator mothers” who icily rejected their infants, were long past. However, while experts now agree that the condition is neurological, its causes remain unknown.
The most prominent theory suggests that autism results from problems with the brain’s social regions, which results in a deficit of empathy. This “theory of mind” concept was developed by Uta Frith, Alan Leslie, and Simon Baron-Cohen in the 1980s. They found that autistic children are late to develop the ability to distinguish between what they know themselves and what others know—something that other children learn early on.
In a now famous experiment, children watched two puppets, “Sally” and “Anne.” Sally has a marble, which she places in a basket and then leaves. While she’s gone, Anne moves Sally’s marble into a box. By age four or five, normal children can predict that Sally will look for the marble in the basket first because she doesn’t know that Anne moved it. But until they are much older, most autistic children say that Sally will look in the box because they know it’s there. While typical children automatically adopt Sally’s point of view and know she was out of the room when Anne hid the marble, autistic children have much more difficulty thinking this way.
The researchers linked this “mind blindness”—a failure of perspective-taking—to their observation that autistic children don’t engage in make-believe. Instead of pretending together, autistic children focus on objects or systems—spinning tops, arranging blocks, memorizing symbols, or becoming obsessively involved with mechanical items like trains and computers.
This apparent social indifference was viewed as central to the condition. Unfortunately, the theory also seemed to imply that autistic people are uncaring because they don’t easily recognize that other people exist as intentional agents who can be loved, thwarted or hurt. But while the Sally-Anne experiment shows that autistic people have difficulty knowing that other people have different perspectives—what researchers call cognitive empathy or “theory of mind”—it doesn’t show that they don’t care when someone is hurt or feeling pain, whether emotional or physical. In terms of caring—technically called affective empathy—autistic people aren’t necessarily impaired.
Sadly, however, the two different kinds of empathy are combined in one English word. And so, since the 1980s, this idea that autistic people “lack empathy” has taken hold.
“When we looked at the autism field we couldn’t believe it,” Markram says. “Everybody was looking at it as if they have no empathy, no theory of mind. And actually Kai, as awkward as he was, saw through you. He had a much deeper understanding of what really was your intention.” And he wanted social contact.
The obvious thought was: Maybe Kai’s not really autistic? But by the time Markram was fully up to speed in the literature, he was convinced that Kai had been correctly diagnosed. He’d learned enough to know that the rest of his son’s behavior was too classically autistic to be dismissed as a misdiagnosis, and there was no alternative condition that explained as much of his behavior and tendencies. And accounts by unquestionably autistic people, like bestselling memoirist and animal scientist Temple Grandin, raised similar challenges to the notion that autistic people could never really see beyond themselves.
Markram began to do autism work himself as visiting professor at the University of California, San Francisco in 1999. Colleague Michael Merzenich, a neuroscientist, proposed that autism is caused by an imbalance between inhibitory and excitatory neurons. A failure of inhibitions that tamp down impulsive actions might explain behavior like Kai’s sudden move to pat the cobra. Markram started his research there.
MARKRAM MET HIS second wife, Kamila Senderek, at a neuroscience conference in Austria in 2000. He was already separated from Anat. “It was love at first sight,” Kamila says.
Her parents left communist Poland for West Germany when she was five. When she met Markram, she was pursuing a master’s in neuroscience at the Max Planck Institute. When Markram moved to Lausanne to start the Human Brain Project, she began studying there as well.
Tall like her husband, with straight blonde hair and green eyes, Kamila wears a navy twinset and jeans when we meet in her open-plan office overlooking Lake Geneva. There, in addition to autism research, she runs the world’s fourth largest open-access scientific publishing firm, Frontiers, with a network of over 35,000 scientists serving as editors and reviewers. She laughs when I observe a lizard tattoo on her ankle, a remnant of an adolescent infatuation with The Doors.
When asked whether she had ever worried about marrying a man whose child had severe behavioral problems, she responds as though the question never occurred to her. “I knew about the challenges with Kai,” she says, “Back then, he was quite impulsive and very difficult to steer.”
The first time they spent a day together, Kai was seven or eight. “I probably had some blue marks and bites on my arms because he was really quite something. He would just go off and do something dangerous, so obviously you would have to get in rescue mode,” she says, noting that he’d sometimes walk directly into traffic. “It was difficult to manage the behavior,” she shrugs, “But if you were nice with him then he was usually nice with you as well.”
“Kamila was amazing with Kai,” says Markram, “She was much more systematic and could lay out clear rules. She helped him a lot. We never had that thing that you see in the movies where they don’t like their stepmom.”
At the Swiss Federal Institute of Technology in Lausanne (EPFL), the couple soon began collaborating on autism research. “Kamila and I spoke about it a lot,” Markram says, adding that they were both “frustrated” by the state of the science and at not being able to help more. Their now-shared parental interest fused with their scientific drives.
They started by studying the brain at the circuitry level. Markram assigned a graduate student, Tania Rinaldi Barkat, to look for the best animal model, since such research cannot be done on humans.
Barkat happened to drop by Kamila’s office while I was there, a decade after she had moved on to other research. She greeted her former colleagues enthusiastically.
She started her graduate work with the Markrams by searching the literature for prospective animal models. They agreed that the one most like human autism involved rats prenatally exposed to an epilepsy drug called valproic acid (VPA; brand name, Depakote). Like other “autistic” rats, VPA rats show aberrant social behavior and increased repetitive behaviors like excessive self-grooming.
But more significant is that when pregnant women take high doses of VPA, which is sometimes necessary for seizure control, studies have found that the risk of autism in their children increases sevenfold. One 2005 study found that close to 9 percent of these children have autism.
Because VPA has a link to human autism, it seemed plausible that its cellular effects in animals would be similar. A neuroscientist who has studied VPA rats once told me, “I see it not as a model, but as a recapitulation of the disease in other species.”
Barkat got to work. Earlier research showed that the timing and dose of exposure was critical: Different timing could produce opposite symptoms, and large doses sometimes caused physical deformities. The “best” time to cause autistic symptoms in rats is embryonic day 12, so that’s when Barkat dosed them.
At first, the work was exasperating. For two years, Barkat studied inhibitory neurons from the VPA rat cortex, using the same laborious patch-clamping technique perfected by Markram years earlier. If these cells were less active, that would confirm the imbalance that Merzenich had theorized.
She went through the repetitious preparation, making delicate patches to study inhibitory networks. But after two years of this technically demanding, sometimes tedious, and time-consuming work, Barkat had nothing to show for it.
“I just found no difference at all,” she told me, “It looked completely normal.” She continued to patch cell after cell, going through the exacting procedure endlessly—but still saw no abnormalities. At least she was becoming proficient at the technique, she told herself.
Markram was ready to give up, but Barkat demurred, saying she would like to shift her focus from inhibitory to excitatory VPA cell networks. It was there that she struck gold.
“There was a difference in the excitability of the whole network,” she says, reliving her enthusiasm. The networked VPA cells responded nearly twice as strongly as normal—and they were hyper-connected. If a normal cell had connections to ten other cells, a VPA cell connected with twenty. Nor were they under-responsive. Instead, they were hyperactive, which isn’t necessarily a defect: A more responsive, better-connected network learns faster.
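To make the “a more responsive, better-connected network learns faster” point concrete, here is a toy simulation, purely illustrative and not drawn from Barkat’s experiments: the same simple Hebbian update is applied to a “normal” unit with ten inputs and to a “VPA-like” unit with twice the inputs and twice the response gain, counting how many updates each needs to reach the same total synaptic strength.

```python
import random

random.seed(1)  # reproducible toy run

def steps_to_learn(n_connections: int, gain: float, threshold: float = 5.0) -> int:
    """Count Hebbian updates until total synaptic weight crosses `threshold`."""
    weights = [0.1] * n_connections
    steps = 0
    while sum(weights) < threshold:
        pre = [random.random() for _ in range(n_connections)]   # presynaptic activity
        post = gain * sum(p * w for p, w in zip(pre, weights))  # postsynaptic response
        # Hebbian rule: each weight grows with correlated pre/post activity
        weights = [w + 0.01 * p * post for w, p in zip(weights, pre)]
        steps += 1
    return steps

print("normal cell:  ", steps_to_learn(n_connections=10, gain=1.0))  # many updates
print("VPA-like cell:", steps_to_learn(n_connections=20, gain=2.0))  # far fewer
```

Both the extra connections and the doubled response feed the Hebbian update, so the hyper-connected, hyper-reactive unit crosses the threshold in a fraction of the steps. The sketch shows only that, under a correlation-driven rule, “more responsive and better connected” mechanically translates into “learns faster.”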
But what did this mean for autistic people? While Barkat was investigating the cortex, Kamila Markram had been observing the rats’ behavior, noting high levels of anxiety as compared to normal rats. “It was pretty much a gold mine then,” Markram says. The difference was striking. “You could basically see it with the eye. The VPAs were different and they behaved differently,” Markram says. They were quicker to get frightened, and faster at learning what to fear, but slower to discover that a once-threatening situation was now safe.
While ordinary rats get scared of an electrified grid where they are shocked when a particular tone sounds, VPA rats come to fear not just that tone, but the whole grid and everything connected with it—like colors, smells, and other clearly distinguishable beeps.
“The fear conditioning was really hugely amplified,” Markram says. “We then looked at the cell response in the amygdala and again they were hyper-reactive, so it made a beautiful story.”
THE MARKRAMS RECOGNIZED the significance of their results. Hyper-responsive sensory, memory and emotional systems might explain both autistic talents and autistic handicaps, they realized. After all, the problem with VPA rats isn’t that they can’t learn—it’s that they learn too quickly, with too much fear, and irreversibly.
They thought back to Kai’s experiences: how he used to cover his ears and resist going to the movies, hating the loud sounds; his limited diet and apparent terror of trying new foods.
“He remembers exactly where he sat at exactly what restaurant one time when he tried for hours to get himself to eat a salad,” Kamila says, recalling that she’d promised him something he’d really wanted if he did so. Still, he couldn’t make himself try even the smallest piece of lettuce. That was clearly overgeneralization of fear.
The Markrams reconsidered Kai’s meltdowns, too, wondering if they’d been prompted by overwhelming experiences. They saw that identifying Kai’s specific sensitivities preemptively might prevent tantrums by allowing him to leave upsetting situations or by mitigating his distress before it became intolerable. The idea of an intense world had immediate practical implications.
The amygdala.
The VPA data also suggested that autism isn’t limited to a single brain network. In VPA rat brains, both the amygdala and the cortex had proved hyper-responsive to external stimuli. So maybe, the Markrams decided, autistic social difficulties aren’t caused by social-processing defects; perhaps they are the result of total information overload.
CONSIDER WHAT IT MIGHT FEEL like to be a baby in a world of relentless and unpredictable sensation. An overwhelmed infant might, not surprisingly, attempt to escape. Kamila compares it to being sleepless, jetlagged, and hung over, all at once. “If you don’t sleep for a night or two, everything hurts. The lights hurt. The noises hurt. You withdraw,” she says.
Unlike adults, however, babies can’t flee. All they can do is cry and rock, and, later, try to avoid touch, eye contact, and other powerful experiences. Autistic children might revel in patterns and predictability just to make sense of the chaos.
At the same time, if infants withdraw to try to cope, they will miss what’s known as a “sensitive period”—a developmental phase when the brain is particularly responsive to, and rapidly assimilates, certain kinds of external stimulation. That can cause lifelong problems.
Language learning is a classic example: If babies aren’t exposed to speech during their first three years, their verbal abilities can be permanently stunted. Historically, this created a spurious link between deafness and intellectual disability: Before deaf babies were taught sign language at a young age, they would often have lasting language deficits. Their problem wasn’t defective “language areas,” though—it was that they had been denied linguistic stimuli at a critical time. (Incidentally, the same phenomenon accounts for why learning a second language is easy for small children and hard for virtually everyone else.)
This has profound implications for autism. If autistic babies tune out when overwhelmed, their social and language difficulties may arise not from damaged brain regions, but because critical data is drowned out by noise or missed due to attempts to escape at a time when the brain actually needs this input.
The intense world could also account for the tragic similarities between autistic children and abused and neglected infants. Severely maltreated children often rock, avoid eye contact, and have social problems—just like autistic children. These parallels led to decades of blaming the parents of autistic children, including the infamous “refrigerator mother.” But if those behaviors are coping mechanisms, autistic people might engage in them not because of maltreatment, but because ordinary experience is overwhelming or even traumatic.
The Markrams teased out further implications: Social problems may not be a defining or even fixed feature of autism. Early intervention to reduce or moderate the intensity of an autistic child’s environment might allow their talents to be protected while their autism-related disabilities are mitigated or, possibly, avoided.
The VPA model also captures other paradoxical autistic traits. For example, while oversensitivities are most common, autistic people are also frequently under-reactive to pain. The same is true of VPA rats. In addition, one of the most consistent findings in autism is abnormal brain growth, particularly in the cortex. There, studies find an excess of circuits called mini-columns, which can be seen as the brain’s microprocessors. VPA rats also exhibit this excess.
Moreover, extra minicolumns have been found in autopsies of scientists who were not known to be autistic, suggesting that this brain organization can appear without social problems and alongside exceptional intelligence.
Like a high-performance engine, the autistic brain may only work properly under specific conditions. But under those conditions, such machines can vastly outperform others—like a Ferrari compared to a Ford.
THE MARKRAMS’ FIRST PUBLICATION of their intense world research appeared in 2007: a paper on the VPA rat in the Proceedings of the National Academy of Sciences. This was followed by an overview in Frontiers in Neuroscience. The next year, at the Society for Neuroscience (SFN), the field’s biggest meeting, a symposium was held on the topic. In 2010, they updated and expanded their ideas in a second Frontiers paper.
Since then, more than three dozen papers have been published by other groups on VPA rodents, replicating and extending the Markrams’ findings. At this year’s SFN, at least five new studies were presented on VPA autism models. The sensory aspects of autism have long been neglected, but the intense world theory and the VPA rats are bringing them to the fore.
Nevertheless, reaction from colleagues in the field has been cautious. One exception is Laurent Mottron, professor of psychiatry and head of autism research at the University of Montreal. He was the first to highlight perceptual differences as critical in autism—even before the Markrams. Only a minority of researchers even studied sensory issues before him. Almost everyone else focused on social problems.
But when Mottron first proposed that autism is linked with what he calls “enhanced perceptual functioning,” he, like most experts, viewed this as the consequence of a deficit. The idea was that the apparently superior perception exhibited by some autistic people is caused by problems with higher level brain functioning—and it had historically been dismissed as mere “splinter skills,” not a sign of genuine intelligence. Autistic savants had earlier been known as “idiot savants,” the implication being that, unlike “real” geniuses, they didn’t have any creative control of their exceptional minds. Mottron described it this way in a review paper: “[A]utistics were not displaying atypical perceptual strengths but a failure to form global or high level representations.”
However, Mottron’s research led him to see this view as incorrect. His own and other studies showed superior performance by autistic people not only in “low level” sensory tasks, like better detection of musical pitch and greater ability to perceive certain visual information, but also in cognitive tasks like pattern finding in visual IQ tests.
In fact, it has long been clear that detecting and manipulating complex systems is an autistic strength—so much so that the autistic genius has become a Silicon Valley stereotype. In May, for example, the German software firm SAP announced plans to hire 650 autistic people because of their exceptional abilities. Mathematics, musical virtuosity, and scientific achievement all require understanding and playing with systems, patterns, and structure. Both autistic people and their family members are over-represented in these fields, which suggests genetic influences.
“Our points of view are in different areas [of research,] but we arrive at ideas that are really consistent,” says Mottron of the Markrams and their intense world theory. (He also notes that while they study cell physiology, he images actual human brains.)
Because Henry Markram came from outside the field and has an autistic son, Mottron adds, “He could have an original point of view and not be influenced by all the clichés,” particularly those that saw talents as defects. “I’m very much in sympathy with what they do,” he says, although he is not convinced that they have proven all the details.
Mottron’s support is unsurprising, of course, because the intense world dovetails with his own findings. But even one of the creators of the “theory of mind” concept finds much of it plausible.
Simon Baron-Cohen, who directs the Autism Research Centre at Cambridge University, told me, “I am open to the idea that the social deficits in autism—like problems with the cognitive aspects of empathy, which is also known as ‘theory of mind’—may be upstream from a more basic sensory abnormality.” In other words, the Markrams’ physiological model could be the cause, and the social deficits he studies, the effect. He adds that the VPA rat is an “interesting” model. However, he also notes that most autism is not caused by VPA and that it’s possible that sensory and social defects co-occur, rather than one causing the other.
His collaborator, Uta Frith, professor of cognitive development at University College London, is not convinced. “It just doesn’t do it for me,” she says of the intense world theory. “I don’t want to say it’s rubbish,” she says, “but I think they try to explain too much.”
AMONG AFFECTED FAMILIES, by contrast, the response has often been rapturous. “There are elements of the intense world theory that better match up with autistic experience than most of the previously discussed theories,” says Ari Ne’eman, president of the Autistic Self Advocacy Network, “The fact that there’s more emphasis on sensory issues is very true to life.” Ne’eman and other autistic people fought to get sensory problems added to the diagnosis in DSM-5 — the first time the symptoms have been so recognized, and another sign of the growing receptiveness to theories like intense world.
Steve Silberman, who is writing a history of autism titled NeuroTribes: Thinking Smarter About People Who Think Differently, says, “We had 70 years of autism research [based] on the notion that autistic people have brain deficits. Instead, the intense world postulates that autistic people feel too much and sense too much. That’s valuable, because I think the deficit model did tremendous injury to autistic people and their families, and also misled science.”
Priscilla Gilman, the mother of an autistic child, is also enthusiastic. Her memoir, The Anti-Romantic Child, describes her son’s diagnostic odyssey. Before Benjamin was in preschool, Gilman took him to the Yale Child Study Center for a full evaluation. At the time, he did not display any classic signs of autism, but he did seem to be a candidate for hyperlexia—at age two-and-a-half, he could read aloud from his mother’s doctoral dissertation with perfect intonation and fluency. Like other autistic talents, hyperlexia is often dismissed as a “splinter” strength.
At that time, Yale experts ruled autism out, telling Gilman that Benjamin “is not a candidate because he is too ‘warm’ and too ‘related,’” she recalls. Kai Markram’s hugs had similarly been seen as disqualifying. At twelve years of age, however, Benjamin was officially diagnosed with Autism Spectrum Disorder.
According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.
Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.
“The overwhelmingness of understanding how people feel can lead to either what is perceived as inappropriate emotional response, or to what is perceived as shutting down, which people see as lack of empathy,” says Emily Willingham. Willingham is a biologist and the mother of an autistic child; she also suspects that she herself has Asperger syndrome. But rather than being unemotional, she says, autistic people are “taking it all in like a tsunami of emotion that they feel on behalf of others. Going internal is protective.”
At least one study supports this idea, showing that while autistic people score lower on cognitive tests of perspective-taking—recall Anne, Sally, and the missing marble—they are more affected than typical folks by other people’s feelings. “I have three children, and my autistic child is my most empathetic,” Priscilla Gilman says, adding that when her mother first read about the intense world, she said, “This explains Benjamin.”
Benjamin’s hypersensitivities are also clearly linked to his superior perception. “He’ll sometimes say, ‘Mommy, you’re speaking in the key of D, could you please speak in the key of C? It’s easier for me to understand you and pay attention.’”
Because he has musical training and a high IQ, Benjamin can use his own sense of “absolute pitch”—the ability to name a note without hearing another for comparison—to define the problem he’s having. But many autistic people can’t verbalize their needs like this. Kai, too, is highly sensitive to vocal intonation, preferring his favorite teacher because, he explains, she “speaks soft,” even when she’s displeased. But even at 19, he isn’t able to articulate the specifics any better than that.
ON A RECENT VISIT to Lausanne, Kai wears a sky blue hoodie, his gray Chuck Taylor–style sneakers carefully unlaced at the top. “My rapper sneakers,” he says, smiling. He speaks Hebrew and English and lives with his mother in Israel, attending a school for people with learning disabilities near Rehovot. His manner is unselfconscious, though sometimes he scowls abruptly without explanation. But when he speaks, it is obvious that he wants to connect, even when he can’t answer a question. Asked if he thinks he sees things differently than others do, he says, “I feel them different.”
He waits in the Markrams’ living room as they prepare to take him out for dinner. Henry’s aunt and uncle are here, too. They’ve been living with the family to help care for its newest additions: nine-month-old Charlotte and Olivia, who is one-and-a-half years old.
“It’s our big patchwork family,” says Kamila, noting that when they visit Israel, they typically stay with Henry’s ex-wife’s family, and that she stays with them in Lausanne. They all travel constantly, which has created a few problems now and then. None of them will ever forget a tantrum Kai had when he was younger, which got him barred from a KLM flight. A delay upset him so much that he kicked, screamed, and spat.
Now, however, he rarely melts down. A combination of family and school support, an antipsychotic medication that he’s been taking recently, and increased understanding of his sensitivities has mitigated the disabilities Kai associated with his autism.
“I was a bad boy. I always was hitting and doing a lot of trouble,” Kai says of his past. “I was really bad because I didn’t know what to do. But I grew up.” His relatives nod in agreement. Kai has made tremendous strides, though his parents still think that his brain has far greater capacity than is evident in his speech and schoolwork.
As the Markrams see it, if autism results from a hyper-responsive brain, the most sensitive brains are actually the most likely to be disabled by our intense world. But if autistic people can learn to filter the blizzard of data, especially early in life, then those most vulnerable to the most severe autism might prove to be the most gifted of all.
Markram sees this in Kai. “It’s not a mental retardation,” he says, “He’s handicapped, absolutely, but something is going crazy in his brain. It’s a hyper disorder. It’s like he’s got an amplification of many of my quirks.”
One of these involves an insistence on timeliness. “If I say that something has to happen,” he says, “I can become quite difficult. It has to happen at that time.”
He adds, “For me it’s an asset, because it means that I deliver. If I say I’ll do something, I do it.” For Kai, however, anticipation and planning run wild. When he travels, he obsesses about every move, over and over, long in advance. “He will sit there and plan, okay, when he’s going to get up. He will execute. You know he will get on that plane come hell or high water,” Markram says. “But he actually loses the entire day. It’s like an extreme version of my quirks, where for me they are an asset and for him they become a handicap.”
If this is true, autistic people have incredible unrealized potential. If Kai’s brain is even more finely tuned than his father’s, it might give him the capacity to be even more brilliant. Consider Markram’s visual skills. Like Temple Grandin, whose first autism memoir was titled Thinking In Pictures, he has stunning visual abilities. “I see what I think,” he says, adding that when he considers a scientific or mathematical problem, “I can see how things are supposed to look. If it’s not there, I can actually simulate it forward in time.”
At the offices of Markram’s Human Brain Project, visitors are given a taste of what it might feel like to inhabit such a mind. In a small screening room furnished with sapphire-colored, tulip-shaped chairs, I’m handed 3-D glasses. The instant the lights dim, I’m zooming through a brightly colored forest of neurons so detailed and thick that they appear to be velvety, inviting to the touch.
The simulation feels so real and enveloping that it is hard to pay attention to the narration, which includes mind-blowing facts about the project. But it is also dizzying, overwhelming. If this is just a smidgen of what ordinary life is like for Kai, it’s easier to see how hard his early life must have been. That’s the paradox about autism and empathy. The problem may not be that autistic people can’t understand typical people’s points of view—but that typical people can’t imagine autism.
Critics of the intense world theory are dismayed and put off by this idea of hidden talent in the most severely disabled. They see it as wishful thinking, offering false hope to parents who want to see their children in the best light and to autistic people who want to fight the stigma of autism. In some types of autism, they say, intellectual disability is just that.
“The maxim is, ‘If you’ve seen one person with autism, you’ve seen one person with autism,’” says Matthew Belmonte, an autism researcher affiliated with the Groden Center in Rhode Island. The assumption should be that autistic people have intelligence that may not be easily testable, he says, but it can still be highly variable.
He adds, “Biologically, autism is not a unitary condition. Asking at the biological level ‘What causes autism?’ makes about as much sense as asking a mechanic ‘Why does my car not start?’ There are many possible reasons.” Belmonte believes that the intense world may account for some forms of autism, but not others.
Kamila, however, insists that the data suggests that the most disabled are also the most gifted. “If you look from the physiological or connectivity point of view, those brains are the most amplified.”
The question, then, is how to unleash that potential.
“I hope we give hope to others,” she says, while acknowledging that intense-world adherents don’t yet know how or even if the right early intervention can reduce disability.
The secret-ability idea also worries autistic leaders like Ne’eman, who fear that it contains the seeds of a different stigma. “We agree that autistic people do have a number of cognitive advantages and it’s valuable to do research on that,” he says. But, he stresses, “People have worth regardless of whether they have special abilities. If society accepts us only because we can do cool things every so often, we’re not exactly accepted.”
THE MARKRAMS ARE NOW EXPLORING whether providing a calm, predictable early environment—one aimed at reducing overload and surprise—can help VPA rats, soothing social difficulties while nurturing enhanced learning. New research suggests that autism can be detected in two-month-old babies, so the treatment implications are tantalizing.
So far, Kamila says, the data looks promising. Unexpected novelty seems to make the rats worse—while the patterned, repetitive, and safe introduction of new material seems to cause improvement.
In humans, the idea would be to keep the brain’s circuitry calm when it is most vulnerable, during those critical periods in infancy and toddlerhood. “With this intensity, the circuits are going to lock down and become rigid,” says Markram. “You want to avoid that, because to undo it is very difficult.”
For autistic children, intervening early might mean improvements in learning language and socializing. While it’s already clear that early interventions can reduce autistic disability, they typically don’t integrate intense-world insights. The behavioral approach that is most popular—Applied Behavior Analysis—rewards compliance with “normal” behavior, rather than seeking to understand what drives autistic actions and attacking the disabilities at their inception.
Research shows, in fact, that everyone learns best when receiving just the right dose of challenge—not so little that they’re bored, not so much that they’re overwhelmed; not in the comfort zone, and not in the panic zone, either. That sweet spot may be different in autism. But according to the Markrams, it is different in degree, not kind.
Markram suggests providing a gentle, predictable environment. “It’s almost like the fourth trimester,” he says.
“To prevent the circuits from becoming locked into fearful states or behavioral patterns you need a filtered environment from as early as possible,” Markram explains. “I think that if you can avoid that, then those circuits would get locked into having the flexibility that comes with security.”
Creating this special cocoon could involve using things like headphones to block excess noise, gradually increasing exposure and, as much as possible, sticking with routines and avoiding surprise. If parents and educators get it right, he concludes, “I think they’ll be geniuses.”
IN SCIENCE, CONFIRMATION BIAS is always the unseen enemy. Having a dog in the fight means you may bend the rules to favor it, whether deliberately or simply because we’re wired to ignore inconvenient truths. In fact, the entire scientific method can be seen as a series of attempts to drive out bias: The double-blind controlled trial exists because both patients and doctors tend to see what they want to see—improvement.
At the same time, the best scientists are driven by passions that cannot be anything but deeply personal. The Markrams are open about the fact that their subjective experience with Kai influences their work.
But that doesn’t mean that they disregard the scientific process. The couple could easily deal with many of the intense world critiques by simply arguing that their theory only applies to some cases of autism. That would make it much more difficult to disprove. But that’s not the route they’ve chosen to take. In their 2010 paper, they list a series of possible findings that would invalidate the intense world, including discovering human cases where the relevant brain circuits are not hyper-reactive, or discovering that such excessive responsiveness doesn’t lead to deficiencies in memory, perception, or emotion. So far, however, the known data has been supportive.
But whether or not the intense world accounts for all or even most cases of autism, the theory already presents a major challenge to the idea that the condition is primarily a lack of empathy, or a social disorder. Intense world theory confronts the stigmatizing stereotypes that have framed autistic strengths as defects, or at least as less significant because of associated weaknesses.
And Henry Markram, by trying to take his son Kai’s perspective—and even by identifying so closely with it—has already done autistic people a great service, demonstrating the kind of compassion that people on the spectrum are supposed to lack. If the intense world does prove correct, we’ll all have to think about autism, and even about typical people’s reactions to the data overload endemic in modern life, very differently.
From left: Kamila, Henry, Kai, and Anat
This story was written by Maia Szalavitz, edited by Mark Horowitz, fact-checked by Kyla Jones, and copy-edited by Tim Heffernan, with photography by Darrin Vanselow and an audiobook narrated by Jack Stewart.