Tag archive: Cognition

Greater than the sum of our parts: The evolution of collective intelligence (EurekaAlert!)

News Release 15-Jun-2021

University of Cambridge

Research News

The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.

New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution entitled ‘Complementary Cognition’, which suggests that in adapting to dramatic environmental and climatic variability our ancestors evolved to specialise in different, but complementary, ways of thinking.

Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level but, instead of underlying physical adaptation, may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that this evolved in concert with specialisation in human cognition.”

The theory of complementary cognition proposes that our species cooperatively adapts and evolves culturally through a system of collective cognitive search operating alongside genetic search: genetic search enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can be interpreted as a ‘search’ process), while cognitive search enables behavioural adaptation.

Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”
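As a rough, hypothetical illustration of the "exploiting past solutions and exploring to update them" trade-off described in the quote (this is not the paper's model, and all names and values below are invented for illustration), a toy search loop might look like this:

```python
# Toy illustration of the explore/exploit trade-off described above - not the
# paper's model. Candidate solutions are numbers in [0, 1]; quality is closeness
# to a hidden "environmental optimum" that shifts every 100 steps.
import random

def search(n_steps=500, explore_rate=0.1, seed=0):
    rng = random.Random(seed)
    target = rng.random()          # current environmental optimum
    best = rng.random()            # best-known solution so far
    total_quality = 0.0
    for step in range(n_steps):
        if step % 100 == 0:
            target = rng.random()  # the environment changes
        if rng.random() < explore_rate:
            candidate = rng.random()                                    # explore: try something new
        else:
            candidate = min(1.0, max(0.0, best + rng.gauss(0, 0.005)))  # exploit: refine the best
        if abs(candidate - target) < abs(best - target):
            best = candidate       # keep whichever solution suits the environment better
        total_quality += 1 - abs(best - target)
    return total_quality / n_steps

# Average solution quality with no exploration vs. a mix of both strategies;
# in a shifting environment, a mix typically tracks change better.
print(search(explore_rate=0.0), search(explore_rate=0.2))
```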

Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.

The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.

Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”

Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”

At the same time, this may also provide insights into understanding the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies and cooperatively adapting would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. But in periods of greater stability and abundance when adaptive knowledge did not become obsolete at such a rate, it would have instead accumulated, and as such Complementary Cognition may also be a key factor in explaining cumulative cultural evolution.

Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.

Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”

“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”

Human Brain Limit of ‘150 Friends’ Doesn’t Check Out, New Study Claims (Science Alert)

Peter Dockrill – 5 MAY 2021


It’s called Dunbar’s number: an influential and oft-repeated theory suggesting the average person can only maintain about 150 stable social relationships with other people.

Proposed by British anthropologist and evolutionary psychologist Robin Dunbar in the early 1990s, Dunbar’s number, extrapolated from research into primate brain sizes and their social groups, has since become a ubiquitous part of the discourse on human social networks.

But just how legitimate is the science behind Dunbar’s number anyway? According to a new analysis by researchers from Stockholm University in Sweden, Dunbar’s famous figure doesn’t add up.

“The theoretical foundation of Dunbar’s number is shaky,” says zoologist and cultural evolution researcher Patrik Lindenfors.

“Other primates’ brains do not handle information exactly as human brains do, and primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are.”

Dunbar’s number was originally predicated on the idea that the volume of the neocortex in primate brains functions as a constraint on the size of the social groups they circulate amongst.

“It is suggested that the number of neocortical neurons limits the organism’s information-processing capacity and that this then limits the number of relationships that an individual can monitor simultaneously,” Dunbar explained in his foundational 1992 study.

“When a group’s size exceeds this limit, it becomes unstable and begins to fragment. This then places an upper limit on the size of groups which any given species can maintain as cohesive social units through time.”

Dunbar began extrapolating the theory to human networks in 1993, and in the decades since has authored and co-authored copious related research output examining the behavioral and cognitive mechanisms underpinning sociality in both humans and other primates.

But as to the original question of whether neocortex size serves as a valid constraint on group size beyond non-human primates, Lindenfors and his team aren’t so sure.

While a number of studies have offered support for Dunbar’s ideas, the new study debunks the claim that neocortex size in primates is equally pertinent to human socialization parameters.

“It is not possible to make an estimate for humans with any precision using available methods and data,” says evolutionary biologist Andreas Wartel.

In their study, the researchers used modern statistical methods, including Bayesian and generalized least-squares (GLS) analyses, to take another look at the relationship between group size and brain/neocortex size in primates, drawing on updated primate brain datasets.

The results suggested that stable human group sizes might ultimately be much smaller than 150 individuals – one analysis suggested an average limit of up to 42 individuals, while another estimate ranged from 70 to 107.
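As a hedged illustration of the kind of extrapolation being criticised (this is not the Stockholm team's code, and the species values below are invented placeholders rather than their dataset), a log-log regression of group size on neocortex ratio, extrapolated to a human-like value, might look like this:

```python
# Illustrative only: a log-log regression of primate group size on neocortex
# ratio, extrapolated to a human-like value. The species numbers below are
# invented placeholders, not the study's dataset.
import numpy as np
import statsmodels.api as sm

neocortex_ratio = np.array([1.2, 1.6, 2.0, 2.3, 2.6, 3.0, 3.4])       # assumed values
group_size      = np.array([5.0, 9.0, 14.0, 20.0, 25.0, 35.0, 50.0])  # assumed values

X = sm.add_constant(np.log(neocortex_ratio))
fit = sm.OLS(np.log(group_size), X).fit()

human_ratio = 4.1                                   # the figure Dunbar used for humans
X_new = np.column_stack([np.ones(1), np.log([human_ratio])])
pred = fit.get_prediction(X_new)
low, high = np.exp(pred.conf_int(alpha=0.05)[0])
print(f"point estimate: {np.exp(pred.predicted_mean[0]):.0f} "
      f"(95% CI roughly {low:.0f}-{high:.0f})")
# Even on tidy invented data the extrapolated interval is wide; on real primate
# data, this imprecision is what the Stockholm team highlights.
```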

Ultimately, however, enormous amounts of imprecision in the statistics suggest that any method like this – trying to compute an average number of stable relationships for any human individual based on brain volume – is unreliable at best.

“Specifying any one number is futile,” the researchers write in their study. “A cognitive limit on human group size cannot be derived in this manner.”

Despite the mainstream attention Dunbar’s number enjoys, the researchers say the majority of primate social evolution research focuses on socio-ecological factors, including foraging and predation, infanticide, and sexual selection – not so much calculations dependent on brain or neocortex volume.

Further, the researchers argue that Dunbar’s number ignores other significant differences in brain physiology between human and non-human primate brains – including that humans develop cultural mechanisms and social structures that can counter socially limiting cognitive factors that might otherwise apply to non-human primates.

“Ecological research on primate sociality, the uniqueness of human thinking, and empirical observations all indicate that there is no hard cognitive limit on human sociality,” the team explains.

“It is our hope, though perhaps futile, that this study will put an end to the use of ‘Dunbar’s number’ within science and in popular media.”

The findings are reported in Biology Letters.

How Facebook got addicted to spreading misinformation (MIT Tech Review)

technologyreview.com

Karen Hao, March 11, 2021


Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
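As a concrete, hypothetical sketch of the kind of model this paragraph describes (the features, data and click labels below are invented, and Facebook's real systems are vastly larger and proprietary), training a simple click-prediction model might look like this:

```python
# Hypothetical sketch of a click-prediction model: learn correlations in past
# ad-click data, then score new (user, ad) pairs. Features, data, and labels
# are invented; this is not Facebook's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [is_woman, age, ad_is_yoga_leggings]; label: clicked (1) or not (0).
X = np.array([[1, 29, 1], [1, 31, 1], [0, 30, 1], [0, 45, 0],
              [1, 27, 1], [0, 24, 1], [1, 52, 0], [0, 33, 0]])
y = np.array([1, 1, 0, 0, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)   # "training" = fitting correlations in the data

# Score a new impression: a 28-year-old woman shown a yoga-leggings ad.
print(model.predict_proba([[1, 28, 1]])[0, 1])   # estimated click probability
```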

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

“That’s how you know what’s on his mind. I was always, for a couple of years, a few steps from Mark’s desk.”

Joaquin Quiñonero Candela

In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
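The L6/7 metric itself is straightforward to compute; a minimal sketch on an invented login log:

```python
# Minimal sketch of the L6/7 metric described above: the fraction of users who
# logged in on at least six of the previous seven days. The login log is invented.
from collections import defaultdict

logins = [                      # (user_id, day index within the last 7 days)
    ("alice", 0), ("alice", 1), ("alice", 2), ("alice", 3),
    ("alice", 4), ("alice", 5), ("bob", 0), ("bob", 3),
]

days_active = defaultdict(set)
for user, day in logins:
    days_active[user].add(day)

l6_7 = sum(len(days) >= 6 for days in days_active.values()) / len(days_active)
print(l6_7)   # 0.5 - alice qualifies, bob does not
```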

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
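A hedged sketch of the deploy-or-discard loop Gade describes, with invented engagement numbers and an assumed tolerance threshold:

```python
# Illustrative sketch (not Facebook's code) of the deploy-or-discard decision:
# expose a small subset of users to the new ranking model, compare engagement
# against a control group, and discard the model if engagement drops too far.
def mean(xs):
    return sum(xs) / len(xs)

# Invented per-user engagement counts (likes + comments + shares per day).
control_engagement   = [3.1, 2.8, 3.0, 3.3, 2.9]
treatment_engagement = [2.6, 2.5, 2.8, 2.7, 2.4]

relative_change = mean(treatment_engagement) / mean(control_engagement) - 1
MAX_ALLOWED_DROP = -0.02   # assumed tolerance

if relative_change < MAX_ALLOWED_DROP:
    print(f"engagement {relative_change:+.1%}: discard or retrain the model")
else:
    print(f"engagement {relative_change:+.1%}: deploy and keep monitoring")
```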

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

“The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?”

A former AI researcher who joined in 2018

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.


Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
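A minimal sketch of that proposal as described (the extremity scorer below is a toy placeholder standing in for a trained sentiment model, and the threshold is assumed):

```python
# Hypothetical sketch of the proposal described above: hide (not delete) comments
# whose text scores as expressing an extreme viewpoint, leaving a reveal option.
# score_extremity is a toy placeholder standing in for a trained sentiment model.
def score_extremity(text: str) -> float:
    """Placeholder: return a 0-1 extremity score for a comment."""
    keywords = {"traitors", "destroy", "enemy"}   # toy heuristic only
    return min(1.0, sum(word in text.lower() for word in keywords) / 2)

HIDE_THRESHOLD = 0.5   # assumed cutoff

def render(comment: str) -> dict:
    hidden = score_extremity(comment) >= HIDE_THRESHOLD
    return {"text": comment, "hidden_by_default": hidden, "can_reveal": True}

print(render("These people are traitors who want to destroy us"))
print(render("I respectfully disagree with this policy"))
```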

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

A chart titled "natural engagement pattern" that shows allowed content on the X axis, engagement on the Y axis, and an exponential increase in engagement as content nears the policy line for prohibited content.

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

A chart titled "adjusted to discourage borderline content" that shows the same chart but the curve inverted to reach no engagement when it reaches the policy line.
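The two charts describe a re-ranking rule: rather than letting predicted engagement climb as content approaches the policy line, distribution is demoted more and more strongly near the line. A numeric sketch of such a rule, with invented curve shapes rather than Facebook's actual functions:

```python
# Illustrative sketch of the adjustment the charts describe: the closer a post's
# predicted policy-violation score is to the line (score -> 1), the more its
# distribution is demoted. The curve shapes are invented, not Facebook's.
import math

def natural_engagement(violation_score: float) -> float:
    """Engagement rises sharply as content nears the policy line."""
    return math.exp(4 * violation_score)

def adjusted_distribution(violation_score: float) -> float:
    """Demote borderline content so distribution falls to ~0 at the policy line."""
    return natural_engagement(violation_score) * (1 - violation_score) ** 2

for score in (0.0, 0.5, 0.9, 0.99):
    print(f"violation={score:.2f}  natural={natural_engagement(score):7.1f}  "
          f"adjusted={adjusted_distribution(score):6.2f}")
```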

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
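A hedged sketch of the measurement Fairness Flow is described as performing, with invented groups and evaluation data (the real tool is internal to Facebook): compute per-group accuracy, then check it against two of the fairness definitions mentioned above.

```python
# Illustrative sketch (not the internal Fairness Flow tool): measure a model's
# accuracy per user group, then check two of the fairness definitions mentioned
# above - roughly equal accuracy across groups, or a minimum accuracy per group.
from collections import defaultdict

# (group, true_label, predicted_label) - invented evaluation data for, say,
# a speech-recognition model tested on two accents.
records = [
    ("accent_A", 1, 1), ("accent_A", 0, 0), ("accent_A", 1, 1), ("accent_A", 0, 1),
    ("accent_B", 1, 0), ("accent_B", 0, 0), ("accent_B", 1, 1), ("accent_B", 0, 0),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
print("per-group accuracy:", accuracy)

# Definition 1: accuracies within a small gap of each other (assumed gap: 0.05).
equal_accuracy_ok = max(accuracy.values()) - min(accuracy.values()) <= 0.05
# Definition 2: every group meets a minimum accuracy threshold (assumed: 0.70).
minimum_threshold_ok = all(acc >= 0.70 for acc in accuracy.values())
print("equal-accuracy check:", equal_accuracy_ok,
      "| minimum-threshold check:", minimum_threshold_ok)
```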

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
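A tiny worked example of the documentation's point, using invented numbers: if one group genuinely posts misinformation at a higher rate, an accurate model will flag that group's content more often, and forcing equal flag rates means leaving real misinformation untouched.

```python
# Invented numbers to illustrate the case study's point: "fair" does not mean
# "equal flag rates" when the underlying misinformation rates differ.
posts = {"group_A": 1000, "group_B": 1000}
true_misinfo_rate = {"group_A": 0.08, "group_B": 0.02}   # assumed ground truth

# A well-calibrated model flags content in proportion to the true rates:
flags = {g: posts[g] * true_misinfo_rate[g] for g in posts}
print(flags)   # {'group_A': 80.0, 'group_B': 20.0}

# Forcing equal impact (e.g. flagging only 20 posts per group) would leave 60
# real misinformation posts in group_A untouched - the "no impact on the actual
# problem" outcome the researcher describes.
```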

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

“I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

Ellery Roberts Biddle, editorial director of Ranking Digital Rights

This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

People with extremist views less able to do complex mental tasks, research suggests (The Guardian)

theguardian.com

Natalie Grover, 22 Feb 2021


Cambridge University team say their findings could be used to spot people at risk from radicalisation
A key finding of the psychologists was that people with extremist attitudes tended to think about the world in a black and white way. Photograph: designer491/Getty Images/iStockphoto

Our brains hold clues for the ideologies we choose to live by, according to research, which has suggested that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.

Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpts ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.

The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.

The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about participants’ perception and learning, and their ability to engage in complex and strategic mental processing.

Overall, the researchers found that ideological attitudes mirrored cognitive decision-making, according to the study published in the journal Philosophical Transactions of the Royal Society B.

A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.

She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”

Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.

“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.

In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.

“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”

The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.

The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.

“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”

Hoarding and herding during the COVID-19 pandemic (Science Daily)

The coronavirus pandemic has triggered some interesting and unusual changes in our buying behavior

Date: September 10, 2020

Source: University of Technology Sydney

Summary: Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper.

Rushing to stock up on toilet paper before it vanished from the supermarket aisle, stashing cash under the mattress, purchasing a puppy or perhaps planting a vegetable patch — the COVID-19 pandemic has triggered some interesting and unusual changes in our behavior.

Understanding the psychology behind economic decision-making, and how and why a pandemic might trigger responses such as hoarding, is the focus of a new paper published in the Journal of Behavioral Economics for Policy.

‘Hoarding in the age of COVID-19’ by behavioral economist Professor Michelle Baddeley, Deputy Dean of Research at the University of Technology Sydney (UTS) Business School, examines a range of cross-disciplinary explanations for hoarding and other behavior changes observed during the pandemic.

“Understanding these economic, social and psychological responses to COVID-19 can help governments and policymakers adapt their policies to limit negative impacts, and nudge us towards better health and economic outcomes,” says Professor Baddeley.

Governments around the world have set up behavioral insights units to help guide public policy and influence public decision-making and compliance.

Hoarding behavior, where people collect or accumulate things such as money or food in excess of their immediate needs, can lead to shortages, or in the case of hoarding cash, have negative impacts on the economy.

“In economics, hoarding is often explored in the context of savings. When consumer confidence is down, spending drops and households increase their savings if they can, because they expect bad times ahead,” explains Professor Baddeley.

“Fear and anxiety also have an impact on financial markets. The VIX ‘fear’ index of financial market volatility saw a dramatic 564% increase between November 2019 and March 2020, as investors rushed to move their money into ‘safe haven’ investments such as bonds.”

While shifts in savings and investments in the face of a pandemic might make economic sense, the hoarding of toilet paper, which also occurred across the globe, is more difficult to explain in traditional economic terms, says Professor Baddeley.

Behavioural economics reveals that our decisions are not always rational or in our long term interest, and can be influenced by a wide range of psychological factors and unconscious biases, particularly in times of uncertainty.

“Evolved instincts dominate in stressful situations, as a response to panic and anxiety. During times of stress and deprivation, not only people but also many animals show a propensity to hoard.”

Another instinct that can come to the fore, particularly in times of stress, is the desire to follow the herd, says Professor Baddeley, whose book ‘Copycats and Contrarians’ explores the concept of herding in greater detail.

“Our propensity to follow others is complex. Some of our reasons for herding are well-reasoned. Herding can be a type of heuristic: a decision-making short-cut that saves us time and cognitive effort,” she says.

“When other people’s choices might be a useful source of information, we use a herding heuristic and follow them because we believe they have good reasons for their actions. We might choose to eat at a busy restaurant because we assume the other diners know it is a good place to eat.

“However, numerous experiments from social psychology also show that we can be blindly susceptible to the influence of others. So when we see others rushing to the shops to buy toilet paper, we fear missing out and follow the herd. It then becomes a self-fulfilling prophecy.”

Behavioral economics also highlights the importance of social conventions and norms in our decision-making processes, and this is where rules can serve an important purpose, says Professor Baddeley.

“Most people are generally law abiding but they might not wear a mask if they think it makes them look like a bit of a nerd, or overanxious. If there is a rule saying you have to wear a mask, this gives people guidance and clarity, and it stops them worrying about what others think.

“So the normative power of rules is very important. Behavioral insights and nudges can then support these rules and policies, to help governments and business prepare for second waves, future pandemics or other global crises.”


Story Source:

Materials provided by University of Technology Sydney. Original written by Leilah Schubert. Note: Content may be edited for style and length.


Journal Reference:

  1. Michelle Baddeley. Hoarding in the age of COVID-19. Journal of Behavioral Economics for Policy, 2020; 4(S): 69-75 [abstract]

The remarkable ways animals understand numbers (BBC Future)

bbc.com

Andreas Nieder, September 7, 2020


For some species there is strength and safety in numbers (Credit: Press Association)

Humans as a species are adept at using numbers, but our mathematical ability is something we share with a surprising array of other creatures.

One of the key findings over the past decades is that our number faculty is deeply rooted in our biological ancestry, and not based on our ability to use language. Considering the multitude of situations in which we humans use numerical information, life without numbers is inconceivable.

But what was the benefit of numerical competence for our ancestors, before they became Homo sapiens? Why would animals crunch numbers in the first place?

It turns out that processing numbers offers a significant benefit for survival, which is why this behavioural trait is present in many animal populations. Several studies examining animals in their ecological environments suggest that representing numbers enhances an animal’s ability to exploit food sources, hunt prey, avoid predation, navigate its habitat, and persist in social interactions.

Before numerically competent animals evolved on the planet, single-celled microscopic bacteria – the oldest living organisms on Earth – already exploited quantitative information. The way bacteria make a living is through their consumption of nutrients from their environment. Mostly, they grow and divide themselves to multiply. However, in recent years, microbiologists have discovered they also have a social life and are able to sense the presence or absence of other bacteria. In other words, they can sense the number of bacteria.

Take, for example, the marine bacterium Vibrio fischeri. It has a special property that allows it to produce light through a process called bioluminescence, similar to how fireflies give off light. If these bacteria are in dilute water solutions (where they are essentially alone), they make no light. But when they grow to a certain cell number, all of them produce light simultaneously. Therefore, Vibrio fischeri can distinguish when they are alone and when they are together.


Sometimes the numbers don’t add up when predators are trying to work out which prey to target (Credit: Alamy)

It turns out they do this using a chemical language. They secrete communication molecules, and the concentration of these molecules in the water increases in proportion to the cell number. And when this molecule hits a certain amount, called a “quorum”, it tells the other bacteria how many neighbours there are, and all the bacteria glow.

This behaviour is called “quorum sensing” – the bacteria vote with signalling molecules, the vote gets counted, and if a certain threshold (the quorum) is reached, every bacterium responds. This behaviour is not just an anomaly of Vibrio fischeri – all bacteria use this sort of quorum sensing to communicate their cell number in an indirect way via signalling molecules.
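
The threshold logic described here can be sketched in a few lines of code. The snippet below is a toy illustration of quorum sensing, with invented numbers (the per-cell signal rate and the quorum threshold are placeholders, not values from the article):

# Toy model of quorum sensing: every cell secretes signalling molecules,
# the shared concentration grows with the number of cells, and all cells
# respond ("glow") only once that concentration crosses the quorum threshold.

MOLECULES_PER_CELL = 1.0   # hypothetical signal contribution of one cell
QUORUM_THRESHOLD = 500.0   # hypothetical concentration needed for a response

def signal_concentration(cell_count: int) -> float:
    # Concentration rises in proportion to the number of cells present.
    return cell_count * MOLECULES_PER_CELL

def colony_glows(cell_count: int) -> bool:
    # Each cell responds only when the shared signal reaches the quorum.
    return signal_concentration(cell_count) >= QUORUM_THRESHOLD

for n in (10, 100, 499, 500, 5000):
    print(n, "cells ->", "light" if colony_glows(n) else "dark")

The point of the sketch is simply that no cell counts its neighbours directly; each one reads a shared chemical signal whose level encodes the population size.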

Remarkably, quorum sensing is not confined to bacteria – animals use it to get around, too. Japanese ants (Myrmecina nipponica), for example, decide to move their colony to a new location if they sense a quorum. In this form of consensus decision making, ants start to transport their brood together with the entire colony to a new site only if a defined number of ants are present at the destination site. Only then, they decide, is it safe to move the colony.

Numerical cognition also plays a vital role when it comes to both navigation and developing efficient foraging strategies. In 2008, biologists Marie Dacke and Mandyam Srinivasan performed an elegant and thoroughly controlled experiment in which they found that bees are able to estimate the number of landmarks in a flight tunnel to reach a food source – even when the spatial layout is changed. Honeybees rely on landmarks to measure the distance of a food source to the hive. Assessing numbers is vital to their survival.

When it comes to optimal foraging, “going for more” is a good rule of thumb in most cases, and seems obvious when you think about it, but sometimes the opposite strategy is favourable. The field mouse loves live ants, but ants are dangerous prey because they bite when threatened. When a field mouse is placed in an arena together with two ant groups of different sizes, it surprisingly “goes for less”. In one study, mice that could choose between five versus 15, five versus 30, and 10 versus 30 ants always preferred the smaller quantity of ants. The field mice seem to pick the smaller ant group in order to ensure comfortable hunting and to avoid getting bitten frequently.

Numerical cues play a significant role when it comes to hunting prey in groups, as well. The probability, for example, that wolves capture elk or bison varies with the group size of a hunting party. Wolves often hunt large prey, such as elk and bison, but large prey can kick, gore, and stomp wolves to death. Therefore, there is incentive to “hold back” and let others go in for the kill, particularly in larger hunting parties. As a consequence, wolves have an optimal group size for hunting different prey. For elks, capture success levels off at two to six wolves. However, for bison, the most formidable prey, nine to 13 wolves are the best guarantor of success. Therefore, for wolves, there is “strength in numbers” during hunting, but only up to a certain number that is dependent on the toughness of their prey.

Animals that are more or less defenceless often seek shelter among large groups of social companions – the strength-in-numbers survival strategy hardly needs explaining. But hiding out in large groups is not the only anti-predation strategy involving numerical competence.

In 2005, a team of biologists at the University of Washington found that black-capped chickadees have developed a surprising way to announce the presence and dangerousness of a predator. Like many other animals, chickadees produce alarm calls when they detect a potential predator, such as a hawk, to warn their fellow chickadees. For stationary predators, these little songbirds use their namesake “chick-a-dee” alarm call. It has been shown that the number of “dee” notes at the end of this alarm call indicates the danger level of a predator.


Chickadees produce different numbers of “dee” notes at the end of their call depending on danger they have spotted (Credit: Getty Images)

A call such as “chick-a-dee-dee” with only two “dee” notes may indicate a rather harmless great grey owl. Great grey owls are too big to manoeuvre and follow the agile chickadees in woodland, so they aren’t a serious threat. In contrast, manoeuvring between trees is no problem for the small pygmy owl, which is why it is one of the most dangerous predators for these small birds. When chickadees see a pygmy owl, they increase the number of “dee” notes and call “chick-a-dee-dee-dee-dee.” Here, the number of sounds serves as an active anti-predation strategy.

Groups and group size also matter if resources cannot be defended by individuals alone – and the ability to assess the number of individuals in one’s own group relative to the opponent party is of clear adaptive value.

Several mammalian species have been investigated in the wild, and the common finding is that numerical advantage determines the outcome of such fights. In a pioneering study, zoologist Karen McComb and co-workers at the University of Sussex investigated the spontaneous behaviour of female lions at the Serengeti National Park when facing intruders. The authors exploited the fact that wild animals respond to vocalisations played through a speaker as though real individuals were present. If the playback sounds like a foreign lion that poses a threat, the lionesses would aggressively approach the speaker as the source of the enemy. In this acoustic playback study, the authors mimicked hostile intrusion by playing the roaring of unfamiliar lionesses to residents.

Two conditions were presented to subjects: either the recordings of single female lions roaring, or of groups of three females roaring together. The researchers were curious to see if the number of attackers and the number of defenders would have an impact on the defender’s strategy. Interestingly, a single defending female was very hesitant to approach the playbacks of a single or three intruders. However, three defenders readily approached the roaring of a single intruder, but not the roaring of three intruders together.

Obviously, the risk of getting hurt in a fight against three opponents was too great. Only if the number of residents was five or more did the lionesses approach the roars of three intruders. In other words, lionesses decide to approach intruders aggressively only if they outnumber the latter – another clear example of an animal’s ability to take quantitative information into account.

Our closest cousins in the animal kingdom, the chimpanzees, show a very similar pattern of behaviour. Using a similar playback approach, Michael Wilson and colleagues from Harvard University found that the chimpanzees behaved like military strategists. They intuitively follow equations used by military forces to calculate the relative strengths of opponent parties. In particular, chimpanzees follow predictions made in Lanchester’s “square law” model of combat. This model predicts that, in contests with multiple individuals on each side, chimpanzees in this population should be willing to enter a contest only if they outnumber the opposing side by a factor of at least 1.5. And that is precisely what wild chimps do.
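
Lanchester’s square law can be stated compactly. The following is a textbook form of the model, given here only for orientation; it is not necessarily the exact formulation Wilson and colleagues fitted to the chimpanzee data:

\[
\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A
\]

Here A and B are the sizes of the two parties and $\alpha$, $\beta$ their per-individual fighting effectiveness. Under these dynamics, side A prevails when $\alpha A_0^2 > \beta B_0^2$, so numbers enter squared: insisting on outnumbering the opposition by a factor of at least 1.5 amounts to insisting on roughly a $1.5^2 \approx 2.25$-fold advantage in that squared term.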


Lionesses judge how many intruders they may be facing before approaching them (Credit: Alamy)

Staying alive – from a biological stance – is a means to an end, and the aim is the transmission of genes. In mealworm beetles (Tenebrio molitor), many males mate with many females, and competition is intense. Therefore, a male beetle will always go for more females in order to maximise his mating opportunities. After mating, males even guard females for some time to prevent further mating acts from other males. The more rivals a male has encountered before mating, the longer he will guard the female after mating.

It is obvious that such behaviour plays an important role in reproduction and therefore has a high adaptive value. Being able to estimate quantity has improved males’ sexual competitiveness. This may in turn be a driving force for more sophisticated cognitive quantity estimation throughout evolution.

One may think that everything is won by successful copulation. But that is far from the truth for some animals, for whom the real prize is fertilising an egg. Once the individual male mating partners have accomplished their part in the play, the sperm continues to compete for the fertilisation of the egg. Since reproduction is of paramount importance in biology, sperm competition causes a variety of adaptations at the behavioural level.

In both insects and vertebrates, the males’ ability to estimate the magnitude of competition determines the size and composition of the ejaculate. In the pseudoscorpion, Cordylochernes scorpioides, for example, it is common that several males copulate with a single female. Obviously, the first male has the best chances of fertilising this female’s egg, whereas the following males face slimmer and slimmer chances of fathering offspring. However, the production of sperm is costly, so the allocation of sperm is weighed considering the chances of fertilising an egg.

Males smell the number of competitor males that have copulated with a female and adjust by progressively decreasing sperm allocation as the number of different male olfactory cues increases from zero to three.

Some bird species, meanwhile, have invented a whole arsenal of trickery to get rid of the burden of parenthood and let others do the job. Breeding a clutch and raising young are costly endeavours, after all. They become brood parasites by laying their eggs in other birds’ nests and letting the host do all the hard work of incubating eggs and feeding hatchlings. Naturally, the potential hosts are not pleased and do everything to avoid being exploited. And one of the defence strategies the potential host has at its disposal is the usage of numerical cues.

American coots, for example, sneak eggs into their neighbours’ nests and hope to trick them into raising the chicks. Of course, their neighbours try to avoid being exploited. A study in the coots’ natural habitat suggests that potential coot hosts can count their own eggs, which helps them to reject parasitic eggs. They typically lay an average-sized clutch of their own eggs, and later reject any surplus parasitic egg. Coots therefore seem to assess the number of their own eggs and ignore any others.

An even more sophisticated type of brood parasitism is found in cowbirds, a songbird species that lives in North America. In this species, females also deposit their eggs in the nests of a variety of host species, from birds as small as kinglets to those as large as meadowlarks, and they have to be smart in order to guarantee their young a bright future.

Cowbird eggs hatch after exactly 12 days of incubation; if incubation is only 11 days, the chicks do not hatch and are lost. It is therefore not an accident that the incubation times for the eggs of the most common hosts range from 11 to 16 days, with an average of 12 days. Host birds usually lay one egg per day – once one day elapses with no egg added by the host to the nest, the host has begun incubation. This means the chicks start to develop in the eggs, and the clock begins ticking. For a cowbird female, it is therefore not only important to find a suitable host, but also to precisely time their egg laying appropriately. If the cowbird lays her egg too early in the host nest, she risks her egg being discovered and destroyed. But if she lays her egg too late, incubation time will have expired before her cowbird chick can hatch.


Female cowbirds perform some incredible mental arithmetic to know when to lay their eggs in the nest of a host bird (Credit: Alamy)

Clever experiments by David J White and Grace Freed-Brown from the University of Pennsylvania suggest that cowbird females carefully monitor the host’s clutch to synchronise their parasitism with a potential host’s incubation. The cowbird females watch for host nests in which the number of eggs has increased since their first visit. This guarantees that the host is still in the laying process and incubation has not yet started. In addition, the cowbird looks for nests that contain exactly one additional egg for each day that has elapsed since her initial visit.

For instance, if the cowbird female visited a nest on the first day and found one host egg in the nest, she will only deposit her own egg if the host nest contains three eggs on the third day. If the nest contains fewer additional eggs than the number of days that have passed since the last visit, she knows that incubation has already started and it is useless for her to lay her own egg. It is incredibly cognitively demanding, since the female cowbird needs to visit a nest over multiple days, remember the clutch size from one day to the next, evaluate the change in the number of eggs in the nest from a past visit to the present, assess the number of days that have passed, and then compare these values to make a decision to lay her egg or not.
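
Written as a decision rule, the behaviour described above is surprisingly compact. The sketch below is a hypothetical formalisation for illustration only; the function name and inputs are invented, not taken from White and Freed-Brown’s study:

def should_lay_egg(eggs_on_first_visit: int,
                   eggs_today: int,
                   days_since_first_visit: int) -> bool:
    # The host adds one egg per day while still laying, so incubation has not
    # started only if exactly one new egg has appeared for each elapsed day.
    expected = eggs_on_first_visit + days_since_first_visit
    return eggs_today == expected

# Example from the text: one host egg on the first visit, three eggs two days
# later -> the host is still laying, so the cowbird deposits her own egg.
print(should_lay_egg(1, 3, 2))   # True
# Fewer new eggs than elapsed days -> incubation has likely begun; too late.
print(should_lay_egg(1, 2, 2))   # False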

But this is not all. Cowbird mothers also have sinister reinforcement strategies. They keep watch on the nests where they’ve laid their eggs. In an attempt to protect their egg, the cowbirds act like mafia gangsters. If the cowbird finds that her egg has been destroyed or removed from the host’s nest, she retaliates by destroying the host bird’s eggs, pecking holes in them or carrying them out of the nest and dropping them on the ground. The host birds had better raise the cowbird nestling, or else they pay dearly. For the host parents, it may therefore be worthwhile, from an adaptive point of view, to go through all the trouble of raising a foster chick.

The cowbird is an astounding example of how far evolution has driven some species to stay in the business of passing on their genes. The existing selection pressures, whether imposed by the inanimate environment or by other animals, force populations of species to maintain or increase adaptive traits caused by specific genes. If assessing numbers helps in this struggle to survive and reproduce, it surely is appreciated and relied on.

This explains why numerical competence is so widespread in the animal kingdom: it evolved either because it was discovered by a common ancestor and passed on to all descendants, or because it was invented independently across different branches of the animal tree of life.

Irrespective of its evolutionary origin, one thing is certain – numerical competence is an adaptive trait.

* This article originally appeared in The MIT Press Reader, and is republished under a Creative Commons licence. Andreas Nieder is Professor of Animal Physiology and Director of the Institute of Neurobiology at the University of Tübingen and the author of A Brain for Numbers, from which this article is adapted.

Exponential growth bias: The numerical error behind Covid-19 (BBC/Future)

A basic mathematical calculation error has fuelled the spread of coronavirus (Credit: Reuters)

Original article

By David Robson – 12th August 2020

A simple mathematical mistake may explain why many people underestimate the dangers of coronavirus, shunning social distancing, masks and hand-washing.

Imagine you are offered a deal with your bank, where your money doubles every three days. If you invest just $1 today, roughly how long will it take for you to become a millionaire?

Would it be a year? Six months? 100 days?

The precise answer is 60 days from your initial investment, when your balance would be exactly $1,048,576. Within a further 30 days, you’d have earnt more than a billion. And by the end of the year, you’d have more than $1,000,000,000,000,000,000,000,000,000,000,000,000 – an “undecillion” dollars.
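
You can verify these figures directly. A few lines of Python, assuming the balance doubles every three days as in the example, reproduce them:

def balance_after(days: int) -> int:
    # $1 doubles every 3 days, so after `days` days it has doubled days // 3 times.
    return 2 ** (days // 3)

print(balance_after(60))    # 1048576        -> just over a million dollars
print(balance_after(90))    # 1073741824     -> more than a billion
print(balance_after(365))   # ~2.7 * 10**36  -> on the order of an undecillion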

If your estimates were way out, you are not alone. Many people consistently underestimate how fast the value increases – a mistake known as the “exponential growth bias” – and while it may seem abstract, it may have had profound consequences for people’s behaviour this year.

A spate of studies has shown that people who are susceptible to the exponential growth bias are less concerned about Covid-19’s spread, and less likely to endorse measures like social distancing, hand washing or mask wearing. In other words, this simple mathematical error could be costing lives – meaning that the correction of the bias should be a priority as we attempt to flatten curves and avoid second waves of the pandemic around the world.

To understand the origins of this particular bias, we first need to consider different kinds of growth. The most familiar is “linear”. If your garden produces three apples every day, you have six after two days, nine after three days, and so on.

Exponential growth, by contrast, accelerates over time. Perhaps the simplest example is population growth; the more people you have reproducing, the faster the population grows. Or if you have a weed in your pond that triples each day, the number of plants may start out low – just three on day two, and nine on day three – but it soon escalates (see diagram, below).


Many people assume that coronavirus spreads in a linear fashion, but unchecked it’s exponential (Credit: Nigel Hawtin)

Our tendency to overlook exponential growth has been known for millennia. According to an Indian legend, the brahmin Sissa ibn Dahir was offered a prize for inventing an early version of chess. He asked for one grain of wheat to be placed on the first square on the board, two for the second square, four for the third square, doubling each time up to the 64th square. The king apparently laughed at the humility of ibn Dahir’s request – until his treasurers reported that it would outstrip all the food in the land (18,446,744,073,709,551,615 grains in total).
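
The total in the legend is just the sum of a doubling series, which a couple of lines of Python confirm:

# One grain on the first square, doubling on each of the 64 squares:
# 2**0 + 2**1 + ... + 2**63, which equals 2**64 - 1.
total_grains = sum(2 ** square for square in range(64))
print(total_grains)               # 18446744073709551615
print(total_grains == 2**64 - 1)  # True: the closed form of the geometric sum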

It was only in the late 2000s that scientists started to study the bias formally, with research showing that most people – like Sissa ibn Dahir’s king – intuitively assume that most growth is linear, leading them to vastly underestimate the speed of exponential increase.

These initial studies were primarily concerned with the consequences for our bank balance. Most savings accounts offer compound interest, for example, where you accrue additional interest on the interest you have already earned. This is a classic example of exponential growth, and it means that even low interest rates pay off handsomely over time. If you have a 5% interest rate, then £1,000 invested today will be worth £1,050 next year, and £1,102.50 the year after… which adds up to more than £7,000 in 40 years’ time. Yet most people don’t recognise how much more bang for their buck they will receive if they start investing early, so they leave themselves short for their retirement.
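
The arithmetic behind that claim is a single compounding loop, shown here with the same 5% rate and 40-year horizon used in the example:

balance = 1000.0   # pounds invested today
rate = 0.05        # 5% annual interest

for year in range(40):
    balance *= 1 + rate   # each year, interest is earned on past interest too

print(round(balance, 2))  # 7039.99 -> a little over £7,000 after 40 years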


If the number of grains on a chess board doubled for each square, the 64th would ‘hold’ 18 quintillion (Credit: Getty Images)

Besides reducing their savings, the bias also renders people more vulnerable to unfavourable loans, where debt escalates over time. According to one study from 2008, the bias increases someone’s debt-to-income ratio from an average of 23% to an average of 54%.

Surprisingly, a higher level of education does not prevent people from making these errors. Even mathematically trained science students can be vulnerable, says Daniela Sele, who researches economic decision-making at the Swiss Federal Institute of Technology in Zurich. “It does help somewhat, but it doesn’t preclude the bias,” she says.

This may be because they are relying on their intuition rather than deliberative thinking, so that even if they have learned about things like compound interest, they forget to apply them. To make matters worse, most people will confidently report understanding exponential growth but then still fall for the bias when asked to estimate things like compound interest.

As I explored in my book The Intelligence Trap, intelligent and educated people often have a “bias blind spot”, believing themselves to be less susceptible to error than others – and the exponential growth bias appears to sit squarely within it.


It was only this year – at the start of the Covid-19 pandemic – that researchers began to consider whether the bias might also influence our understanding of infectious diseases.

According to various epidemiological studies, without intervention the number of new Covid-19 cases doubles every three to four days, which was the reason that so many scientists advised rapid lockdowns to prevent the pandemic from spiralling out of control.

In March, Joris Lammers at the University of Bremen in Germany joined forces with Jan Crusius and Anne Gast at the University of Cologne to roll out online surveys questioning people about the potential spread of the disease. Their results showed that the exponential growth bias was prevalent in people’s understanding of the virus’s spread, with most people vastly underestimating the rate of increase. More importantly, the team found that those beliefs were directly linked to the participants’ views on the best ways to contain the spread. The worse their estimates, the less likely they were to understand the need for social distancing: the exponential growth bias had made them complacent about the official advice.


The charts that politicians show often fail to communicate exponential growth effectively (Credit: Reuters)

This chimes with other findings by Ritwik Banerjee and Priyama Majumda at the Indian Institute of Management in Bangalore, and Joydeep Bhattacharya at Iowa State University. In their study (currently under peer review), they found that susceptibility to the exponential growth bias can predict reduced compliance with the World Health Organization’s recommendations – including mask wearing, handwashing, the use of sanitisers and self-isolation.

The researchers speculate that some of the graphical representations found in the media may have been counter-productive. It’s common for the number of infections to be presented on a “logarithmic scale”, in which the figures on the y-axis increase by a power of 10 (so the gap between 1 and 10 is the same as the gap between 10 and 100, or 100 and 1000).

While this makes it easier to plot different regions with low and high growth rates, it means that exponential growth looks more linear than it really is, which could reinforce the exponential growth bias. “To expect people to use the logarithmic scale to extrapolate the growth path of a disease is to demand a very high level of cognitive ability,” the authors told me in an email. In their view, simple numerical tables may actually be more powerful.
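
A short piece of algebra shows why a logarithmic axis flattens exponential growth (a generic sketch, not a calculation from the study):

\[
C(t) = C_0 \cdot 2^{t/T}
\quad\Longrightarrow\quad
\log_{10} C(t) = \log_{10} C_0 + \frac{\log_{10} 2}{T}\, t
\]

So a case count that doubles every T days plots as a straight line on a log-scaled chart; a reader who treats the axis as linear sees steady growth where the underlying numbers are in fact exploding.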


The good news is that people’s views are malleable. When Lammers and colleagues reminded the participants of the exponential growth bias, and asked them to calculate the growth in regular steps over a two-week period, people hugely improved their estimates of the disease’s spread – and this, in turn, changed their views on social distancing. Sele, meanwhile, has recently shown that small changes in framing can matter. Emphasising the short amount of time it will take to reach a large number of cases – and the time that would be gained by social distancing measures – improves people’s understanding of accelerating growth more than simply stating the percentage increase each day.

Lammers believes that the exponential nature of the virus needs to be made more salient in coverage of the pandemic. “I think this study shows how media and government should report on a pandemic in such a situation. Not only report the numbers of today and growth over the past week, but also explain what will happen in the next days, week, month, if the same accelerating growth persists,” he says.

He is confident that even a small effort to correct this bias could bring huge benefits. In the US, where the pandemic has hit hardest, it took only a few months for the virus to infect more than five million people, he says. “If we could have overcome the exponential growth bias and had convinced all Americans of this risk back in March, I am sure 99% would have embraced all possible distancing measures.”

David Robson is the author of The Intelligence Trap: Why Smart People Do Dumb Things (WW Norton/Hodder & Stoughton), which examines the psychology of irrational thinking and the best ways to make wiser decisions.

The Dunning-Kruger effect, or why the ignorant think they are experts (Universo Racionalista)

[The author's irony seems to indicate that he did not understand the subject very well. There are inconsistent statements, such as "the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world", for example. RT]

By Julio Batista – Feb 20, 2020

Original article in Portuguese

Translated by Julio Batista
Original by Alexandru Micu at ZME Science

The Dunning-Kruger effect is a cognitive bias first described in the work of David Dunning and Justin Kruger in the (now famous) 1999 study Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments.

The study grew out of the criminal case of a man named McArthur Wheeler who, in broad daylight on 19 April 1995, decided to rob two banks in Pittsburgh, in the United States. Wheeler carried a gun but no mask. Surveillance cameras caught him in the act, and the police released his picture on the local news, receiving several tips about his whereabouts almost immediately.

A graph showing the Dunning-Kruger effect. Image adapted from Wikimedia.

When they went to arrest him, Mr Wheeler was visibly confused.

"But I was covered in juice," he said, before the officers took him away.

There are no "foolproof methods"

At some point in his life, Wheeler had learned from someone that lemon juice could be used as "invisible ink". If something was written on a piece of paper with lemon juice, you would see nothing – unless you heated the juice, which would make the scribbles visible. So, naturally, he covered his face in lemon juice and went off to rob a bank, confident that his identity would remain hidden from the cameras as long as he did not get close to any source of heat.

Still, we should give the man some credit: Wheeler did not gamble blindly. He actually tested his theory by taking a selfie with a Polaroid camera (there is a scientist inside all of us). For one reason or another – perhaps because the film was defective, we do not know exactly why – the camera produced a blank image.

The news went around the world, everyone had a good laugh, and Mr Wheeler was taken off to jail. The police concluded that he was neither insane nor on drugs; he genuinely believed his plan would work. "During his interaction with the police, he was incredulous at how his ignorance had failed him," wrote Anupum Pant for Awesci.

David Dunning was working as a psychologist at Cornell University at the time, and the bizarre story caught his attention. With the help of Justin Kruger, one of his graduate students, he set out to understand how Mr Wheeler could have been so confident in a plan that was so clearly stupid. The theory they developed is that almost all of us rate our abilities in certain areas as above average, and that most of us probably judge our own skills as much better than they objectively are – an "illusion of confidence" that underlies the Dunning-Kruger effect.

Let's all be clueless

"Mind the gap"… between how you see yourself and how you really are. Image via Pxfuel.

"If you're incompetent, you can't know you're incompetent," Dunning wrote in his book Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself.

"The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is."

In the 1999 study (the first carried out on the topic), the pair asked Cornell students a series of questions about grammar, logic and humour (used to measure the students' actual abilities) and then asked each of them to estimate the overall score they would achieve and how their score would compare with those of the other participants. They found that the students with the lowest scores consistently and substantially overestimated their own abilities. Students in the bottom quartile (the lowest-scoring 25%) thought, on average, that they had done better than two thirds of the other students (that is, that they ranked among the top 33% by score).

A related study by the same authors at a sport-shooting club showed similar results. Dunning and Kruger used a similar methodology, asking enthusiasts questions about gun safety and having them estimate their own performance on the test. Those who answered the fewest questions correctly also grossly overestimated their mastery of firearms knowledge.

The effect is not specific to technical skills; it affects every sphere of human existence alike. One study found that 80% of drivers rate themselves as above average, which is literally impossible, because that is not how averages work. We tend to assess our relative popularity the same way.

Nor is it limited to people with low or non-existent skills in a given subject – it applies to virtually all of us. In their first study, Dunning and Kruger also found that students scoring in the top quartile (25%) routinely underestimated their own competence.

A more complete definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or non-existent understanding of a topic, we literally know too little to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of how they measure up against the people around them. But they also think that, if a task is clear and simple for them, it must be so for everyone else as well.

A person in the first group and a person in the second group are equally likely to use their own experience as a baseline and to take it for granted that everyone else is close to that "baseline". Both have an "illusion of confidence" – in one case, that confidence is placed in themselves; in the other, in everyone else.

But maybe we're not all equally clueless

To err is human. To persist confidently in error is hilarious.

Dunning and Kruger did seem to find a way out of the effect they helped document. Although we all appear equally likely to delude ourselves, there is an important difference between those who are confident but incapable and those who are capable but lack confidence: how they handle and absorb feedback on their own behaviour.

Mr Wheeler did try to verify his theory. However, he looked at a blank Polaroid of a photo he had just taken – one of the strongest signals that something in his theory had gone wrong – and saw no reason for concern; the only explanation he accepted was that his plan worked. Later he received feedback from the police, but even that failed to dent his certainty; he was "incredulous at how his ignorance had failed him", even when he had absolute confirmation (by being in jail) that it had failed.

In their research, Dunning and Kruger found that good students predicted their performance on future exams better when they were given accurate feedback about the score they had just achieved and about their relative ranking in the class. The worst-performing students would not change their expectations, even after clear and repeated feedback that they were performing poorly. They simply insisted that their assumptions were correct.

Joking aside, the Dunning-Kruger effect is not a human flaw; it is simply a product of our subjective understanding of the world. In fact, it serves as a caution against assuming we are always right, and it highlights the importance of keeping an open mind and a critical view of our own ability.

But if you are afraid of being incompetent, check how feedback affects your view of your own work, knowledge and skills, and how they compare with those of the people around you. If you really are incompetent, you will not change your mind and this process is basically a waste of time, but do not worry – someone will tell you that you are incompetent.

And you will not believe them.

Conspiracy theories: how belief is rooted in evolution – not ignorance (The Conversation)

December 13, 2019 9.33am EST – original article

Mikael Klintman PhD, Professor, Lund University

Despite creative efforts to tackle it, belief in conspiracy theories, alternative facts and fake news show no sign of abating. This is clearly a huge problem, as seen when it comes to climate change, vaccines and expertise in general – with anti-scientific attitudes increasingly influencing politics.

So why can’t we stop such views from spreading? My opinion is that we have failed to understand their root causes, often assuming it is down to ignorance. But new research, published in my book, Knowledge Resistance: How We Avoid Insight from Others, shows that the capacity to ignore valid facts has most likely had adaptive value throughout human evolution. Therefore, this capacity is in our genes today. Ultimately, realising this is our best bet to tackle the problem.

So far, public intellectuals have roughly made two core arguments about our post-truth world. The physician Hans Rosling and the psychologist Steven Pinker argue it has come about due to deficits in facts and reasoned thinking – and can therefore be sufficiently tackled with education.

Meanwhile, Nobel Prize winner Richard Thaler and other behavioural economists have shown how the mere provision of more and better facts often leads already polarised groups to become even more polarised in their beliefs.


The conclusion of Thaler is that humans are deeply irrational, operating with harmful biases. The best way to tackle it is therefore nudging – tricking our irrational brains – for instance by changing measles vaccination from an opt-in to a less burdensome opt-out choice.

Such arguments have often resonated well with frustrated climate scientists, public health experts and agri-scientists (complaining about GMO-opposers). Still, their solutions clearly remain insufficient for dealing with a fact-resisting, polarised society.

Evolutionary pressures

In my comprehensive study, I interviewed numerous eminent academics at the University of Oxford, London School of Economics and King’s College London about their views. They were experts on social, economic and evolutionary sciences. I analysed their comments in the context of the latest findings on topics ranging from the origin of humanity, climate change and vaccination to religion and gender differences.

It became evident that much of knowledge resistance is better understood as a manifestation of social rationality. Essentially, humans are social animals; fitting into a group is what’s most important to us. Often, objective knowledge-seeking can help strengthen group bonding – such as when you prepare a well-researched action plan for your colleagues at work.

But when knowledge and group bonding don’t converge, we often prioritise fitting in over pursuing the most valid knowledge. In one large experiment, it turned out that both liberals and conservatives actively avoided having conversations with people of the other side on issues of drug policy, death penalty and gun ownership. This was the case even when they were offered a chance of winning money if they discussed with the other group. Avoiding the insights from opposing groups helped people dodge having to criticise the view of their own community.

Similarly, if your community strongly opposes what an overwhelming part of science concludes about vaccination or climate change, you often unconsciously prioritise avoiding getting into conflicts about it.

This is further backed up by research showing that the climate deniers who score the highest on scientific literacy tests are more confident than the average in that group that climate change isn’t happening – despite the evidence showing that it is. And those among the climate concerned who score the highest on the same tests are more confident than the average in that group that climate change is happening.

This logic of prioritising the means that get us accepted and secured in a group we respect is deep. Those among the earliest humans who weren’t prepared to share the beliefs of their community ran the risk of being distrusted and even excluded.

And social exclusion greatly increased the threat to survival – making them vulnerable to being killed by other groups or animals, or to having no one to cooperate with. These early humans therefore had much lower chances of reproducing. It therefore seems fair to conclude that being prepared to resist knowledge and facts is an evolutionary, genetic adaptation of humans to the socially challenging life in hunter-gatherer societies.

Today, we are part of many groups and internet networks, to be sure, and can in some sense “shop around” for new alliances if our old groups don’t like us. Still, humanity today shares the same binary mindset and strong drive to avoid being socially excluded as our ancestors who only knew about a few groups. The groups we are part of also help shape our identity, which can make it hard to change groups. Individuals who change groups and opinions constantly may also be less trusted, even among their new peers.

In my research, I show how this matters when it comes to dealing with fact resistance. Ultimately, we need to take social aspects into account when communicating facts and arguments with various groups. This could be through using role models, new ways of framing problems, new rules and routines in our organisations and new types of scientific narratives that resonate with the intuitions and interests of more groups than our own.

There are no quick fixes, of course. But if climate change were reframed from the liberal/leftist moral perspective of the need for global fairness to conservative perspectives of respect for the authority of the fatherland, the sacredness of God’s creation and the individual’s right not to have their life project jeopardised by climate change, this might resonate better with conservatives.

If we take social factors into account, this would help us create new and more powerful ways to fight belief in conspiracy theories and fake news. I hope my approach will stimulate joint efforts of moving beyond disputes disguised as controversies over facts and into conversations about what often matters more deeply to us as social beings.

Do you feel it in your gut? Your body has a second brain inside your belly (UOL Saúde)

30 May 2017, 04:00

There is a second brain inside your belly. Getty Images/iStockphoto

You know that brain of yours up in your head? It is not quite as singular as we imagine, and it gets a great deal of help from a partner in controlling our emotions, our mood and our behaviour. That is because the human body has what many call a "second brain" – and in a rather special place: our belly.

The "second brain", as it is informally known, sits along the nine metres of your intestine and gathers millions of neurons. It is, in fact, part of something with a slightly more complicated name: the enteric nervous system.

Inside our gut there are between 200 and 600 million neurons. Getty Images

Functions that even the brain doubts

One of the main reasons it is considered a brain is the large and complex network of neurons in this system. To give you an idea, there are between 200 million and 600 million neurons there, according to researchers at the University of Melbourne, in Australia, and they work in concert with the main brain.

"It's as if we had the brain of a cat in our belly. It has 20 different types of neurons, the same diversity found in our big brain, where we have 100 billion neurons."

Heribert Watzke, food scientist, in a TED talk

This brain has many functions, which it carries out autonomously and in integration with the big brain. It used to be thought that the larger brain sent signals to command this other brain, but in fact it is the other way around: the brain in our gut sends signals along a great "highway" of neurons up to the head, which can accept the suggestions or not.

"The brain above can interfere with these signals, modifying or inhibiting them. There are hunger signals that our empty stomach sends to the brain. There are signals telling us to stop eating when we are full. If the hunger signal is ignored, it can lead to a disease such as anorexia, for example. More common is to keep eating even after the signals from our stomach say 'OK, stop, we have transferred enough energy'," Watzke adds.

The number of neurons is startling, but it makes sense if we think about the dangers of eating. Just like the skin, the gut has to immediately stop potentially dangerous invaders of our organism, such as bacteria and viruses.

This second brain can trigger diarrhoea or alert its "superior", which may decide to induce vomiting. It is teamwork, and of vital importance.

Much more than digestion

Of course, one of its main functions has to do with our digestion and excretion – as if the bigger brain did not want to "get its hands dirty", right? It even controls muscle contractions, the release of chemical substances and the like. The second brain is not used for functions such as thought, religion, philosophy or poetry, but it is tied to our mood.

The enteric nervous system helps us "feel" our inner world and its contents. According to Scientific American, it is likely that a good part of our emotions are influenced by the neurons in our gut.

Ever heard the expression "butterflies in the stomach"? That sensation is one example, a response to psychological stress.

That is why some research is even trying to treat depression by acting on the neurons of the gut. The enteric nervous system holds 95% of our serotonin (a substance known as one of those responsible for happiness). It may even play a role in autism.

There are also reports of other diseases that may be connected to this second brain. A 2010 study in Nature indicated that changes in how the system works may help prevent osteoporosis.

Life in the gut

One of the main functions of the "second brain" is defending our body, since it is one of the main controllers of our antibodies. A 2016 study supported by Fapesp showed how neurons communicate with the immune cells in the gut. There is even a "conversation" with microbes, since the nervous system helps dictate which of them may inhabit the intestine.

Research indicates that the second brain really is hugely important. In one study, newborn rats whose stomachs were exposed to an irritating chemical turned out to be more depressive and anxious than other rats, with the symptoms persisting long after the physical damage had healed. The same did not happen with other kinds of damage, such as skin irritation.

With all this in mind, I am sure you will look at your innards differently from now on, right? Think about it: the next time you are stressed or sad and reach for that fatty comfort food, it may not be your head's fault alone.

Excessive empathy can impair understanding of others (Science Daily)

Date: April 28, 2016

Source: Julius-Maximilians-Universität Würzburg, JMU

Summary: People who empathize easily with others do not necessarily understand them well. To the contrary: Excessive empathy can even impair understanding as a new study conducted by psychologists has established.

Excessive empathy can impair understanding as a new study conducted by psychologists from Würzburg and Leipzig has established. Credit: © ibreakstock / Fotolia

People who empathize easily with others do not necessarily understand them well. To the contrary: Excessive empathy can even impair understanding as a new study conducted by psychologists from Würzburg and Leipzig has established.

Imagine your best friend tells you that his girlfriend has just proposed “staying friends.” Now you have to accomplish two things: Firstly, you have to grasp that this nice sounding proposition actually means that she wants to break up with him and secondly, you should feel with your friend and comfort him.

Whether empathy and understanding other people’s mental states (mentalising) — i.e. the ability to understand what others know, plan and want — are interrelated has recently been examined by the psychologists Anne Böckler, Philipp Kanske, Mathis Trautwein, Franca Parianen-Lesemann and Tania Singer.

Anne Böckler has been a junior professor at the University of Würzburg’s Institute of Psychology since October 2015. Previously, the post-doc had worked in the Department of Social Neurosciences at the Max Planck Institute of Human Cognitive and Brain Sciences in Leipzig where she conducted the study together with her co-workers. In the scientific journal Social Cognitive and Affective Neuroscience, the scientists present the results of their work.

“Successful social interaction is based on our ability to feel with others and to understand their thoughts and intentions,” Anne Böckler explains. She says that it had been unclear previously whether and to what extent these two skills were interrelated — that is, whether people who empathise easily with others are also capable of grasping their thoughts and intentions. According to the junior professor, the scientists also looked into the question of whether the neuronal networks responsible for these abilities interact.

Answers can be gleaned from the study conducted by Anne Böckler, Philipp Kanske and their colleagues at the Max Planck Institute in Leipzig within the scope of a large-scale study led by Tania Singer which included some 200 participants. The study enabled the scientists to prove that people who tend to be empathic do not necessarily understand other people well at a cognitive level. Hence, social skills seem to be based on multiple abilities that are rather independent of one another.

The study also delivered new insight as to how the different networks in the brain are orchestrated, revealing that networks crucial for empathy and cognitive perspective-taking interact with one another. In highly emotional moments — for example when somebody talks about the death of a close person — activation of the insula, which forms part of the empathy-relevant network, can have an inhibiting effect in some people on brain areas important for taking someone else’s perspective. And this in turn can cause excessive empathy to impair social understanding.

The participants in the study watched a number of video sequences in which the narrator was more or less emotional. Afterwards, they had to rate how they felt and how much compassion they felt for the person in the film. Then they had to answer questions about the video — for example, what the persons could have thought, known or intended. Having thus identified persons with a high level of empathy, the psychologists looked at their share among the test participants who had had good or poor results in the test about cognitive perspective-taking — and vice versa.

Using functional magnetic resonance imaging, the scientists observed which areas of the brain were active at which times.

The authors believe that the results of this study are important both for neuroscience and for clinical applications. For example, they suggest that training aimed at improving social skills should address the willingness to empathise and the ability to understand others at the cognitive level and take their perspective selectively and separately from one another. The group in the Department of Social Neurosciences in Leipzig is currently working on exactly this topic within the scope of the ReSource project, namely how to specifically train different social skills.


Journal Reference:

  1. Artyom Zinchenko, Philipp Kanske, Christian Obermeier, Erich Schröger, Sonja A. Kotz. Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict. Social Cognitive and Affective Neuroscience, 2015; 10 (11): 1577. DOI: 10.1093/scan/nsv050

The Boy Whose Brain Could Unlock Autism (Matter)

 

Autism changed Henry Markram’s family. Now his Intense World theory could transform our understanding of the condition.


SOMETHING WAS WRONG with Kai Markram. At five days old, he seemed like an unusually alert baby, picking his head up and looking around long before his sisters had done. By the time he could walk, he was always in motion and required constant attention just to ensure his safety.

“He was super active, batteries running nonstop,” says his sister, Kali. And it wasn’t just boyish energy: When his parents tried to set limits, there were tantrums—not just the usual kicking and screaming, but biting and spitting, with a disproportionate and uncontrollable ferocity; and not just at age two, but at three, four, five and beyond. Kai was also socially odd: Sometimes he was withdrawn, but at other times he would dash up to strangers and hug them.

Things only got more bizarre over time. No one in the Markram family can forget the 1999 trip to India, when they joined a crowd gathered around a snake charmer. Without warning, Kai, who was five at the time, darted out and tapped the deadly cobra on its head.

Coping with such a child would be difficult for any parent, but it was especially frustrating for his father, one of the world’s leading neuroscientists. Henry Markram is the man behind Europe’s $1.3 billion Human Brain Project, a gargantuan research endeavor to build a supercomputer model of the brain. Markram knows as much about the inner workings of our brains as anyone on the planet, yet he felt powerless to tackle Kai’s problems.

“As a father and a neuroscientist, you realize that you just don’t know what to do,” he says. In fact, Kai’s behavior—which was eventually diagnosed as autism—has transformed his father’s career, and helped him build a radical new theory of autism: one that upends the conventional wisdom. And, ironically, his sideline may pay off long before his brain model is even completed.

IMAGINE BEING BORN into a world of bewildering, inescapable sensory overload, like a visitor from a much darker, calmer, quieter planet. Your mother’s eyes: a strobe light. Your father’s voice: a growling jackhammer. That cute little onesie everyone thinks is so soft? Sandpaper with diamond grit. And what about all that cooing and affection? A barrage of chaotic, indecipherable input, a cacophony of raw, unfilterable data.

Just to survive, you’d need to be excellent at detecting any pattern you could find in the frightful and oppressive noise. To stay sane, you’d have to control as much as possible, developing a rigid focus on detail, routine and repetition. Systems in which specific inputs produce predictable outputs would be far more attractive than human beings, with their mystifying and inconsistent demands and their haphazard behavior.

This, Markram and his wife, Kamila, argue, is what it’s like to be autistic.

They call it the “intense world” syndrome.

The behavior that results is not due to cognitive deficits—the prevailing view in autism research circles today—but the opposite, they say. Rather than being oblivious, autistic people take in too much and learn too fast. While they may appear bereft of emotion, the Markrams insist they are actually overwhelmed not only by their own emotions, but by the emotions of others.

Consequently, the brain architecture of autism is not just defined by its weaknesses, but also by its inherent strengths. The developmental disorder now believed to affect around 1 percent of the population is not characterized by lack of empathy, the Markrams claim. Social difficulties and odd behavior result from trying to cope with a world that’s just too much.

After years of research, the couple came up with their label for the theory during a visit to the remote area where Henry Markram was born, in the South African part of the Kalahari desert. He says “intense world” was Kamila’s phrase; she says she can’t recall who hit upon it. But he remembers sitting in the rust-colored dunes, watching the unusual swaying yellow grasses while contemplating what it must be like to be inescapably flooded by sensation and emotion.

That, he thought, is what Kai experiences. The more he investigated the idea of autism not as a deficit of memory, emotion and sensation, but an excess, the more he realized how much he himself had in common with his seemingly alien son.


HENRY MARKRAM IS TALL, with intense blue eyes, sandy hair and the air of unmistakable authority that goes with the job of running a large, ambitious, well-funded research project. It’s hard to see what he might have in common with a troubled, autistic child. He rises most days at 4 a.m. and works for a few hours in his family’s spacious apartment in Lausanne before heading to the institute, where the Human Brain Project is based. “He sleeps about four or five hours,” says Kamila. “That’s perfect for him.”

As a small child, Markram says, he “wanted to know everything.” But his first few years of high school were mostly spent “at the bottom of the F class.” A Latin teacher inspired him to pay more attention to his studies, and when a beloved uncle became profoundly depressed and died young—he was only in his 30s, but “just went downhill and gave up”—Markram turned a corner. He’d recently been given an assignment about brain chemistry, which got him thinking. “If chemicals and the structure of the brain can change and then I change, who am I? It’s a profound question. So I went to medical school and wanted to become a psychiatrist.”

Markram attended the University of Cape Town, but in his fourth year of medical school, he took a fellowship in Israel. “It was like heaven,” he says, “It was all the toys that I ever could dream of to investigate the brain.” He never returned to med school, and married his first wife, Anat, an Israeli, when he was 26. Soon, they had their first daughter, Linoy, now 24, then a second, Kali, now 23. Kai came four years afterwards.

During graduate research at the Weizmann Institute in Israel, Markram made his first important discovery, elucidating a key relationship between two neurotransmitters involved in learning, acetylcholine and glutamate. The work was important and impressive—especially so early in a scientist’s career—but it was what he did next that really made his name.

During a postdoc with Nobel laureate Bert Sakmann at Germany’s Max Planck Institute, Markram showed how brain cells “fire together, wire together.” That had been a basic tenet of neuroscience since the 1940s—but no one had been able to figure out how the process actually worked.

By studying the precise timing of electrical signaling between neurons, Markram demonstrated that firing in specific patterns increases the strength of the synapses linking cells, while missing the beat weakens them. This simple mechanism allows the brain to learn, forging connections both literally and figuratively between various experiences and sensations—and between cause and effect.
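The timing dependence Markram measured is now commonly described as spike-timing-dependent plasticity (STDP). Below is a minimal sketch of the pair-based form of the rule, with illustrative constants rather than values from his experiments:

```python
import numpy as np

def stdp_weight_change(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based spike-timing-dependent plasticity (STDP) rule.

    delta_t_ms = t_post - t_pre. A presynaptic spike that precedes the
    postsynaptic one (delta_t_ms > 0) strengthens the synapse; one that
    arrives after it (delta_t_ms < 0) weakens it. The constants are
    illustrative, not measured values.
    """
    if delta_t_ms > 0:          # pre fires before post: potentiation
        return a_plus * np.exp(-delta_t_ms / tau_ms)
    if delta_t_ms < 0:          # pre fires after post: depression
        return -a_minus * np.exp(delta_t_ms / tau_ms)
    return 0.0

# A spike arriving 5 ms before the postsynaptic cell fires strengthens the
# connection; one arriving 5 ms too late weakens it.
print(stdp_weight_change(+5.0))   # positive weight change
print(stdp_weight_change(-5.0))   # negative weight change
```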

Measuring these fine temporal distinctions was also a technical triumph. Sakmann won his 1991 Nobel for developing the required “patch clamp” technique, which measures the tiny changes in electrical activity inside nerve cells. To patch just one neuron, you first harvest a sliver of brain, about 1/3 of a millimeter thick and containing around 6 million neurons, typically from a freshly guillotined rat.

To keep the tissue alive, you bubble it in oxygen, and bathe the slice of brain in a laboratory substitute for cerebrospinal fluid. Under a microscope, using a minuscule glass pipette, you carefully pierce a single cell. The technique is similar to injecting a sperm into an egg for in vitro fertilization—except that neurons are hundreds of times smaller than eggs.

It requires steady hands and exquisite attention to detail. Markram’s ultimate innovation was to build a machine that could study 12 such carefully prepared cells simultaneously, measuring their electrical and chemical interactions. Researchers who have done it say you can sometimes go a whole day without getting one right—but Markram became a master.

Still, there was a problem. He seemed to go from one career peak to another—a Fulbright at the National Institutes of Health, tenure at Weizmann, publication in the most prestigious journals—but at the same time it was becoming clear that something was not right in his youngest child’s head. He studied the brain all day, but couldn’t figure out how to help Kai learn and cope. As he told a New York Times reporter earlier this year, “You know how powerless you feel. You have this child with autism and you, even as a neuroscientist, really don’t know what to do.”


AT FIRST, MARKRAM THOUGHT Kai had attention deficit/hyperactivity disorder (ADHD): Once Kai could move, he never wanted to be still. “He was running around, very difficult to control,” Markram says. As Kai grew, however, he began melting down frequently, often for no apparent reason. “He became more particular, and he started to become less hyperactive but more behaviorally difficult,” Markram says. “Situations were very unpredictable. He would have tantrums. He would be very resistant to learning and to any kind of instruction.”

Preventing Kai from harming himself by running into the street or following other capricious impulses was a constant challenge. Even just trying to go to the movies became an ordeal: Kai would refuse to enter the cinema or hold his hands tightly over his ears.

However, Kai also loved to hug people, even strangers, which is one reason it took years to get a diagnosis. That warmth made many experts rule out autism. Only after multiple evaluations was Kai finally diagnosed with Asperger syndrome, a type of autism that includes social difficulties and repetitive behaviors, but not lack of speech or profound intellectual disability.

“We went all over the world and had him tested, and everybody had a different interpretation,” Markram says. For a scientist who prizes rigor, this was infuriating. He’d left medical school to pursue neuroscience because he disliked psychiatry’s vagueness. “I was very disappointed in how psychiatry operates,” he says.

Over time, trying to understand Kai became Markram’s obsession.

It drove what he calls his “impatience” to model the brain: He felt neuroscience was too piecemeal and could not progress without bringing more data together. “I wasn’t satisfied with understanding fragments of things in the brain; we have to understand everything,” he says. “Every molecule, every gene, every cell. You can’t leave anything out.”

This impatience also made him decide to study autism, beginning by reading every study and book he could get his hands on. At the time, in the 1990s, the condition was getting increased attention. The diagnosis had only been introduced into the psychiatric bible, then the DSM III, in 1980. The 1988 Dustin Hoffman film Rain Man, about an autistic savant, brought the idea that autism was both a disability and a source of quirky intelligence into the popular imagination.

The dark days of the mid–20th century, when autism was thought to be caused by unloving “refrigerator mothers” who icily rejected their infants, were long past. However, while experts now agree that the condition is neurological, its causes remain unknown.

The most prominent theory suggests that autism stems from problems with the brain’s social regions, resulting in a deficit of empathy. This “theory of mind” concept was developed by Uta Frith, Alan Leslie, and Simon Baron-Cohen in the 1980s. They found that autistic children are late to develop the ability to distinguish between what they know themselves and what others know—something that other children learn early on.

In a now famous experiment, children watched two puppets, “Sally” and “Anne.” Sally has a marble, which she places in a basket and then leaves. While she’s gone, Anne moves Sally’s marble into a box. By age four or five, normal children can predict that Sally will look for the marble in the basket first because she doesn’t know that Anne moved it. But until they are much older, most autistic children say that Sally will look in the box because they know it’s there. While typical children automatically adopt Sally’s point of view and know she was out of the room when Anne hid the marble, autistic children have much more difficulty thinking this way.

The researchers linked this “mind blindness”—a failure of perspective-taking—to their observation that autistic children don’t engage in make-believe. Instead of pretending together, autistic children focus on objects or systems—spinning tops, arranging blocks, memorizing symbols, or becoming obsessively involved with mechanical items like trains and computers.

This apparent social indifference was viewed as central to the condition. Unfortunately, the theory also seemed to imply that autistic people are uncaring because they don’t easily recognize that other people exist as intentional agents who can be loved, thwarted or hurt. But while the Sally-Anne experiment shows that autistic people have difficulty knowing that other people have different perspectives—what researchers call cognitive empathy or “theory of mind”—it doesn’t show that they don’t care when someone is hurt or feeling pain, whether emotional or physical. In terms of caring—technically called affective empathy—autistic people aren’t necessarily impaired.

Sadly, however, the two different kinds of empathy are combined in one English word. And so, since the 1980s, this idea that autistic people “lack empathy” has taken hold.

“When we looked at the autism field we couldn’t believe it,” Markram says. “Everybody was looking at it as if they have no empathy, no theory of mind. And actually Kai, as awkward as he was, saw through you. He had a much deeper understanding of what really was your intention.” And he wanted social contact.

 The obvious thought was: Maybe Kai’s not really autistic? But by the time Markram was fully up to speed in the literature, he was convinced that Kai had been correctly diagnosed. He’d learned enough to know that the rest of his son’s behavior was too classically autistic to be dismissed as a misdiagnosis, and there was no alternative condition that explained as much of his behavior and tendencies. And accounts by unquestionably autistic people, like bestselling memoirist and animal scientist Temple Grandin, raised similar challenges to the notion that autistic people could never really see beyond themselves.

Markram began to do autism work himself as visiting professor at the University of California, San Francisco in 1999. Colleague Michael Merzenich, a neuroscientist, proposed that autism is caused by an imbalance between inhibitory and excitatory neurons. A failure of inhibitions that tamp down impulsive actions might explain behavior like Kai’s sudden move to pat the cobra. Markram started his research there.


MARKRAM MET HIS second wife, Kamila Senderek, at a neuroscience conference in Austria in 2000. He was already separated from Anat. “It was love at first sight,” Kamila says.

Her parents left communist Poland for West Germany when she was five. When she met Markram, she was pursuing a master’s in neuroscience at the Max Planck Institute. When Markram moved to Lausanne to start the Human Brain Project, she began studying there as well.

Tall like her husband, with straight blonde hair and green eyes, Kamila wears a navy twinset and jeans when we meet in her open-plan office overlooking Lake Geneva. There, in addition to autism research, she runs the world’s fourth largest open-access scientific publishing firm, Frontiers, with a network of over 35,000 scientists serving as editors and reviewers. She laughs when I observe a lizard tattoo on her ankle, a remnant of an adolescent infatuation with The Doors.

When asked whether she had ever worried about marrying a man whose child had severe behavioral problems, she responds as though the question never occurred to her. “I knew about the challenges with Kai,” she says, “Back then, he was quite impulsive and very difficult to steer.”

The first time they spent a day together, Kai was seven or eight. “I probably had some blue marks and bites on my arms because he was really quite something. He would just go off and do something dangerous, so obviously you would have to get in rescue mode,” she says, noting that he’d sometimes walk directly into traffic. “It was difficult to manage the behavior,” she shrugs, “But if you were nice with him then he was usually nice with you as well.”

“Kamila was amazing with Kai,” says Markram, “She was much more systematic and could lay out clear rules. She helped him a lot. We never had that thing that you see in the movies where they don’t like their stepmom.”

At the Swiss Federal Institute of Technology in Lausanne (EPFL), the couple soon began collaborating on autism research. “Kamila and I spoke about it a lot,” Markram says, adding that they were both “frustrated” by the state of the science and at not being able to help more. Their now-shared parental interest fused with their scientific drives.

They started by studying the brain at the circuitry level. Markram assigned a graduate student, Tania Rinaldi Barkat, to look for the best animal model, since such research cannot be done on humans.

Barkat happened to drop by Kamila’s office while I was there, a decade after she had moved on to other research. She greeted her former colleagues enthusiastically.

She started her graduate work with the Markrams by searching the literature for prospective animal models. They agreed that the one most like human autism involved rats prenatally exposed to an epilepsy drug called valproic acid (VPA; brand name, Depakote). Like other “autistic” rats, VPA rats show aberrant social behavior and increased repetitive behaviors like excessive self-grooming.

But more significant is that when pregnant women take high doses of VPA, which is sometimes necessary for seizure control, studies have found that the risk of autism in their children increases sevenfold. One 2005 study found that close to 9 percent of these children have autism.

Because VPA has a link to human autism, it seemed plausible that its cellular effects in animals would be similar. A neuroscientist who has studied VPA rats once told me, “I see it not as a model, but as a recapitulation of the disease in other species.”

Barkat got to work. Earlier research showed that the timing and dose of exposure was critical: Different timing could produce opposite symptoms, and large doses sometimes caused physical deformities. The “best” time to cause autistic symptoms in rats is embryonic day 12, so that’s when Barkat dosed them.

At first, the work was exasperating. For two years, Barkat studied inhibitory neurons from the VPA rat cortex, using the same laborious patch-clamping technique perfected by Markram years earlier. If these cells were less active, that would confirm the imbalance that Merzenich had theorized.

She went through the repetitious preparation, making delicate patches to study inhibitory networks. But after two years of this technically demanding, sometimes tedious, and time-consuming work, Barkat had nothing to show for it.

“I just found no difference at all,” she told me, “It looked completely normal.” She continued to patch cell after cell, going through the exacting procedure endlessly—but still saw no abnormalities. At least she was becoming proficient at the technique, she told herself.

Markram was ready to give up, but Barkat demurred, saying she would like to shift her focus from inhibitory to excitatory VPA cell networks. It was there that she struck gold.

 “There was a difference in the excitability of the whole network,” she says, reliving her enthusiasm. The networked VPA cells responded nearly twice as strongly as normal—and they were hyper-connected. If a normal cell had connections to ten other cells, a VPA cell connected with twenty. Nor were they under-responsive. Instead, they were hyperactive, which isn’t necessarily a defect: A more responsive, better-connected network learns faster.

But what did this mean for autistic people? While Barkat was investigating the cortex, Kamila Markram had been observing the rats’ behavior, noting high levels of anxiety as compared to normal rats. “It was pretty much a gold mine then,” Markram says. The difference was striking. “You could basically see it with the eye. The VPAs were different and they behaved differently,” Markram says. They were quicker to get frightened, and faster at learning what to fear, but slower to discover that a once-threatening situation was now safe.

While ordinary rats get scared of an electrified grid where they are shocked when a particular tone sounds, VPA rats come to fear not just that tone, but the whole grid and everything connected with it—like colors, smells, and other clearly distinguishable beeps.

“The fear conditioning was really hugely amplified,” Markram says. “We then looked at the cell response in the amygdala and again they were hyper-reactive, so it made a beautiful story.”


THE MARKRAMS RECOGNIZED the significance of their results. Hyper-responsive sensory, memory and emotional systems might explain both autistic talents and autistic handicaps, they realized. After all, the problem with VPA rats isn’t that they can’t learn—it’s that they learn too quickly, with too much fear, and irreversibly.

They thought back to Kai’s experiences: how he used to cover his ears and resist going to the movies, hating the loud sounds; his limited diet and apparent terror of trying new foods.

“He remembers exactly where he sat at exactly what restaurant one time when he tried for hours to get himself to eat a salad,” Kamila says, recalling that she’d promised him something he’d really wanted if he did so. Still, he couldn’t make himself try even the smallest piece of lettuce. That was clearly overgeneralization of fear.

The Markrams reconsidered Kai’s meltdowns, too, wondering if they’d been prompted by overwhelming experiences. They saw that identifying Kai’s specific sensitivities preemptively might prevent tantrums by allowing him to leave upsetting situations or by mitigating his distress before it became intolerable. The idea of an intense world had immediate practical implications.

 The amygdala.

The VPA data also suggested that autism isn’t limited to a single brain network. In VPA rat brains, both the amygdala and the cortex had proved hyper-responsive to external stimuli. So maybe, the Markrams decided, autistic social difficulties aren’t caused by social-processing defects; perhaps they are the result of total information overload.


CONSIDER WHAT IT MIGHT FEEL like to be a baby in a world of relentless and unpredictable sensation. An overwhelmed infant might, not surprisingly, attempt to escape. Kamila compares it to being sleepless, jetlagged, and hung over, all at once. “If you don’t sleep for a night or two, everything hurts. The lights hurt. The noises hurt. You withdraw,” she says.

Unlike adults, however, babies can’t flee. All they can do is cry and rock, and, later, try to avoid touch, eye contact, and other powerful experiences. Autistic children might revel in patterns and predictability just to make sense of the chaos.

At the same time, if infants withdraw to try to cope, they will miss what’s known as a “sensitive period”—a developmental phase when the brain is particularly responsive to, and rapidly assimilates, certain kinds of external stimulation. That can cause lifelong problems.

Language learning is a classic example: If babies aren’t exposed to speech during their first three years, their verbal abilities can be permanently stunted. Historically, this created a spurious link between deafness and intellectual disability: Before deaf babies were taught sign language at a young age, they would often have lasting language deficits. Their problem wasn’t defective “language areas,” though—it was that they had been denied linguistic stimuli at a critical time. (Incidentally, the same phenomenon accounts for why learning a second language is easy for small children and hard for virtually everyone else.)

This has profound implications for autism. If autistic babies tune out when overwhelmed, their social and language difficulties may arise not from damaged brain regions, but because critical data is drowned out by noise or missed due to attempts to escape at a time when the brain actually needs this input.

The intense world could also account for the tragic similarities between autistic children and abused and neglected infants. Severely maltreated children often rock, avoid eye contact, and have social problems—just like autistic children. These parallels led to decades of blaming the parents of autistic children, including the infamous “refrigerator mother.” But if those behaviors are coping mechanisms, autistic people might engage in them not because of maltreatment, but because ordinary experience is overwhelming or even traumatic.

The Markrams teased out further implications: Social problems may not be a defining or even fixed feature of autism. Early intervention to reduce or moderate the intensity of an autistic child’s environment might allow their talents to be protected while their autism-related disabilities are mitigated or, possibly, avoided.

The VPA model also captures other paradoxical autistic traits. For example, while oversensitivities are most common, autistic people are also frequently under-reactive to pain. The same is true of VPA rats. In addition, one of the most consistent findings in autism is abnormal brain growth, particularly in the cortex. There, studies find an excess of circuits called mini-columns, which can be seen as the brain’s microprocessors. VPA rats also exhibit this excess.

Moreover, extra minicolumns have been found in autopsies of scientists who were not known to be autistic, suggesting that this brain organization can appear without social problems and alongside exceptional intelligence.

Like a high-performance engine, the autistic brain may only work properly under specific conditions. But under those conditions, such machines can vastly outperform others—like a Ferrari compared to a Ford.


THE MARKRAMS’ FIRST PUBLICATION of their intense world research appeared in 2007: a paper on the VPA rat in the Proceedings of the National Academy of Sciences. This was followed by an overview in Frontiers in Neuroscience. The next year, at the Society for Neuroscience (SFN), the field’s biggest meeting, a symposium was held on the topic. In 2010, they updated and expanded their ideas in a second Frontiers paper.

Since then, more than three dozen papers have been published by other groups on VPA rodents, replicating and extending the Markrams’ findings. At this year’s SFN, at least five new studies were presented on VPA autism models. The sensory aspects of autism have long been neglected, but the intense world and VPA rats are bringing them to the fore.

Nevertheless, reaction from colleagues in the field has been cautious. One exception is Laurent Mottron, professor of psychiatry and head of autism research at the University of Montreal. He was the first to highlight perceptual differences as critical in autism—even before the Markrams. Only a minority of researchers even studied sensory issues before him. Almost everyone else focused on social problems.

But when Mottron first proposed that autism is linked with what he calls “enhanced perceptual functioning,” he, like most experts, viewed this as the consequence of a deficit. The idea was that the apparently superior perception exhibited by some autistic people is caused by problems with higher level brain functioning—and it had historically been dismissed as mere “splinter skills,” not a sign of genuine intelligence. Autistic savants had earlier been known as “idiot savants,” the implication being that, unlike “real” geniuses, they didn’t have any creative control of their exceptional minds. Mottron described it this way in a review paper: “[A]utistics were not displaying atypical perceptual strengths but a failure to form global or high level representations.”

 However, Mottron’s research led him to see this view as incorrect. His own and other studies showed superior performance by autistic people not only in “low level” sensory tasks, like better detection of musical pitch and greater ability to perceive certain visual information, but also in cognitive tasks like pattern finding in visual IQ tests.

In fact, it has long been clear that detecting and manipulating complex systems is an autistic strength—so much so that the autistic genius has become a Silicon Valley stereotype. In May, for example, the German software firm SAP announced plans to hire 650 autistic people because of their exceptional abilities. Mathematics, musical virtuosity, and scientific achievement all require understanding and playing with systems, patterns, and structure. Both autistic people and their family members are over-represented in these fields, which suggests genetic influences.

“Our points of view are in different areas [of research,] but we arrive at ideas that are really consistent,” says Mottron of the Markrams and their intense world theory. (He also notes that while they study cell physiology, he images actual human brains.)

Because Henry Markram came from outside the field and has an autistic son, Mottron adds, “He could have an original point of view and not be influenced by all the clichés,” particularly those that saw talents as defects. “I’m very much in sympathy with what they do,” he says, although he is not convinced that they have proven all the details.

Mottron’s support is unsurprising, of course, because the intense world dovetails with his own findings. But even one of the creators of the “theory of mind” concept finds much of it plausible.

Simon Baron-Cohen, who directs the Autism Research Centre at Cambridge University, told me, “I am open to the idea that the social deficits in autism—like problems with the cognitive aspects of empathy, which is also known as ‘theory of mind’—may be upstream from a more basic sensory abnormality.” In other words, the Markrams’ physiological model could be the cause, and the social deficits he studies, the effect. He adds that the VPA rat is an “interesting” model. However, he also notes that most autism is not caused by VPA and that it’s possible that sensory and social defects co-occur, rather than one causing the other.

His collaborator, Uta Frith, professor of cognitive development at University College London, is not convinced. “It just doesn’t do it for me,” she says of the intense world theory. “I don’t want to say it’s rubbish,” she says, “but I think they try to explain too much.”


AMONG AFFECTED FAMILIES, by contrast, the response has often been rapturous. “There are elements of the intense world theory that better match up with autistic experience than most of the previously discussed theories,” says Ari Ne’eman, president of the Autistic Self Advocacy Network, “The fact that there’s more emphasis on sensory issues is very true to life.” Ne’eman and other autistic people fought to get sensory problems added to the diagnosis in DSM-5 — the first time the symptoms have been so recognized, and another sign of the growing receptiveness to theories like intense world.

Steve Silberman, who is writing a history of autism titled NeuroTribes: Thinking Smarter About People Who Think Differently, says, “We had 70 years of autism research [based] on the notion that autistic people have brain deficits. Instead, the intense world postulates that autistic people feel too much and sense too much. That’s valuable, because I think the deficit model did tremendous injury to autistic people and their families, and also misled science.”

Priscilla Gilman, the mother of an autistic child, is also enthusiastic. Her memoir, The Anti-Romantic Child, describes her son’s diagnostic odyssey. Before Benjamin was in preschool, Gilman took him to the Yale Child Study Center for a full evaluation. At the time, he did not display any classic signs of autism, but he did seem to be a candidate for hyperlexia—at age two-and-a-half, he could read aloud from his mother’s doctoral dissertation with perfect intonation and fluency. Like other autistic talents, hyperlexia is often dismissed as a “splinter” strength.

At that time, Yale experts ruled autism out, telling Gilman that Benjamin “is not a candidate because he is too ‘warm’ and too ‘related,’” she recalls. Kai Markram’s hugs had similarly been seen as disqualifying. At twelve years of age, however, Benjamin was officially diagnosed with Autism Spectrum Disorder.

According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.

Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.

“The overwhelmingness of understanding how people feel can lead to either what is perceived as inappropriate emotional response, or to what is perceived as shutting down, which people see as lack of empathy,” says Emily Willingham. Willingham is a biologist and the mother of an autistic child; she also suspects that she herself has Asperger syndrome. But rather than being unemotional, she says, autistic people are “taking it all in like a tsunami of emotion that they feel on behalf of others. Going internal is protective.”

At least one study supports this idea, showing that while autistic people score lower on cognitive tests of perspective-taking—recall Anne, Sally, and the missing marble—they are more affected than typical folks by other people’s feelings. “I have three children, and my autistic child is my most empathetic,” Priscilla Gilman says, adding that when her mother first read about the intense world, she said, “This explains Benjamin.”

Benjamin’s hypersensitivities are also clearly linked to his superior perception. “He’ll sometimes say, ‘Mommy, you’re speaking in the key of D, could you please speak in the key of C? It’s easier for me to understand you and pay attention.’”

Because he has musical training and a high IQ, Benjamin can use his own sense of “absolute pitch”—the ability to name a note without hearing another for comparison—to define the problem he’s having. But many autistic people can’t verbalize their needs like this. Kai, too, is highly sensitive to vocal intonation, preferring his favorite teacher because, he explains, she “speaks soft,” even when she’s displeased. But even at 19, he isn’t able to articulate the specifics any better than that.


ON A RECENT VISIT to Lausanne, Kai wears a sky blue hoodie, his gray Chuck Taylor–style sneakers carefully unlaced at the top. “My rapper sneakers,” he says, smiling. He speaks Hebrew and English and lives with his mother in Israel, attending a school for people with learning disabilities near Rehovot. His manner is unselfconscious, though sometimes he scowls abruptly without explanation. But when he speaks, it is obvious that he wants to connect, even when he can’t answer a question. Asked if he thinks he sees things differently than others do, he says, “I feel them different.”

He waits in the Markrams’ living room as they prepare to take him out for dinner. Henry’s aunt and uncle are here, too. They’ve been living with the family to help care for its newest additions: nine-month-old Charlotte and Olivia, who is one-and-a-half years old.

“It’s our big patchwork family,” says Kamila, noting that when they visit Israel, they typically stay with Henry’s ex-wife’s family, and that she stays with them in Lausanne. They all travel constantly, which has created a few problems now and then. None of them will ever forget a tantrum Kai had when he was younger, which got him barred from a KLM flight. A delay upset him so much that he kicked, screamed, and spat.

Now, however, he rarely melts down. A combination of family and school support, an antipsychotic medication that he’s been taking recently, and increased understanding of his sensitivities has mitigated the disabilities Kai associated with his autism.

 “I was a bad boy. I always was hitting and doing a lot of trouble,” Kai says of his past. “I was really bad because I didn’t know what to do. But I grew up.” His relatives nod in agreement. Kai has made tremendous strides, though his parents still think that his brain has far greater capacity than is evident in his speech and schoolwork.

As the Markrams see it, if autism results from a hyper-responsive brain, the most sensitive brains are actually the most likely to be disabled by our intense world. But if autistic people can learn to filter the blizzard of data, especially early in life, then those most vulnerable to the most severe autism might prove to be the most gifted of all.

Markram sees this in Kai. “It’s not a mental retardation,” he says, “He’s handicapped, absolutely, but something is going crazy in his brain. It’s a hyper disorder. It’s like he’s got an amplification of many of my quirks.”

One of these involves an insistence on timeliness. “If I say that something has to happen,” he says, “I can become quite difficult. It has to happen at that time.”

He adds, “For me it’s an asset, because it means that I deliver. If I say I’ll do something, I do it.” For Kai, however, anticipation and planning run wild. When he travels, he obsesses about every move, over and over, long in advance. “He will sit there and plan, okay, when he’s going to get up. He will execute. You know he will get on that plane come hell or high water,” Markram says. “But he actually loses the entire day. It’s like an extreme version of my quirks, where for me they are an asset and for him they become a handicap.”

If this is true, autistic people have incredible unrealized potential. If Kai’s brain is even more finely tuned than his father’s, it might give him the capacity to be even more brilliant. Consider Markram’s visual skills. Like Temple Grandin, whose first autism memoir was titled Thinking In Pictures, he has stunning visual abilities. “I see what I think,” he says, adding that when he considers a scientific or mathematical problem, “I can see how things are supposed to look. If it’s not there, I can actually simulate it forward in time.”

At the offices of Markram’s Human Brain Project, visitors are given a taste of what it might feel like to inhabit such a mind. In a small screening room furnished with sapphire-colored, tulip-shaped chairs, I’m handed 3-D glasses. The instant the lights dim, I’m zooming through a brightly colored forest of neurons so detailed and thick that they appear to be velvety, inviting to the touch.

The simulation feels so real and enveloping that it is hard to pay attention to the narration, which includes mind-blowing facts about the project. But it is also dizzying, overwhelming. If this is just a smidgen of what ordinary life is like for Kai, it’s easier to see how hard his early life must have been. That’s the paradox about autism and empathy. The problem may not be that autistic people can’t understand typical people’s points of view—but that typical people can’t imagine autism.

Critics of the intense world theory are dismayed and put off by this idea of hidden talent in the most severely disabled. They see it as wishful thinking, offering false hope to parents who want to see their children in the best light and to autistic people who want to fight the stigma of autism. In some types of autism, they say, intellectual disability is just that.

“The maxim is, ‘If you’ve seen one person with autism, you’ve seen one person with autism,’” says Matthew Belmonte, an autism researcher affiliated with the Groden Center in Rhode Island. The assumption should be that autistic people have intelligence that may not be easily testable, he says, but it can still be highly variable.

He adds, “Biologically, autism is not a unitary condition. Asking at the biological level ‘What causes autism?’ makes about as much sense as asking a mechanic ‘Why does my car not start?’ There are many possible reasons.” Belmonte believes that the intense world may account for some forms of autism, but not others.

Kamila, however, insists that the data suggests that the most disabled are also the most gifted. “If you look from the physiological or connectivity point of view, those brains are the most amplified.”

The question, then, is how to unleash that potential.

“I hope we give hope to others,” she says, while acknowledging that intense-world adherents don’t yet know how or even if the right early intervention can reduce disability.

The secret-ability idea also worries autistic leaders like Ne’eman, who fear that it contains the seeds of a different stigma. “We agree that autistic people do have a number of cognitive advantages and it’s valuable to do research on that,” he says. But, he stresses, “People have worth regardless of whether they have special abilities. If society accepts us only because we can do cool things every so often, we’re not exactly accepted.”


The MARKRAMS ARE NOW EXPLORING whether providing a calm, predictable early environment—one aimed at reducing overload and surprise—can help VPA rats, soothing social difficulties while nurturing enhanced learning. New research suggests that autism can be detected in two-month-old babies, so the treatment implications are tantalizing.

So far, Kamila says, the data looks promising. Unexpected novelty seems to make the rats worse—while the patterned, repetitive, and safe introduction of new material seems to cause improvement.

In humans, the idea would be to keep the brain’s circuitry calm when it is most vulnerable, during those critical periods in infancy and toddlerhood. “With this intensity, the circuits are going to lock down and become rigid,” says Markram. “You want to avoid that, because to undo it is very difficult.”

For autistic children, intervening early might mean improvements in learning language and socializing. While it’s already clear that early interventions can reduce autistic disability, they typically don’t integrate intense-world insights. The behavioral approach that is most popular—Applied Behavior Analysis—rewards compliance with “normal” behavior, rather than seeking to understand what drives autistic actions and attacking the disabilities at their inception.

Research shows, in fact, that everyone learns best when receiving just the right dose of challenge—not so little that they’re bored, not so much that they’re overwhelmed; not in the comfort zone, and not in the panic zone, either. That sweet spot may be different in autism. But according to the Markrams, it is different in degree, not kind.

Markram suggests providing a gentle, predictable environment. “It’s almost like the fourth trimester,” he says.

“To prevent the circuits from becoming locked into fearful states or behavioral patterns, you need a filtered environment from as early as possible,” Markram explains. “I think that if you can avoid that, then those circuits would get locked into having the flexibility that comes with security.”

Creating this special cocoon could involve using things like headphones to block excess noise, gradually increasing exposure and, as much as possible, sticking with routines and avoiding surprise. If parents and educators get it right, he concludes, “I think they’ll be geniuses.”

IN SCIENCE, CONFIRMATION BIAS is always the unseen enemy. Having a dog in the fight means you may bend the rules to favor it, whether deliberately or simply because we’re wired to ignore inconvenient truths. In fact, the entire scientific method can be seen as a series of attempts to drive out bias: The double-blind controlled trial exists because both patients and doctors tend to see what they want to see—improvement.

At the same time, the best scientists are driven by passions that cannot be anything but deeply personal. The Markrams are open about the fact that their subjective experience with Kai influences their work.

But that doesn’t mean that they disregard the scientific process. The couple could easily deal with many of the intense world critiques by simply arguing that their theory only applies to some cases of autism. That would make it much more difficult to disprove. But that’s not the route they’ve chosen to take. In their 2010 paper, they list a series of possible findings that would invalidate the intense world, including discovering human cases where the relevant brain circuits are not hyper-reactive, or discovering that such excessive responsiveness doesn’t lead to deficiencies in memory, perception, or emotion. So far, however, the known data has been supportive.

But whether or not the intense world accounts for all or even most cases of autism, the theory already presents a major challenge to the idea that the condition is primarily a lack of empathy, or a social disorder. Intense world theory confronts the stigmatizing stereotypes that have framed autistic strengths as defects, or at least as less significant because of associated weaknesses.

And Henry Markram, by trying to take his son Kai’s perspective—and even by identifying so closely with it—has already done autistic people a great service, demonstrating the kind of compassion that people on the spectrum are supposed to lack. If the intense world does prove correct, we’ll all have to think about autism, and even about typical people’s reactions to the data overload endemic in modern life, very differently.

From left: Kamila, Henry, Kai, and Anat


This story was written by Maia Szalavitz, edited by Mark Horowitz, fact-checked by Kyla Jones, and copy-edited by Tim Heffernan, with photography by Darrin Vanselow and an audiobook narrated by Jack Stewart.



Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON


A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.
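A toy numerical sketch of that distinction, using invented fixation times rather than the study’s data: the per-word averages differ between the two hypothetical readers, while the sentence totals come out roughly the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented fixation durations (milliseconds per word) for one sentence, read
# by a hypothetical Finnish and a hypothetical English participant. Finnish
# packs the same content into fewer, longer words; English uses more, shorter
# ones. The numbers illustrate the pattern, not the study's measurements.
finnish_word_fixations = rng.normal(260, 40, size=12)
english_word_fixations = rng.normal(210, 35, size=15)

# Word-level measures can differ substantially between the two readers...
print("Mean fixation per word (Finnish):", round(finnish_word_fixations.mean()))
print("Mean fixation per word (English):", round(english_word_fixations.mean()))

# ...while the sentence-level total (the quantity the study found to be
# equivalent across languages) is the sum over every word in the sentence.
print("Total sentence reading time (Finnish):", round(finnish_word_fixations.sum()))
print("Total sentence reading time (English):", round(english_word_fixations.sum()))
```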

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.

###

Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative languages tend to express concepts in complex words consisting of many sub-units strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation, (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and dependence on language may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
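A minimal sketch of that kind of network construction, assuming a toy set of back-translation groupings (the data are invented; the study used translations through 81 languages and a more careful weighting scheme):

```python
from collections import Counter
from itertools import combinations

# Toy back-translation data: each set lists English concepts that ended up
# sharing a single word in some intermediate language. (Invented examples;
# the study translated concepts through 81 languages.)
polysemy_groups = [
    {"sun", "day"},
    {"moon", "month"},
    {"fire", "passion"},
    {"fire", "anger"},
    {"sea", "salt"},
    {"earth", "soil"},
    {"earth", "soil"},   # the same grouping recurring in a second language
]

# Weighted network: edge weight = number of languages in which two concepts
# were covered by the same word.
edge_weights = Counter()
for group in polysemy_groups:
    for a, b in combinations(sorted(group), 2):
        edge_weights[(a, b)] += 1

for (a, b), weight in sorted(edge_weights.items(), key=lambda item: -item[1]):
    print(f"{a:8s} -- {b:8s}  weight {weight}")
# Densely connected clusters in this network (e.g. water-related concepts)
# are what the study compared across languages to test for a shared
# semantic structure.
```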

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Extreme weather: Is it all in your mind? (USA Today)

Thomas M. Kostigen, Special for USA TODAY, 9:53 a.m. EDT October 17, 2015

Weather is not as objective an occurrence as it might seem. People’s perceptions of what makes weather extreme are influenced by where they live, their income, as well as their political views, a new study finds.

There is a difference in both seeing and believing in extreme weather events, according to the study in the journal Environmental Sociology.

“Odds were higher among younger, female, more educated, and Democratic respondents to perceive effects from extreme weather than older, male, less educated, and Republican respondents,” said the study’s author, Matthew Cutler of the University of New Hampshire.

There were other correlations, too. For example, people with lower incomes had higher perceptions of extreme weather than people who earned more. Those who live in more vulnerable areas, as might be expected, interpret the effects of weather differently when the costs to their homes and communities are highest.

The causes and frequency of extreme weather events are an under-explored area from a sociological perspective. Understanding them better is important to building more resilient and adaptive communities. After all, why prepare or take safety precautions if you believe the weather isn’t going to be all that bad or occur all that often?

The U.S. Climate Extremes Index, compiled by the National Oceanic and Atmospheric Administration (NOAA), shows a significant rise in extreme weather events since the 1970s, the most back-to-back years of extremes over the past decade since 1910, and all-time record-high levels clocked in 1998 and 2012.

“Some recent research has demonstrated linkages between objectively measured weather, or climate anomalies, and public concern or beliefs about climate change,” Cutler notes. “But the factors influencing perceptions of extreme or unusual weather events have received less attention.”

Indeed, there is a faction of the public that debates how much the climate is changing and which factors are responsible for such consequences as global warming.

Weather, on the other hand, is a different order of things: it is typically defined in the here and now or in the immediate future. It also is largely confined, because of its variability, to local or regional areas. Moreover, weather is something we usually experience directly.

Climate is a more abstract concept, typically defined as atmospheric conditions over a 30-year period.

When weather isn’t experienced directly, people rely on reports to gauge extremes. This is when beliefs become more muddied.

“The patterns found in this research provide evidence that individuals experience extreme weather in the context of their social circumstances and thus perceive the impacts of extreme weather through the lens of cultural and social influences. In other words, it is not simply a matter of seeing to believe, but rather an emergent process of both seeing and believing — individuals experiencing extreme weather and interpreting the impacts against the backdrop of social and economic circumstances central to and surrounding their lives,” Cutler concludes.

Sophocles said, “What people believe prevails over the truth.” Disbelief comes at a price in the context of extreme weather, however, as damage, injury, and death often result.

Too often we hear about people being unprepared for storms, ignoring officials’ warnings, failing to evacuate, or engaging in reckless behavior during weather extremes.

There is a need to draw a more complete picture of “weather prejudice,” as I’ll call it, in order to render more practical advice about preparing, surviving, and recovering from what is indisputable: extreme weather disasters to come.

Thomas M. Kostigen is the founder of TheClimateSurvivalist.com and a New York Times bestselling author and journalist. He is the National Geographic author of “The Extreme Weather Survival Guide: Understand, Prepare, Survive, Recover” and the NG Kids book, “Extreme Weather: Surviving Tornadoes, Tsunamis, Hailstorms, Thundersnow, Hurricanes and More!” Follow him @weathersurvival, or email kostigen@theclimatesurvivalist.com.

What Concepts and Emotions Are (and Aren’t) (Knowledge Ecology)

August 1, 2015

Adam Robbert

Lisa Feldman Barrett has an interesting piece up in yesterday’s New York Times that I think is worth some attention here. Barrett is the director of the Interdisciplinary Affective Science Laboratory, where she studies the nature of emotional experience. Here is the key part of the article, describing her latest findings:

The Interdisciplinary Affective Science Laboratory (which I direct) collectively analyzed brain-imaging studies published from 1990 to 2011 that examined fear, sadness, anger, disgust and happiness. We divided the human brain virtually into tiny cubes, like 3-D pixels, and computed the probability that studies of each emotion found an increase in activation in each cube.

Overall, we found that no brain region was dedicated to any single emotion. We also found that every alleged “emotion” region of the brain increased its activity during nonemotional thoughts and perceptions as well . . .

Emotion words like “anger,” “happiness” and “fear” each name a population of diverse biological states that vary depending on the context. When you’re angry with your co-worker, sometimes your heart rate will increase, other times it will decrease and still other times it will stay the same. You might scowl, or you might smile as you plot your revenge. You might shout or be silent. Variation is the norm.

This highly distributed, variable, and contextual description of emotions matches up quite well with what scientists have found to be true of conceptualization—namely, that it is a situated process drawn from a plurality of bodily forces. For instance, compare Barrett’s findings above to what I wrote about concepts in my paper on concepts and capacities from June (footnote references are in the paper):

In short, concepts are flexible and distributed modes of bodily organization grounded in modality-specific regions of the brain;[1] they comprise semantic knowledge embodied in perception and action;[2] and they underwrite the organization of sensory experience and guide action within an environment.[3] Concepts are tools for constructing in the mind new pathways of relationship and discrimination, for shaping the body, and for attuning it to contrast. Such pathways are recruited in an ecologically specific way as part of the dynamic bringing-to-apprehension of phenomena.

I think the parallel is clear enough, and we would do well to adopt this more ecological view of emotions and concepts into our thinking. The empirical data is giving us a strong argument for talking about the ecological basis of emotion and conceptuality, a basis that continues to grow stronger by the day.

Can the Bacteria in Your Gut Explain Your Mood? (New York Times)

Eighteen vials were rocking back and forth on a squeaky mechanical device the shape of a butcher scale, and Mark Lyte was beside himself with excitement. ‘‘We actually got some fresh yesterday — freshly frozen,’’ Lyte said to a lab technician. Each vial contained a tiny nugget of monkey feces that had been collected at the Harlow primate lab near Madison, Wis., the day before and shipped to Lyte’s lab on the Texas Tech University Health Sciences Center campus in Abilene, Tex.

Lyte’s interest was not in the feces per se but in the hidden form of life they harbor. The digestive tube of a monkey, like that of all vertebrates, contains vast quantities of what biologists call gut microbiota. The genetic material of these trillions of microbes, as well as others living elsewhere in and on the body, is collectively known as the microbiome. Taken together, these bacteria can weigh as much as six pounds, and they make up a sort of organ whose functions have only begun to reveal themselves to science. Lyte has spent his career trying to prove that gut microbes communicate with the nervous system using some of the same neurochemicals that relay messages in the brain.

Inside a closet-size room at his lab that afternoon, Lyte hunched over to inspect the vials, whose samples had been spun down in a centrifuge to a radiant, golden broth. Lyte, 60, spoke fast and emphatically. ‘‘You wouldn’t believe what we’re extracting out of poop,’’ he told me. ‘‘We found that the guys here in the gut make neurochemicals. We didn’t know that. Now, if they make this stuff here, does it have an influence there? Guess what? We make the same stuff. Maybe all this communication has an influence on our behavior.’’

Since 2007, when scientists announced plans for a Human Microbiome Project to catalog the micro-organisms living in our body, the profound appreciation for the influence of such organisms has grown rapidly with each passing year. Bacteria in the gut produce vitamins and break down our food; their presence or absence has been linked to obesity, inflammatory bowel disease and the toxic side effects of prescription drugs. Biologists now believe that much of what makes us human depends on microbial activity. The two million unique bacterial genes found in each human microbiome can make the 23,000 genes in our cells seem paltry, almost negligible, by comparison. ‘‘It has enormous implications for the sense of self,’’ Tom Insel, the director of the National Institute of Mental Health, told me. ‘‘We are, at least from the standpoint of DNA, more microbial than human. That’s a phenomenal insight and one that we have to take seriously when we think about human development.’’

Given the extent to which bacteria are now understood to influence human physiology, it is hardly surprising that scientists have turned their attention to how bacteria might affect the brain. Micro-organisms in our gut secrete a profound number of chemicals, and researchers like Lyte have found that among those chemicals are the same substances used by our neurons to communicate and regulate mood, like dopamine, serotonin and gamma-aminobutyric acid (GABA). These, in turn, appear to play a role in intestinal disorders, which coincide with high levels of major depression and anxiety. Last year, for example, a group in Norway examined feces from 55 people and found certain bacteria were more likely to be associated with depressive patients.

At the time of my visit to Lyte’s lab, he was nearly six months into an experiment that he hoped would better establish how certain gut microbes influenced the brain, functioning, in effect, as psychiatric drugs. He was currently compiling a list of the psychoactive compounds found in the feces of infant monkeys. Once that was established, he planned to transfer the microbes found in one newborn monkey’s feces into another’s intestine, so that the recipient would end up with a completely new set of microbes — and, if all went as predicted, change their neurodevelopment. The experiment reflected an intriguing hypothesis. Anxiety, depression and several pediatric disorders, including autism and hyperactivity, have been linked with gastrointestinal abnormalities. Microbial transplants were not invasive brain surgery, and that was the point: Changing a patient’s bacteria might be difficult but it still seemed more straightforward than altering his genes.

When Lyte began his work on the link between microbes and the brain three decades ago, it was dismissed as a curiosity. By contrast, last September, the National Institute of Mental Health awarded four grants worth up to $1 million each to spur new research on the gut microbiome’s role in mental disorders, affirming the legitimacy of a field that had long struggled to attract serious scientific credibility. Lyte and one of his longtime colleagues, Christopher Coe, at the Harlow primate lab, received one of the four. ‘‘What Mark proposed going back almost 25 years now has come to fruition,’’ Coe told me. ‘‘Now what we’re struggling to do is to figure out the logic of it.’’ It seems plausible, if not yet proved, that we might one day use microbes to diagnose neurodevelopmental disorders, treat mental illnesses and perhaps even fix them in the brain.

In 2011, a team of researchers at University College Cork, in Ireland, and McMaster University, in Ontario, published a study in Proceedings of the National Academy of Sciences that has become one of the best-known experiments linking bacteria in the gut to the brain. Laboratory mice were dropped into tall, cylindrical columns of water in what is known as a forced-swim test, which measures over six minutes how long the mice swim before they realize that they can neither touch the bottom nor climb out, and instead collapse into a forlorn float. Researchers use the amount of time a mouse floats as a way to measure what they call ‘‘behavioral despair.’’ (Antidepressant drugs, like Zoloft and Prozac, were initially tested using this forced-swim test.)

For several weeks, the team, led by John Cryan, the neuroscientist who designed the study, fed a small group of healthy rodents a broth infused with Lactobacillus rhamnosus, a common bacterium that is found in humans and also used to ferment milk into probiotic yogurt. Lactobacilli are one of the dominant organisms babies ingest as they pass through the birth canal. Recent studies have shown that mice stressed during pregnancy pass on lowered levels of the bacterium to their pups. This type of bacteria is known to release immense quantities of GABA; as an inhibitory neurotransmitter, GABA calms nervous activity, which explains why the most common anti-anxiety drugs, like Valium and Xanax, work by targeting GABA receptors.

Cryan found that the mice that had been fed the bacteria-laden broth kept swimming longer and spent less time in a state of immobilized woe. ‘‘They behaved as if they were on Prozac,’’ he said. ‘‘They were more chilled out and more relaxed.’’ The results suggested that the bacteria were somehow altering the neural chemistry of mice.

Until he joined his colleagues at Cork 10 years ago, Cryan thought about microbiology in terms of pathology: the neurological damage created by diseases like syphilis or H.I.V. ‘‘There are certain fields that just don’t seem to interact well,’’ he said. ‘‘Microbiology and neuroscience, as whole disciplines, don’t tend to have had much interaction, largely because the brain is somewhat protected.’’ He was referring to the fact that the brain is anatomically isolated, guarded by a blood-brain barrier that allows nutrients in but keeps out pathogens and inflammation, the immune system’s typical response to germs. Cryan’s study added to the growing evidence that signals from beneficial bacteria nonetheless find a way through the barrier. Somehow — though his 2011 paper could not pinpoint exactly how — micro-organisms in the gut tickle a sensory nerve ending in the fingerlike protrusion lining the intestine and carry that electrical impulse up the vagus nerve and into the deep-brain structures thought to be responsible for elemental emotions like anxiety. Soon after that, Cryan and a co-author, Ted Dinan, published a theory paper in Biological Psychiatry calling these potentially mind-altering microbes ‘‘psychobiotics.’’

It has long been known that much of our supply of neurochemicals — an estimated 50 percent of the dopamine, for example, and a vast majority of the serotonin — originates in the intestine, where these chemical signals regulate appetite, feelings of fullness and digestion. But only in recent years has mainstream psychiatric research given serious consideration to the role microbes might play in creating those chemicals. Lyte’s own interest in the question dates back to his time as a postdoctoral fellow at the University of Pittsburgh in 1985, when he found himself immersed in an emerging field with an unwieldy name: psychoneuroimmunology, or PNI, for short. The central theory, quite controversial at the time, suggested that stress worsened disease by suppressing our immune system.

By 1990, at a lab in Mankato, Minn., Lyte distilled the theory into three words, which he wrote on a chalkboard in his office: Stress->Immune->Disease. In the course of several experiments, he homed in on a paradox. When he dropped an intruder mouse in the cage of an animal that lived alone, the intruder ramped up its immune system — a boost, he suspected, intended to fight off germ-ridden bites or scratches. Surprisingly, though, this did not stop infections. It instead had the opposite effect: Stressed animals got sick. Lyte walked up to the board and scratched a line through the word ‘‘Immune.’’ Stress, he suspected, directly affected the bacterial bugs that caused infections.

To test how micro-organisms reacted to stress, he filled petri plates with a bovine-serum-based medium and laced the dishes with a strain of bacterium. In some, he dropped norepinephrine, a neurochemical that mammals produce when stressed. The next day, he snapped a Polaroid. The results were visible and obvious: The control plates were nearly barren, but those with the norepinephrine bloomed with bacteria that filigreed in frostlike patterns. Bacteria clearly responded to stress.

Then, to see if bacteria could induce stress, Lyte fed white mice a liquid solution of Campylobacter jejuni, a bacterium that can cause food poisoning in humans but generally doesn’t prompt an immune response in mice. To the trained eye, his treated mice were as healthy as the controls. But when he ran them through a plexiglass maze raised several feet above the lab floor, the bacteria-fed mice were less likely to venture out on the high, unprotected ledges of the maze. In human terms, they seemed anxious. Without the bacteria, they walked the narrow, elevated planks.

Each of these results was fascinating, but Lyte had a difficult time finding microbiology journals that would publish either. ‘‘It was so anathema to them,’’ he told me. When the mouse study finally appeared in the journal Physiology & Behavior in 1998, it garnered little attention. And yet as Stephen Collins, a gastroenterologist at McMaster University, told me, those first papers contained the seeds of an entire new field of research. ‘‘Mark showed, quite clearly, in elegant studies that are not often cited, that introducing a pathological bacterium into the gut will cause a change in behavior.’’

Lyte went on to show how stressful conditions for newborn cattle worsened deadly E. coli infections. In another experiment, he fed mice lean ground hamburger that appeared to improve memory and learning — a conceptual proof that by changing diet, he could change gut microbes and change behavior. After accumulating nearly a decade’s worth of evidence, in July 2008, he flew to Washington to present his research. He was a finalist for the National Institutes of Health’s Pioneer Award, a $2.5 million grant for so-called blue-sky biomedical research. Finally, it seemed, his time had come. When he got up to speak, Lyte described a dialogue between the bacterial organ and our central nervous system. At the two-minute mark, a prominent scientist in the audience did a spit take.

‘‘Dr. Lyte,’’ he later asked at a question-and-answer session, ‘‘if what you’re saying is right, then why is it when we give antibiotics to patients to kill bacteria, they are not running around crazy on the wards?’’

Lyte knew it was a dismissive question. And when he lost out on the grant, it confirmed to him that the scientific community was still unwilling to imagine that any part of our neural circuitry could be influenced by single-celled organisms. Lyte published his theory in Medical Hypotheses, a low-ranking journal that served as a forum for unconventional ideas. The response, predictably, was underwhelming. ‘‘I had people call me crazy,’’ he said.

But by 2011 — when he published a second theory paper in Bioessays, proposing that probiotic bacteria could be tailored to treat specific psychological diseases — the scientific community had become much more receptive to the idea. A Canadian team, led by Stephen Collins, had demonstrated that antibiotics could be linked to less cautious behavior in mice, and only a few months before Lyte, Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, published a landmark paper in Proceedings of the National Academy of Sciences that showed that mice raised without microbes spent far more time running around outside than healthy mice in a control group; without the microbes, the mice showed less apparent anxiety and were more daring. In Ireland, Cryan published his forced-swim-test study on psychobiotics. There was now a groundswell of new research. In short order, an implausible idea had become a hypothesis in need of serious validation.

Late last year, Sarkis Mazmanian, a microbiologist at the California Institute of Technology, gave a presentation at the Society for Neuroscience, ‘‘Gut Microbes and the Brain: Paradigm Shift in Neuroscience.’’ Someone had inadvertently dropped a question mark from the end, so the speculation appeared to be a definitive statement of fact. But if anyone has a chance of delivering on that promise, it’s Mazmanian, whose research has moved beyond the basic neurochemicals to focus on a broader class of molecules called metabolites: small, equally druglike chemicals that are produced by micro-organisms. Using high-powered computational tools, he also hopes to move beyond the suggestive correlations that have typified psychobiotic research to date, and instead make decisive discoveries about the mechanisms by which microbes affect brain function.

Two years ago, Mazmanian published a study in the journal Cell with Elaine Hsiao, then a graduate student at his lab and now a neuroscientist at Caltech, that made a provocative link between a single molecule and behavior. Their research found that mice exhibiting abnormal communication and repetitive behaviors, like obsessively burying marbles, were mollified when they were given one of two strains of the bacterium Bacteroides fragilis.

The study added to a working hypothesis in the field that microbes don’t just affect the permeability of the barrier around the brain but also influence the intestinal lining, which normally prevents certain bacteria from leaking out and others from getting in. When the intestinal barrier was compromised in his model, normally ‘‘beneficial’’ bacteria and the toxins they produce seeped into the bloodstream and raised the possibility they could slip past the blood-brain barrier. As one of his colleagues, Michael Fischbach, a microbiologist at the University of California, San Francisco, said: ‘‘The scientific community has a way of remaining skeptical until every last arrow has been drawn, until the entire picture is colored in. Other scientists drew the pencil outlines, and Sarkis is filling in a lot of the color.’’

Mazmanian knew the results offered only a provisional explanation for why restrictive diets and antibacterial treatments seemed to help some children with autism: Altering the microbial composition might be changing the permeability of the intestine. ‘‘The larger concept is, and this is pure speculation: Is a disease like autism really a disease of the brain or maybe a disease of the gut or some other aspect of physiology?’’ Mazmanian said. For any disease in which such a link could be proved, he saw a future in drugs derived from these small molecules found inside microbes. (A company he co-founded, Symbiotix Biotherapies, is developing a complex sugar called PSA, which is associated with Bacteroides fragilis, into treatments for intestinal disease and multiple sclerosis.) In his view, the prescriptive solutions probably involve more than increasing our exposure to environmental microbes in soil, dogs or even fermented foods; he believed there were wholesale failures in the way we shared our microbes and inoculated children with these bacteria. So far, though, the only conclusion he could draw was that disorders once thought to be conditions of the brain might be symptoms of microbial disruptions, and it was the careful defining of these disruptions that promised to be helpful in the coming decades.

The list of potential treatments incubating in labs around the world is startling. Several international groups have found that psychobiotics had subtle yet perceptible effects in healthy volunteers in a battery of brain-scanning and psychological tests. Another team in Arizona recently finished an open trial on fecal transplants in children with autism. (Simultaneously, at least two offshore clinics, in Australia and England, began offering fecal microbiota treatments to treat neurological disorders, like multiple sclerosis.) Mazmanian, however, cautions that this research is still in its infancy. ‘‘We’ve reached the stage where there’s a lot of, you know, ‘The microbiome is the cure for everything,’ ’’ he said. ‘‘I have a vested interest if it does. But I’d be shocked if it did.’’

Lyte issues the same caveat. ‘‘People are obviously desperate for solutions,’’ Lyte said when I visited him in Abilene. (He has since moved to Iowa State’s College of Veterinary Medicine.) ‘‘My main fear is the hype is running ahead of the science.’’ He knew that parents emailing him for answers meant they had exhausted every option offered by modern medicine. ‘‘It’s the Wild West out there,’’ he said. ‘‘You can go online and buy any amount of probiotics for any number of conditions now, and my paper is one of those cited. I never said go out and take probiotics.’’ He added, ‘‘We really need a lot more research done before we actually have people trying therapies out.’’

If the idea of psychobiotics had now, in some ways, eclipsed him, it was nevertheless a curious kind of affirmation, even redemption: an old-school microbiologist thrust into the midst of one of the most promising aspects of neuroscience. At the moment, he had a rough map in his head and a freezer full of monkey fecal samples that might translate, somehow, into telling differences between gregarious and shy monkeys later in life. I asked him if what amounted to a personality transplant still sounded a bit far-fetched. He seemed no closer to unlocking exactly what brain functions could be traced to the same organ that produced feces. ‘‘If you transfer the microbiota from one animal to another, you can transfer the behavior,’’ Lyte said. ‘‘What we’re trying to understand are the mechanisms by which the microbiota can influence the brain and development. If you believe that, are you now out on the precipice? The answer is yes. Do I think it’s the future? I think it’s a long way away.’’

Brain Cells Break Their Own DNA to Allow Memories to Form (IFL Science)

June 22, 2015 | by Justine Alford

Given the fundamental importance of our DNA, it is logical to assume that damage to it is undesirable and spells bad news; after all, we know that cancer can be caused by mutations that arise from such injury. But a surprising new study is turning that idea on its head, with the discovery that brain cells actually break their own DNA to enable us to learn and form memories.

While that may sound counterintuitive, it turns out that the damage is necessary to allow the expression of a set of genes, called early-response genes, which regulate various processes that are critical in the creation of long-lasting memories. These lesions are rectified pronto by repair systems, but interestingly, it seems that this ability deteriorates during aging, leading to a buildup of damage that could ultimately result in the degeneration of our brain cells.

This idea is supported by earlier work conducted by the same group, headed by Li-Huei Tsai, at the Massachusetts Institute of Technology (MIT) that discovered that the brains of mice engineered to develop a model of Alzheimer’s disease possessed a significant amount of DNA breaks, even before symptoms appeared. These lesions, which affected both strands of DNA, were observed in a region critical to learning and memory: the hippocampus.

To find out more about the possible consequences of such damage, the team grew neurons in a dish and exposed them to an agent that causes these so-called double strand breaks (DSBs), and then they monitored the gene expression levels. As described in Cell, they found that while the vast majority of genes that were affected by these breaks showed decreased expression, a small subset actually displayed increased expression levels. Importantly, these genes were involved in the regulation of neuronal activity, and included the early-response genes.

Since the early-response genes are known to be rapidly expressed following neuronal activity, the team was keen to find out whether normal neuronal stimulation could also be inducing DNA breaks. The scientists therefore applied a substance to the cells that is known to strengthen the tiny gap between neurons across which information flows – the synapse – mimicking what happens when an organism is exposed to a new experience.

“Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai said in a statement.

So what is the connection between these breaks and the apparent boost in early-response gene expression? After using computers to scrutinize the DNA sequences neighboring these genes, the researchers found that they were enriched with a pattern targeted by an architectural protein that, upon binding, distorts the DNA strands by introducing kinks. By preventing crucial interactions between distant DNA regions, these bends therefore act as a barrier to gene expression. The breaks, however, resolve these constraints, allowing expression to ensue.

These findings could have important implications because earlier work has demonstrated that aging is associated with a decline in the expression of genes involved in the processes of learning and memory formation. It therefore seems likely that the DNA repair system deteriorates with age, but at this stage it is unclear how these changes occur, so the researchers plan to design further studies to find out more.

Problem: Your brain (Medium)

I will be talking mainly about development for the web.

Ilya Dorman, Feb 15, 2015

Our puny brain can handle a very limited amount of logic at a time. While programmers proclaim logic as their domain, they are only sometimes and slightly better at managing complexity than the rest of us mortals. The more logic our app has, the harder it is to change it or introduce new people to it.

The most common mistake programmers make is assuming they write code for a machine to read. While technically that is true, this mindset leads to the hell that is other people’s code.

I have worked in several start-up companies, some of them even considered “lean.” In each, it took me between a few weeks and a few months to fully understand their code base, and I have about 6 years of experience with JavaScript. This does not seem reasonable to me at all.

If the code is not easy to read, its structure is already a monument: you can change small things, but major changes (the kind every start-up undergoes on an almost monthly basis) are as fun as a root canal. Once the code reaches a state where, for a proficient programmer, it is harder to read than this article, doom and suffering are upon you.

Why does the code become unreadable? Let’s compare code to plain text: the longer a sentence is, the easier it is for our mind to forget its beginning, and by the time we reach the end we have lost the meaning of the whole sentence. Did you have to read the previous sentence twice because it was too long to grasp in one go? Exactly! The same goes for code. Worse, actually: the logic of code can be far more complex than any sentence from a book or a blog post, and each programmer has their own logic, which can be total gibberish to another. Not to mention that we also need to remember the logic. Sometimes we come back to it the same day and sometimes after two months. Nobody remembers anything about their code after not looking at it for two months.

To make code readable to other humans, we rely on four things:

1. Conventions

Conventions are good, but they are very limited: enforce them too little and the programmer becomes coupled to the code, and no one will ever understand what they meant once they are gone. Enforce them too much and you will have hour-long debates about every space and colon (true story). The “habitable zone” is very narrow and easy to miss.

2. Comments

They are probably the most helpful, if done right. Unfortunately many programmers write their comments in the same spirit they write their code—very idiosyncratic. I do not belong to the school claiming good code needs no comments, but even beautifully commented code can still be extremely complicated.

3. “Other people know this programming language as much as I do, so they must understand my writings.”

Well… This is JavaScript:

This is JAVASCRIPT!

4. Tests

Tests are a devil in disguise. “How do we make sure our code is good and readable? We write more code!” I know many of you might quit this post right here, but bear with me for a few more lines: regardless of their benefit, tests are another layer of logic. They are more code to be read and understood. Tests try to solve this exact problem: your code is too complicated to work out its result in your brain, so you say, “well, this is what should happen in the end,” and when it doesn’t, you go digging for the problem. Your code should be simple enough that you can read a function or a line and understand what the result of running it should be.

Your life as a programmer could be so much easier!

Solution: Radical Minimalism

I will break down this approach into practical points, but the main idea is: use LESS logic.

  • Cut 80% of your product’s features

Yes! Just like that. Simplicity, first of all, comes from the product. Make it easy for people to understand and use. Make it do one thing well, and only then add more (if there is still a need).

  • Use nothing but what you absolutely must

Do not include a single line of code (especially from libraries) unless you are 100% sure you will use it and that it is the simplest, most straightforward solution available. Need a simple chat app, and picked Angular.js because the two-way binding is nice? You deserve those hours and days of debugging and debating about services vs. providers.

Side note: The JavaScript browser API is event-driven; it is made to respond when stuff (usually user input) happens. This means that events change data. Many new frameworks (Angular, Meteor) reverse this direction and make data changes trigger events. If your app is simple, you might live happily with the new mysterious layer, but if not, you get a whole new layer of complexity that you need to understand, and your life will get exponentially more miserable. Unless your app constantly manages big amounts of data, avoid those frameworks.

  • Use simplest logic possible

Say you need to show different HTML on different occasions. You can use client-side routing with controllers and data passed to each controller that renders the HTML from a template. Or you can just use static HTML pages with normal browser navigation and update the HTML manually. Use the second (see the sketch at the end of this post).

  • Make short Javascript files

Limit the length of your JS files to a single editor page, and make each file do one thing. Can’t cram all your glorious logic into small modules? Good, that means you should have less of it, so that other humans will understand your code in a reasonable time.

  • Avoid pre-compilers and task-runners like AIDS

The more layers there are between what you write and what you see, the more logic your mind needs to remember. You might think grunt or gulp help you simplify stuff, but then you have 30 tasks and you need to remember what each one does to your code, how to use them, how to update them, and how to teach them to any new coder. Not to mention compiling.

Side note #1: CSS pre-compilers are OK because they have very little logic, but they help a lot with readable structure compared to plain CSS. I have barely used HTML pre-compilers, so you’ll have to decide for yourself.

Side note #2: Task-runners can save you time, so if you do use them, do it wisely and keep the minimalist mindset.

  • Use Javascript everywhere

This one is quite specific, and I am not absolutely sure about it, but having the same language on the client and the server can simplify data management between them.

  • Write more human code

Give your non-trivial variables (and functions) descriptive names. Make shorter lines, but only if it does not compromise readability.

Treat your code like poetry and take it to the edge of the bare minimum.
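To make the “Use simplest logic possible” and “Make short Javascript files” points concrete, here is a minimal sketch (the file and element names are invented for illustration): instead of a client-side router, each page is a plain HTML file reached by an ordinary link, and the only script is one short file that does exactly one thing.

```javascript
// greeting.js -- the entire script for a hypothetical greeting.html page.
// Navigation to it is just an ordinary link elsewhere:
//   <a href="greeting.html?name=Ada">Greet</a>
document.addEventListener("DOMContentLoaded", () => {
  // Read the visitor's name from the query string; fall back to a default.
  const name = new URLSearchParams(window.location.search).get("name") || "stranger";
  // Update the one element this page cares about -- no router, no template engine.
  document.querySelector("#greeting").textContent = `Hello, ${name}!`;
});
```

The whole file fits on a single editor page, does one thing, and a newcomer can read it in seconds, which is the argument of this post in miniature.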

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)

A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that consciousness is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer, but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is enormous room for improvement; the physical limits leave more than enough headroom for the performance that conscious systems would require.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but do so in a way that forms a unified, indivisible whole. That also requires a certain amount of independence, in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrödinger’s equation, which describes how these things change with time.
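For orientation, the two evolution statements implied here have standard textbook forms: the Schrödinger equation for a state vector, and its density-matrix counterpart (the von Neumann equation), which is the sense in which “these things change with time”:

```latex
i\hbar\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle,
\qquad\qquad
i\hbar\,\frac{\partial\rho}{\partial t} = \bigl[\hat{H},\,\rho\bigr].
```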

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can store only 37 bits of integrated information.

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: arxiv.org/abs/1401.1219: Consciousness as a State of Matter

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Credit: Mary Levin, University of Washington.

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
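A purely illustrative sketch of that signal path (the function names below are hypothetical stand-ins; the real system used EEG hardware, custom software and a TMS coil, none of which are modelled here):

```javascript
// Receiver side: an incoming "fire" command drives the TMS coil placed over the hand
// area of motor cortex, so the receiver's hand twitches and presses the touchpad.
function receiverOnMessage(command) {
  if (command === "fire") console.log("TMS pulse -> hand twitch -> touchpad pressed");
}

// Stand-in for the Internet link between the two buildings.
const sendOverWeb = (command) => receiverOnMessage(command);

// Sender side: when imagined hand movement is detected in the EEG signal, a single
// command is sent over the link. No actual movement by the sender is required.
function senderOnEegSample(imaginedHandMovementDetected) {
  if (imaginedHandMovementDetected) sendOverWeb("fire");
}

senderOnEegSample(true); // -> "TMS pulse -> hand twitch -> touchpad pressed"
```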

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
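The study reports its own information measure; purely as a hedged back-of-the-envelope illustration (not necessarily the authors’ method), one common way to turn a per-trial accuracy into bits is to treat each fire/no-fire decision as a binary symmetric channel:

```javascript
// Binary entropy, in bits, of a probability p.
const binaryEntropy = (p) =>
  p === 0 || p === 1 ? 0 : -(p * Math.log2(p) + (1 - p) * Math.log2(1 - p));

// Mutual information of a binary symmetric channel whose error rate is 1 - accuracy
// (assumes the two commands are equally likely).
const bitsPerTrial = (accuracy) => 1 - binaryEntropy(1 - accuracy);

console.log(bitsPerTrial(0.83).toFixed(2)); // "0.34" bits per decision at the best pair's accuracy
console.log(bitsPerTrial(0.25).toFixed(2)); // "0.19" at the weakest pair's accuracy
```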

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.


Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Denying problems when we don’t like the political solutions (Duke University)

6-Nov-2014

Steve Hartsoe

Duke study sheds light on why conservatives, liberals disagree so vehemently

DURHAM, N.C. — There may be a scientific answer for why conservatives and liberals disagree so vehemently over the existence of issues like climate change and specific types of crime.

A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don’t, then they tend to deny the problem even exists.

“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke’s Fuqua School of Business. “The cure can be more immediately threatening than the problem.”

The study, “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief,” appears in the November issue of the Journal of Personality and Social Psychology (viewable at http://psycnet.apa.org/journals/psp/107/5/809/).

The researchers conducted three experiments (with samples ranging from 120 to 188 participants) on three different issues — climate change, air pollution that harms lungs, and crime.

“The goal was to test, in a scientifically controlled manner, the question: Does the desirability of a solution affect beliefs in the existence of the associated problem? In other words, does what we call ‘solution aversion’ exist?” Campbell said.

“We found the answer is yes. And we found it occurs in response to some of the most common solutions for popularly discussed problems.”

For climate change, the researchers conducted an experiment to examine why more Republicans than Democrats seem to deny its existence, despite strong scientific evidence that supports it.

One explanation, they found, may have more to do with conservatives’ general opposition to the most popular solution — increasing government regulation — than with any difference in fear of the climate change problem itself, as some have proposed.

Participants in the experiment, including both self-identified Republicans and Democrats, read a statement asserting that global temperatures will rise 3.2 degrees in the 21st century. They were then asked to evaluate a proposed policy solution to address the warming.

When the policy solution emphasized a tax on carbon emissions or some other form of government regulation, which is generally opposed by Republican ideology, only 22 percent of Republicans said they believed the temperatures would rise at least as much as indicated by the scientific statement they read.

But when the proposed policy solution emphasized the free market, such as with innovative green technology, 55 percent of Republicans agreed with the scientific statement.

For Democrats, the same experiment recorded no difference in their belief, regardless of the proposed solution to climate change.

“Recognizing this effect is helpful because it allows researchers to predict not just what problems people will deny, but who will likely deny each problem,” said co-author Aaron Kay, an associate professor at Fuqua. “The more threatening a solution is to a person, the more likely that person is to deny the problem.”

The researchers found liberal-leaning individuals exhibited a similar aversion to solutions they viewed as politically undesirable in an experiment involving violent home break-ins. When the proposed solution called for looser versus tighter gun-control laws, those with more liberal gun-control ideologies were more likely to downplay the frequency of violent home break-ins.

“We should not just view some people or group as anti-science, anti-fact or hyper-scared of any problems,” Kay said. “Instead, we should understand that certain problems have particular solutions that threaten some people and groups more than others. When we realize this, we understand those who deny the problem more and we improve our ability to better communicate with them.”

Campbell added that solution aversion can help explain why political divides become so deep and intractable.

“We argue that the political divide over many issues is just that, it’s political,” Campbell said. “These divides are not explained by just one party being more anti-science, but the fact that in general people deny facts that threaten their ideologies, left, right or center.”

The researchers noted there are additional factors that can influence how people see the policy implications of science. Additional research using larger samples and more specific methods would provide an even clearer picture, they said.

The study was funded by The Fuqua School of Business.

CITATION: Troy Campbell, Aaron Kay, Duke University (2014). “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief.” Journal of Personality and Social Psychology, 107(5), 809-824. http://dx.doi.org/10.1037/a0037963