Tag archive: Polarization

The Facebook whistleblower says its algorithms are dangerous. Here’s why. (MIT Technology Review)

technologyreview.com

Frances Haugen’s testimony at the Senate hearing today raised serious questions about how Facebook’s algorithms work—and echoed many findings from our previous investigation.

October 5, 2021

Karen Hao


Facebook whistleblower Frances Haugen testifies during a Senate Committee October 5. Drew Angerer/Getty Images

On Sunday night, the primary source for the Wall Street Journal’s Facebook Files, an investigative series based on internal Facebook documents, revealed her identity in an episode of 60 Minutes.

Frances Haugen, a former product manager at the company, says she came forward after she saw Facebook’s leadership repeatedly prioritize profit over safety.

Before quitting in May of this year, she combed through Facebook Workplace, the company’s internal employee social media network, and gathered a wide swath of internal reports and research in an attempt to conclusively demonstrate that Facebook had willfully chosen not to fix the problems on its platform.

Today she testified in front of the Senate on the impact of Facebook on society. She reiterated many of the findings from the internal research and implored Congress to act.

“I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy,” she said in her opening statement to lawmakers. “These problems are solvable. A safer, free-speech respecting, more enjoyable social media is possible. But if there is one thing that I hope everyone takes away from these disclosures, it is that Facebook can change, but is clearly not going to do so on its own.”

During her testimony, Haugen particularly blamed Facebook’s algorithm and platform design decisions for many of its issues. This is a notable shift from the existing focus of policymakers on Facebook’s content policy and censorship—what does and doesn’t belong on Facebook. Many experts believe that this narrow view leads to a whack-a-mole strategy that misses the bigger picture.

“I’m a strong advocate for non-content-based solutions, because those solutions will protect the most vulnerable people in the world,” Haugen said, pointing to Facebook’s uneven ability to enforce its content policy in languages other than English.

Haugen’s testimony echoes many of the findings from an MIT Technology Review investigation published earlier this year, which drew upon dozens of interviews with Facebook executives, current and former employees, industry peers, and external experts. We pulled together the most relevant parts of our investigation and other reporting to give more context to Haugen’s testimony.

How does Facebook’s algorithm work?

Colloquially, we use the term “Facebook’s algorithm” as though there’s only one. In fact, Facebook decides how to target ads and rank content based on hundreds, perhaps thousands, of algorithms. Some of those algorithms tease out a user’s preferences and boost that kind of content up the user’s news feed. Others are for detecting specific types of bad content, like nudity, spam, or clickbait headlines, and deleting or pushing them down the feed.

All of these algorithms are known as machine-learning algorithms. As I wrote earlier this year:

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women.

And because Facebook has enormous amounts of user data, it can

develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and [target] ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

The same principles apply to ranking content in the news feed:

Just as algorithms [can] be trained to predict who would click what ad, they [can] also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
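In code terms, the ranking step described above can be sketched roughly like this. This is a toy illustration, not Facebook's actual system: the posts, probabilities, and weights are all made-up, and a real system would get the probabilities from trained machine-learning models.

```python
# Toy illustration of engagement-based ranking (not Facebook's actual code).
# Hypothetical per-post predictions; a real system would produce these with
# trained machine-learning models.
posts = [
    {"id": "a", "p_like": 0.10, "p_comment": 0.02, "p_share": 0.01},
    {"id": "b", "p_like": 0.40, "p_comment": 0.10, "p_share": 0.05},
    {"id": "c", "p_like": 0.25, "p_comment": 0.05, "p_share": 0.02},
]

# Hypothetical weights: comments and shares count for more than likes here,
# since they spread content further.
WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_share": 8.0}

def engagement_score(post):
    """Weighted sum of the model's predicted engagement probabilities."""
    return sum(weight * post[key] for key, weight in WEIGHTS.items())

def rank_feed(posts):
    """Order posts by descending predicted engagement."""
    return sorted(posts, key=engagement_score, reverse=True)

print([p["id"] for p in rank_feed(posts)])  # → ['b', 'c', 'a']
```

The loop the article describes comes from retraining: whatever scores well gets shown, gets engaged with, and generates more training data like itself.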

Before Facebook began using machine-learning algorithms, teams used design tactics to increase engagement. They’d experiment with things like the color of a button or the frequency of notifications to keep users coming back to the platform. But machine-learning algorithms create a much more powerful feedback loop. Not only can they personalize what each user sees; they also continue to evolve with a user’s shifting preferences, perpetually showing each person what will keep them most engaged.

Who runs Facebook’s algorithm?

Within Facebook, there’s no one team in charge of this content-ranking system in its entirety. Engineers develop and add their own machine-learning models into the mix, based on their team’s objectives. For example, teams focused on removing or demoting bad content, known as the integrity teams, will only train models for detecting different types of bad content.

This was a decision Facebook made early on as part of its “move fast and break things” culture. It developed an internal tool known as FBLearner Flow that made it easy for engineers without machine-learning experience to develop whatever models they needed. By one estimate, it was already in use by more than a quarter of Facebook’s engineering team in 2016.

Many of the current and former Facebook employees I’ve spoken to say that this is part of why Facebook can’t seem to get a handle on what it serves up to users in the news feed. Different teams can have competing objectives, and the system has grown so complex and unwieldy that no one can keep track anymore of all of its different components.

As a result, the company’s main process for quality control is through experimentation and measurement. As I wrote:

Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. Gade explained on Twitter that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
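The gating step in that workflow amounts to a simple threshold check on an A/B test. The sketch below is my assumption of how such a check might look, not Facebook internals; the engagement metric, the numbers, and the 1% tolerance are all hypothetical.

```python
# Illustrative sketch (assumed workflow, not Facebook internals): a candidate
# ranking model runs for a small test group; if that group's engagement falls
# more than a tolerated fraction below the control group's, the model is
# discarded, otherwise it ships and is monitored.

def evaluate_model(control_engagement, test_engagement, max_drop=0.01):
    """Decide a candidate model's fate from average engagement per user.

    max_drop is a hypothetical tolerance: reject if the test group's
    engagement is more than 1% below control.
    """
    change = (test_engagement - control_engagement) / control_engagement
    return "deploy" if change >= -max_drop else "discard"

# Hypothetical numbers: average daily likes + comments + shares per user.
print(evaluate_model(control_engagement=12.0, test_engagement=12.3))  # deploy
print(evaluate_model(control_engagement=12.0, test_engagement=11.0))  # discard
```

Note what the check optimizes for: engagement alone. A model that increases engagement by amplifying outrage would pass this gate.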

How has Facebook’s content ranking led to the spread of misinformation and hate speech?

During her testimony, Haugen repeatedly came back to the idea that Facebook’s algorithm incites misinformation, hate speech, and even ethnic violence. 

“Facebook … knows—they have admitted in public—that engagement-based ranking is dangerous without integrity and security systems but then not rolled out those integrity and security systems in most of the languages in the world,” she told the Senate today. “It is pulling families apart. And in places like Ethiopia it is literally fanning ethnic violence.”

Here’s what I’ve written about this previously:

The machine-learning models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.

Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

As Haugen mentioned, Facebook has also known this for a while. Previous reporting has found that it’s been studying the phenomenon since at least 2016.

In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

In my own conversations, Facebook employees also corroborated these findings.

A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
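A toy simulation can make that drift concrete. This is entirely my illustration, not the researchers' model: it assumes that content slightly more extreme than a user's current stance earns slightly more engagement, and that users shift toward what they consume. Under those two assumptions the stance ratchets outward.

```python
# Toy feedback-loop simulation (my illustration, not the researchers' model).
# Stance is a number in [0, 1], where 1.0 is maximally extreme.

def step(user_stance, pull=0.1):
    """One cycle: recommend content a bit more extreme than the user's
    current stance; consuming it nudges the stance outward."""
    recommended = min(1.0, user_stance + pull)    # engagement favors extremity
    return 0.5 * user_stance + 0.5 * recommended  # user shifts toward content

stance = 0.2  # hypothetical mildly interested user
history = [stance]
for _ in range(30):
    stance = step(stance)
    history.append(stance)

print(round(history[0], 2), "->", round(history[-1], 2))  # drifts toward 1.0
```

The point of the sketch is the one the researcher makes: nothing in the loop cares about truth or well-being, only about the gradient of engagement.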

In her testimony, Haugen also repeatedly emphasized how these phenomena are far worse in regions that don’t speak English because of Facebook’s uneven coverage of different languages.

“In the case of Ethiopia there are 100 million people and six languages. Facebook only supports two of those languages for integrity systems,” she said. “This strategy of focusing on language-specific, content-specific systems for AI to save us is doomed to fail.”

She continued: “So investing in non-content-based ways to slow the platform down not only protects our freedom of speech, it protects people’s lives.”

I explore this more in a different article from earlier this year on the limitations of large language models, or LLMs:

Despite LLMs having these linguistic deficiencies, Facebook relies heavily on them to automate its content moderation globally. When the war in Tigray[, Ethiopia] first broke out in November, [AI ethics researcher Timnit] Gebru saw the platform flounder to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation. Communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

How does Facebook’s content ranking relate to teen mental health?

One of the more shocking revelations from the Journal’s Facebook Files was Instagram’s internal research, which found that its platform is worsening mental health among teenage girls. “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” researchers wrote in a slide presentation from March 2020.

Haugen connects this phenomenon to engagement-based ranking systems as well, which she told the Senate today “is causing teenagers to be exposed to more anorexia content.”

“If Instagram is such a positive force, have we seen a golden age of teenage mental health in the last 10 years? No, we have seen escalating rates of suicide and depression amongst teenagers,” she continued. “There’s a broad swath of research that supports the idea that the usage of social media amplifies the risk of these mental health harms.”

In my own reporting, I heard from a former AI researcher who also saw this effect extend to Facebook.

The researcher’s team…found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health.

But as with Haugen, the researcher found that leadership wasn’t interested in making fundamental algorithmic changes.

The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers.

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down….

That former employee, meanwhile, no longer lets his daughter use Facebook.

How do we fix this?

Haugen is against breaking up Facebook or repealing Section 230 of the US Communications Decency Act, which shields tech platforms from liability for the content they distribute.

Instead, she recommends carving out a more targeted exemption in Section 230 for algorithmic ranking, which she argues would “get rid of the engagement-based ranking.” She also advocates for a return to Facebook’s chronological news feed.

Ellery Roberts Biddle, a projects director at Ranking Digital Rights, a nonprofit that studies social media ranking systems and their impact on human rights, says a Section 230 carve-out would need to be vetted carefully: “I think it would have a narrow implication. I don’t think it would quite achieve what we might hope for.”

In order for such a carve-out to be actionable, she says, policymakers and the public would need to have a much greater level of transparency into how Facebook’s ad-targeting and content-ranking systems even work. “I understand Haugen’s intention—it makes sense,” she says. “But it’s tough. We haven’t actually answered the question of transparency around algorithms yet. There’s a lot more to do.”

Nonetheless, Haugen’s revelations and testimony have brought renewed attention to what many experts and Facebook employees have been saying for years: that unless Facebook changes the fundamental design of its algorithms, it will not make a meaningful dent in the platform’s issues. 

Her intervention also raises the prospect that if Facebook cannot put its own house in order, policymakers may force the issue.

“Congress can change the rules that Facebook plays by and stop the many harms it is now causing,” Haugen told the Senate. “I came forward at great personal risk because I believe we still have time to act, but we must act now.”

‘Belonging Is Stronger Than Facts’: The Age of Misinformation (The New York Times)

nytimes.com

Max Fisher


The Interpreter

Social and psychological forces are combining to make the sharing and believing of misinformation an endemic problem with no easy solution.

An installation of protest art outside the Capitol in Washington.
Credit: Jonathan Ernst/Reuters

Published May 7, 2021; Updated May 13, 2021

There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed by someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.

Framing everything as a grand conflict against scheming enemies can feel enormously reassuring. And that’s why perhaps the greatest culprit of our era of misinformation may be, more than any one particular misinformer, the era-defining rise in social polarization.

“At the mass level, greater partisan divisions in social identity are generating intense hostility toward opposition partisans,” which has “seemingly increased the political system’s vulnerability to partisan misinformation,” Dr. Nyhan wrote in an earlier paper.

Growing hostility between the two halves of America feeds social distrust, which makes people more prone to rumor and falsehood. It also makes people cling much more tightly to their partisan identities. And once our brains switch into “identity-based conflict” mode, we become desperately hungry for information that will affirm that sense of us versus them, and much less concerned about things like truth or accuracy.

Border officials are not mass-purchasing copies of Vice President Kamala Harris’s book, though the false rumor drew attention.
Credit: Gabriela Bhaskar for The New York Times

In an email, Dr. Nyhan said it could be methodologically difficult to nail down the precise relationship between overall polarization in society and overall misinformation, but there is abundant evidence that an individual with more polarized views becomes more prone to believing falsehoods.

The second driver of the misinformation era is the emergence of high-profile political figures who encourage their followers to indulge their desire for identity-affirming misinformation. After all, an atmosphere of all-out political conflict often benefits those leaders, at least in the short term, by rallying people behind them.

Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.

“Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.

“When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.

“Depending on the platform, especially, humans are very sensitive to social reward,” he said. Research demonstrates that people who get positive feedback for posting inflammatory or false statements become much more likely to do so again in the future. “You are affected by that.”

In 2016, the media scholars Jieun Shin and Kjerstin Thorson analyzed a data set of 300 million tweets from the 2012 election. Twitter users, they found, “selectively share fact-checking messages that cheerlead their own candidate and denigrate the opposing party’s candidate.” And when users encountered a fact-check that revealed their candidate had gotten something wrong, their response wasn’t to get mad at the politician for lying. It was to attack the fact checkers.

“We have found that Twitter users tend to retweet to show approval, argue, gain attention and entertain,” researcher Jon-Patrick Allem wrote last year, summarizing a study he had co-authored. “Truthfulness of a post or accuracy of a claim was not an identified motivation for retweeting.”

In another study, published last month in Nature, a team of psychologists tracked thousands of users interacting with false information. Republican test subjects who were shown a false headline about migrants trying to enter the United States (“Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”) mostly identified it as false; only 16 percent called it accurate. But if the experimenters instead asked the subjects to decide whether to share the headline, 51 percent said they would.

“Most people do not want to spread misinformation,” the study’s authors wrote. “But the social media context focuses their attention on factors other than truth and accuracy.”

In a highly polarized society like today’s United States — or, for that matter, India or parts of Europe — those incentives pull heavily toward ingroup solidarity and outgroup derogation. They do not much favor consensus reality or abstract ideals of accuracy.

As people become more prone to misinformation, opportunists and charlatans are also getting better at exploiting this. That can mean tear-it-all-down populists who rise on promises to smash the establishment and control minorities. It can also mean government agencies or freelance hacker groups stirring up social divisions abroad for their benefit. But the roots of the crisis go deeper.

“The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”

In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”

Bolsonarismo as an ecosystem, explains Hamilton Carvalho (Poder360)

poder360.com.br

The phenomenon is more than a movement

The production of certainties is a relief

The system brings together distinct segments

Covid deaths become a mere detail

The president with supporters at the Palácio da Alvorada: bolsonarismo is best understood as a social-political system

Hamilton Carvalho, Apr. 24, 2021 (Saturday), 5:50 a.m.; updated Apr. 24, 2021 (Saturday), 7:10 a.m.

Google, Nespresso, Amazon, and Magalu. In the so-called attention economy, competition today is increasingly between ecosystems, usually led by a large company and sheltering several organizations in a network of dependence and complementarity.

The winner is whoever can satisfy more consumer needs within the same system. In the jargon: whoever can offer a superior value proposition.

The idea itself is not all that new. The push came with the digital economy, but ecosystems can be identified in the most varied contexts, from the worlds of soccer and crime to the social systems of education and health. This includes the conglomerate of organizations that has devoted itself to fighting the pandemic, which includes private-sector actors (as in the recent purchase of intubation kits) and which should have been properly led by the federal government.

Yet here we are, heading toward half a million dead. Bolsonaro could have come out of it all a hero, like Bibi in Israel, but, living by bunker logic, he chose to throw sand in those gears from the start, while Brazil visibly regresses institutionally.

Curiously, this has not been enough to erode the support the president maintains in Brazil's conservative slice, which has rationalized without much difficulty the sea of sludge produced by covid.

Seeing bolsonarismo as an ecosystem (more than a social movement backed by a digital army) helps explain the phenomenon. First because, as we know, people's attention has become hyperfragmented and the world has not been easy to understand.

Social-political ecosystems gain an advantage when they can satisfy a basic human need: the comfort of great certainties. A good, solid certainty works like an irresistible barbiturate, Nelson Rodrigues used to say. In a country with low educational attainment, those certainties can afford to tap-dance in the face of reality.

Bolsonarismo also hands its followers, on a platter, an identity painted in moral colors, and again there is nothing new here: just recall nearby examples such as chavismo and lulopetismo. In other words, a person feels superior and gains a tribe to call their own.

That is the current value proposition of the ecosystem built around the president. It is no small thing, even if the ensemble was once stronger, back when it brandished the anti-corruption discourse and liberal sales patter.

Around that value, various segments cluster. There is what a report in El País called a homegrown Brazilian QAnon: people producing fake news and using bots to influence discourse on social networks.

There is the "old-school" business segment (loggers in the Amazon, for example), not to mention the big companies that, like the Centrão, are almost always available for a standing ovation, no matter what.

There are the politicians, the niche supporters (such as sport shooters), the producers of own-the-libs content, the media channels, and (I presume) part of the military and police. And if the whole tangerine lost the Lava Jato crowd, it received as a gift a juicy segment that has been crucial to its resilience: the chloroquine doctors and influencers.

Each of these segments has resources and competencies it uses on behalf of the cause: for example, a radio station's captive audience, or the otherworldly credibility Brazilians grant doctors, even those who are laypeople in evidence-based medicine.

Each performs diverse but complementary activities, reinforcing the value proposition (remember: great certainties and a superior moral identity). The list is long and includes organizing protests, airing opinion programs on radio, and the business gatherings that polish the government's legitimacy with the gel of crony capitalism.

Critically, each segment appropriates a share of the value the whole generates. Politicians capture electoral capital. Broadcasters get exclusives with the president, and ratings. Chloroquine doctors gain showers of patients. Influencers and content manipulators gain followers or, as the congressional fake-news inquiry (CPMI) suspects, jobs in government offices. Business associations keep their channels to Brasília open. The dead are just an inconvenient detail in the landscape.

My sense is that the 2022 contest will play out more at this amplified level. Competitors need to start standing up their own ecosystems now, preferably around more rational and less divisive values. It will not be easy.

In a polarized world, what does ‘follow the science’ mean? (The Christian Science Monitor)

Why We Wrote This

Science is all about asking questions, but when scientific debates become polarized it can be difficult for average citizens to interpret the merits of various arguments.

August 12, 2020

By Christa Case Bryant Staff writer, Story Hinckley Staff writer

Should kids go back to school? 

One South Korean contact-tracing study suggests that sending them back is a bad idea. In analyzing 5,706 COVID-19 patients and their 59,073 contacts, it concluded – albeit with a significant caveat – that 10- to 19-year-olds were the most contagious age group within their household.

A study out of Iceland, meanwhile, found that children under 10 are less likely to get infected and less likely than adults to become ill if they are infected. Coauthor Kári Stefánsson, who is CEO of a genetics company tracking the disease’s spread, said the study didn’t find a single instance of a child infecting a parent.

So when leaders explain their decision on whether to send kids back to school by saying they’re “following the science,” citizens could be forgiven for asking what science they’re referring to exactly – and how sure they are that it’s right. 

But it’s become difficult to ask such questions amid the highly polarized debate around pandemic policies. While areas of consensus have emerged since the pandemic first hit the United States in March, significant gaps remain. Those uncertainties have opened the door for contrarians to gain traction in popular thought.

Some Americans see them as playing a crucial role, challenging a fear-driven groupthink that is inhibiting scientific inquiry, driving unconstitutional restrictions on individual freedom and enterprise, and failing to grapple with the full societal cost of shutting down businesses, churches, and schools. Public health experts who see shutdowns as crucial to saving lives are critical of such actors, due in part to fears that they are abetting right-wing resistance to government restrictions. They have also voiced criticism that some contrarians appear driven by profit or political motives more than genuine concern about public health.

The deluge of studies and competing interpretations has left citizens in a tough spot, especially when data or conclusions are shared on Twitter or TV without full context – like a handful of puzzle pieces thrown in your face, absent any box-top picture to help you fit them together.

“You can’t expect the public to go through all the science, so you rely on people of authority, someone whom you trust, to parse that for you,” says Aleszu Bajak, a science and data journalist who teaches at Northeastern University in Boston. “But now you have more than just the scientists in their ivory tower throwing out all of this information. You have competing pundits, with different incentives, drawing on different science of varying quality.”

The uncertainties have also posed a challenge for policymakers, who haven’t had the luxury of waiting for the full arc of scientific inquiry to be completed.

“The fact is, science, like everything else, is uncertain – particularly when it comes to predictions,” says John Holdren, who served as director of the White House Office of Science and Technology Policy for the duration of President Barack Obama’s eight-year tenure. “I think seasoned, experienced decision-makers understand that. They understand that there will be uncertainties, even in the scientific inputs to their decision-making process, and they have to take those into account and they have to seek approaches that are resilient to uncertain outcomes.” 

Some say that in an effort to reassure citizens that shutdowns were implemented based on scientific input, policymakers weren’t transparent enough about the underlying uncertainties. 

“We’ve heard constantly that politicians are following the science. That’s good, of course, but … especially at the beginning, science is tentative, it changes, it’s evolving fast, it’s uncertain,” Prof. Sir Paul Nurse, director of the Francis Crick Institute in London, recently told a British Parliament committee. One of the founding partners of his independent institute is Imperial College, whose researchers’ conclusions were a leading driver of U.S. and British government shutdowns.

“You can’t just have a single top line saying we’re following science,” he adds. “It has to be more dealing with what we know about the science and what we don’t.” 

Rick Bowmer/AP Granite School District teachers join others gathered at the Granite School District Office on Aug. 4, 2020, in Salt Lake City, to protest the district’s plans for reopening. Teachers showed up in numbers to make sure the district’s school board knew their concerns.

A focus on uncertainty

One scientist who talks a lot about unknowns is John Ioannidis, a highly cited professor of medicine, epidemiology, and population health at Stanford University in California.

Dr. Ioannidis, who has made a career out of poking holes in his colleagues’ research, agrees that masks and social distancing are effective but says there are open questions about how best to implement them. He has also persistently questioned just how deadly COVID-19 is and to what extent shutdowns are affecting mental health, household transmission to older family members, and the well-being of those with non-COVID-19-related conditions.

It’s very difficult, he says, to do randomized trials for things like how to reopen, and different countries and U.S. states have done things in different ways.

“For each one of these decisions, action plans – people said we’re using the best science,” he says. “But how can it be that they’re all using the best science when they’re so different?”

Many scientists say they and their colleagues have been open about the uncertainties, even amid a highly polarized debate around the pandemic and a ramping-up 2020 election season.

“One of the remarkable things about this pandemic is the extent to which many people in the scientific community are explicit about what’s uncertain,” says Marc Lipsitch, a professor of epidemiology and director of the Center for Communicable Disease Dynamics at the Harvard T.H. Chan School of Public Health who is working on a study about how biases can affect COVID-19 research. “There has been a sort of hard core of scientists, even with different policy predispositions, who have been insistent on that.”

“In some ways the politicized nature has made people more aware of the uncertainties,” adds Professor Lipsitch, who says Twitter skeptics push him and his colleagues to strengthen their arguments. “That’s a good voice to have in the back of your head.” 

For the Harvard doctor, Alex Berenson is not that voice. But a growing number of frustrated Americans have gravitated toward the former New York Times reporter’s brash, unapologetic challenging of prevailing narratives. His following on Twitter has grown from around 10,000 to more than 182,000 and counting. 

Mr. Berenson, who investigated big business before leaving The New York Times in 2010 to write spy novels, dives into government data, quotes from scientific studies, and takes to Twitter daily to rail against what he sees as a dangerous overreaction driven by irrational fear and abetted by a liberal media agenda and corporate interests – particularly tech companies, whose earnings have soared during the shutdowns. He refers satirically to those advocating government restrictions as “Team Apocalypse.”

Dr. Lipsitch says that while public health experts like himself who push for lockdowns could be considered hawks, and contrarians like Mr. Berenson doves, such name-calling doesn’t take into account the fact that most scientists hold at least a degree of nuance. “It’s really sort of unsophisticated to say there are two camps, but it serves some people’s interest to demonize the other side,” he says.

Mr. Berenson, the author of a controversial 2019 book arguing that marijuana increases the risk of mental illness and violence, has been accused of cherry-picking data and conflating correlation with causation. Amazon initially blocked publication of his booklet “Unreported Truths about COVID-19 and Lockdowns: Part 1” until Elon Musk got wind of it and called out the tech giant on Twitter. Mr. Berenson prevailed and recently released Part 2 on Amazon, where it has already become the No. 1 best-seller among history of science and medicine e-books.

He strives to broaden the public’s contextual understanding of fatality rates, emphasizing that the vast majority of deaths occur among the elderly; in Italy, for instance, the median age of people who died is 81. He calls into question the reliability of COVID-19 death tolls, which according to the Centers for Disease Control and Prevention can be categorized as such even without a positive test if the disease is assumed to have caused or even contributed to a death.

Earlier this spring, when a prominent model was forecasting overwhelmed hospitals in New York, he pointed out that its projection was quadruple the actual need.

“Nobody had the guts or brains to ask – why is your model off by a factor of four today, and you made it last week?” says Mr. Berenson, referring to the University of Washington’s Institute for Health Metrics and Evaluation projection in early April and expressing disappointment that his former colleagues in the media are not taking a harder look at such questions. “I think unfortunately people have been blinded by ideology.”

Politicization of science

Amid a sense of urgency, fear, and frustration with Americans who refuse to fall in line with government restrictions as readily as their European or especially Asian counterparts, Mr. Berenson and Dr. Ioannidis have faced blowback for airing questions about those restrictions and the science behind them.

Mr. Berenson’s book installments have prompted criticism that he’s seeking profits at the expense of public health, which he has denied. Dr. Ioannidis’ involvement in an April antibody study in Santa Clara County, California, which purported to show that COVID-19 is much less deadly than widely believed, was criticized by other scientists over questions about the accuracy of the test used and a BuzzFeed report that the study was partially funded by JetBlue Airways’ cofounder. Dr. Ioannidis says those questions were fully addressed within two weeks in a revised version that showed, with far more extensive data, that the test was accurate, and adds that he had been unaware of the $5,000 donation, which came through the Stanford development office and was anonymized.

The dismay grew when BuzzFeed News reported in July that, a month before the Santa Clara study, he had offered to convene a small group of world-renowned scientists to meet with President Donald Trump and help him solve the pandemic “by intensifying efforts to understand the denominator of infected people (much larger than what is documented to-date)” and developing a more targeted, data-driven approach than long-term shutdowns, which he said would “jeopardiz[e] so many lives,” according to emails obtained by BuzzFeed News.

While the right has seized on Dr. Ioannidis’ views and some scientists say it’s hard not to conclude that his work is driven by a political agenda, the Greek doctor maintains that partisanship is antithetical to the scientific method, which requires healthy skepticism, among other things.

“Even the word ‘science’ has been politicized. It’s very sad,” he says, observing that in the current environment, scientific conclusions are used to shame, smear, and “cancel” the opposite view. “I think it’s very unfortunate to use science as a silencer of dissent.”

The average citizen, he adds, is filtering COVID-19 debates through their belief systems, media sources, and political ideology, which can leave science at a disadvantage in the public square. “Science hasn’t been trained to deal with these kinds of powerful companions that are far more vocal and better armed to penetrate into social discourse,” says Dr. Ioannidis.

The polarization has been fueled in part by absolutist pundits. In a recent week, “The Rachel Maddow Show” on MSNBC hammered home the rising case rate daily, trumpeted the death toll, and quoted Dr. Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases since 1984, while “Tucker Carlson Tonight” on Fox News did not once mention government data, instead featuring anecdotes from business owners affected by the shutdowns and calling into question the authority of unelected figures such as Dr. Fauci.

With Americans fed different media diets, it’s not surprising that partisan views on the severity of the pandemic have diverged further in recent months: 85% of Democrats see it as a major threat – nearly double the share of Republicans, according to a mid-July Pew Research poll. And in a related division that predates the pandemic, another Pew poll from February showed that Republicans are less likely to support scientists taking an active role in social policy matters – just 43%, compared with 73% of Democrats and Democratic-leaning independents.

“If you have more of a populist type of worldview, where you are concerned that elites and scientists and officials act in their own interests first, it becomes very easy to make assumptions that they are doing something to control the population,” says Prof. Asheley Landrum, a psychologist at Texas Tech University who specializes in science communication.

Beyond following the science

Determining what exactly “the science” says is only one part of the equation; figuring out precisely how to “follow” it poses another set of challenges for policymakers on questions like whether to send students back to school.

“Even if you had all the science pinned down, there are still some tough value judgments about the dangers of multiplying the pandemic or the dangers of keeping kids at home,” says Dr. Holdren, President Obama’s science adviser, an engineer and physicist who now co-directs the science, technology, and public policy program at Harvard Kennedy School.

Dr. Lipsitch echoes that point and offers an example of two schools that both have a 10% risk of an outbreak. In one, where there are older students from high-income families who are more capable of learning remotely, leaders may decide that the 10% risk isn’t worth reopening. But in another school with the same assessed risk, where the students are younger and many depend on free and reduced lunch, a district may decide the risk is a trade-off it’s willing to make in support of the students’ education and well-being.

“Following the science just isn’t enough,” says Dr. Lipsitch. “It’s incumbent on responsible leaders to use science to do the reasoning about how to do the best thing given your values, but it’s not an answer.”

Newly Identified Social Trait Could Explain Why Some People Are Particularly Tribal (Science Alert)

PETER DOCKRILL 19 AUGUST 2020

Having strong, biased opinions may say more about your own individual way of behaving in group situations than it does about your level of identification with the values or ideals of any particular group, new research suggests.

This behavioural trait – which researchers call ‘groupiness’ – could mean that individuals will consistently demonstrate ‘groupy’ behaviour across different kinds of social situations, with their thoughts and actions influenced by simply being in a group setting, whereas ‘non-groupy’ people aren’t affected in the same way.

“It’s not the political group that matters, it’s whether an individual just generally seems to like being in a group,” says economist and lead researcher Rachel Kranton from Duke University.

“Some people are ‘groupy’ – they join a political party, for example. And if you put those people in any arbitrary setting, they’ll act in a more biased way than somebody who has the same political opinions, but doesn’t join a political party.”

In an experiment with 141 people, participants were surveyed on their political affiliations, which identified them as self-declared Democrats or Republicans, or as subjects who leaned more Democrat or Republican in terms of their political beliefs (called Independents, for the purposes of the study).

They also took part in a survey that asked them a number of seemingly neutral questions about their aesthetic preferences in relation to a series of artworks, choosing favourites among similar-looking paintings or different lines of poetry.

After these exercises, the participants took part in tests where they were placed in groups – either based around political affiliations (Democrats or Republicans), or more neutral categorisations reflecting their answers about which artworks they preferred. In a third test, the groups were random.

While in these groups, the participants ran through an income allocation exercise, in which they could choose to allocate various amounts of money to themselves, to fellow group members, or to members of the other group.

The researchers expected to find bias in terms of these income allocations based around political mindsets, with people giving themselves more money, along with people who shared their political persuasion. But they also found something else.

“We compare Democrats with D-Independents and find that party members do show more in-group bias; on average, their choices led to higher income for in-group participants,” the authors explain in their study.

“Yet, these party-member participants also show more in-group bias in a second nonpolitical setting. Hence, identification with the group is not necessarily the driver of in-group bias, and the analysis reveals a set of subjects who consistently shows in-group bias, while another does not.”

According to the data, there is a subpopulation of ‘groupy’ people and a subpopulation of ‘non-groupy’ people. The former’s actions are influenced by being in a group setting, making them more likely to demonstrate bias against those outside their group.

By contrast, the latter type, non-groupy individuals, don’t display this kind of tendency, and are more likely to act the same way, regardless of whether or not they’re in a group setting. These non-groupy individuals also seem to make faster decisions than groupy people, the team found.

“We don’t know if non-groupy people are faster generally,” Kranton says.

“It could be they’re making decisions faster because they’re not paying attention to whether somebody is in their group or not each time they have to make a decision.”

Of course, as illuminating as the discovery of this apparent trait is, we need a lot more research to be sure we’ve identified something discrete here.

After all, this is a pretty small study all told, and the researchers acknowledge the need to conduct the same kind of experiments with participants in several settings, to support the foundations of their groupiness concept, and to try to identify what it is that predisposes people to this kind of groupy or non-groupy mindset.

“There’s some feature of a person that causes them to be sensitive to these group divisions and use them in their behaviour across at least two very different contexts,” one of the team, Duke University psychologist Scott Huettel, explains.

“We didn’t test every possible way in which people differentiate themselves; we can’t show you that all group-minded identities behave this way. But this is a compelling first step.”

The findings are reported in PNAS.