Technologically speaking, we live in a time of plenty. Today, I can ask a chatbot to render The Canterbury Tales as if written by Taylor Swift or to help me write a factually inaccurate autobiography. With three swipes, I can summon almost everyone listed in my phone and see their confused faces via an impromptu video chat. My life is a gluttonous smorgasbord of information, and I am on the all-you-can-eat plan. But there is one specific corner where technological advances haven’t kept up: weather apps.
Weather forecasts are always a game of prediction and probabilities, but these apps seem to fail more often than they should. At best, they perform about as well as meteorologists, but some of the most popular ones fare much worse. The cult favorite Dark Sky, for example, which shut down earlier this year and was rolled into the Apple Weather app, accurately predicted the high temperature in my zip code only 39 percent of the time, according to ForecastAdvisor, which evaluates online weather providers. The Weather Channel’s app, by comparison, comes in at 83 percent. The Apple app, although not rated by ForecastAdvisor, has a reputation for off-the-mark forecasts and has been consistently criticized for presenting faulty radar screens, mixing up precipitation totals, or, as it did last week, breaking altogether. Dozens of times, the Apple Weather app has lulled me into a false sense of security, leaving me wet and betrayed after a run, bike ride, or round of golf.
People love to complain about weather forecasts, dating back to when local-news meteorologists were the primary source for those planning their morning commutes. But the apps have produced a new level of frustration, at least judging by hundreds of cranky tweets over the past decade. Nearly two decades into the smartphone era—when anyone can theoretically harness the power of government weather data and dissect dozens of complex, real-time charts and models—we are still getting caught in the rain.
Weather apps are not all the same. There are tens of thousands of them, from the simply designed Apple Weather to the expensive, complex, data-rich Windy.App. But all of these forecasts are working off of similar data, which are pulled from places such as the National Oceanic and Atmospheric Administration (NOAA) and the European Centre for Medium-Range Weather Forecasts. Traditional meteorologists interpret these models based on their training as well as their gut instinct and past regional weather patterns, and different weather apps and services tend to use their own secret sauce of algorithms to divine their predictions. On an average day, you’re probably going to see a similar forecast from app to app and on television. But when it comes to how people feel about weather apps, it’s the edge cases, which usually take place during severe weather events, that stick in a person’s mind. “Eighty percent of the year, a weather app is going to work fine,” Matt Lanza, a forecaster who runs Houston’s Space City Weather, told me. “But it’s that 20 percent where people get burned that’s a problem.”
No people on the planet have a more tortured and conflicted relationship with weather apps than those who interpret forecasting models for a living. “My wife is married to a meteorologist, and she will straight up question me if her favorite weather app says something different than my forecast,” Lanza told me. “That’s how ingrained these services have become in most people’s lives.” The basic issue with weather apps, he argues, is that many of them remove a crucial component of a good, reliable forecast: a human interpreter who can relay caveats about models or offer a range of outcomes instead of a definitive forecast.
Lanza explained the human touch of a meteorologist using the example of a so-called high-resolution forecasting model that can predict only 18 hours out. It is generally quite good, he told me, at predicting rain and thunderstorms—“but every so often it runs too hot and over-indexes the chances of a bad storm.” This model, if left to its own devices, will project showers and thunderstorms blanketing the region for hours when, in reality, the storm might only cause 30 minutes of rain in an isolated area of the mapped region. “The problem is when you take the model data and push it directly into the app with no human interpretation,” he said. “Because you’re not going to get nuance from these apps at all. And that can mean a difference between a chance of rain all day and it’s going to rain all day.”
But even this explanation has caveats; all weather apps are different, and their forecasts have varying levels of sophistication. Some pipe model data right in, whereas others are curated using artificial intelligence. Peter Neilley, the Weather Channel’s director of weather forecasting sciences and technologies, said in an email that the company’s app incorporates “billions of weather data points,” adding that “our expert team of meteorologists does oversee and correct the process as needed.”
Weather apps might be less reliable for another reason too. When it comes to predicting severe weather such as snow, small changes in atmospheric moisture—the type of change an experienced forecaster might notice—can cause huge variances in precipitation outcomes. An app with no human curation might choose to average the model’s range of outcomes, producing a forecast that doesn’t reflect the dynamic situation on the ground. Or consider cities with microclimates: “Today, in Chicago, the lakefront will sit in the lower 40s, and the suburbs will be 50-plus degrees,” Greg Dutra, a meteorologist at ABC 7 Chicago, told me. “Often, the difference is even more stark—20-degree swings over just miles.” These sometimes subtle temperature disparities can mean very different forecasts for people living in the same region—something that one-size-fits-all weather apps don’t always pick up.
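A toy sketch makes the averaging problem concrete (made-up numbers, not any app’s actual pipeline): when a model’s runs split between two regimes, the average lands on an outcome that no individual run actually predicts.

```python
import statistics

# Toy ensemble of snowfall predictions (inches) from 10 model runs.
# Half the runs see the storm missing entirely; half see a direct hit.
ensemble_snowfall = [0, 0, 0, 0, 0, 10, 10, 10, 10, 10]

# A naive app might report the mean: a 5-inch forecast no run supports.
print(statistics.mean(ensemble_snowfall))       # 5

# A human forecaster would instead convey the two real possibilities.
print(statistics.multimode(ensemble_snowfall))  # [0, 10]
```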
Naturally, meteorologists think that what they do is superior to forecasting by algorithm alone, but even weather-app creators told me that the challenges are real. “It’s impossible for a weather-data provider to be accurate everywhere in the world,” Brian Mueller, the founder of the app Carrot Weather, told me. His solution to the problem of app-based imprecision is to give users more ability to choose what they see when they open Carrot, letting them customize what specific weather information the app surfaces as well as what data sources the app will draw from. Mueller said that he learned from Dark Sky’s success how important beautiful, detailed radar maps were—both as a source of weather data and for entertainment purposes. In fact, meteorology seems to be only part of the allure when it comes to building a beloved weather app. Carrot has a pleasant design interface, with bright colors and Easter eggs scattered throughout, such as geography challenges based on its weather maps. Mueller has also hooked Carrot up to ChatGPT to allow people to chat with the app’s fictional personality.
But what if these detailed models and dizzying maps, in the hands of weather rubes like myself, are the real problem? “The general public has access to more weather information than ever, and I’d posit that that’s a bad thing,” Chris Misenis, a weather-forecasting consultant in North Carolina who goes by the name “Weather Moose,” told me. “You can go to PivotalWeather.com right now and pull up just about any model simulation you want.” He argues that these data are fine to look at if you know how to interpret them, but for people who aren’t trained to analyze them, they are at best worthless and at worst dangerous.
In fact, forecasts are better than ever, Andrew Blum, a journalist and the author of the book The Weather Machine: A Journey Inside the Forecast, told me. “But arguably, we are less prepared to understand,” he said, “and act upon that improvement—and a forecast is only as good as our ability to make decisions with it.” Indeed, even academic research around weather apps suggests that apps fail worst when they give users a false sense of certainty around forecasting. A 2016 paper for the Royal Meteorological Society argued that “the current way of conveying forecasts in the most common apps is guilty of ‘immodesty’ (‘not admitting that sometimes predictions may fail’) and ‘impoverishment’ (‘not addressing the broader context in which forecasts … are made’).”
The conflicted relationship that people have with weather apps may simply be a manifestation of the information overload that dominates all facets of modern life. These products grant anyone with a phone access to an overwhelming amount of information that can be wildly complex. Greg Dutra shared one such public high-resolution model from NOAA with me that was full of indecipherable links to jargony terms such as “0-2 km max vertical vorticity.” Weather apps seem to respond to this fire hose of data mostly in two ways: by boiling it down to a reductive “partly sunny” icon, or by bombarding the user with information they might not need or understand. At its worst, a modern weather app seems to flatter people, entrusting them to do their own research even if they’re not equipped. I’m not too proud to admit that some of the fun of toying around with Dark Sky’s beautiful radar or Windy.App’s endless array of models is the feeling of role-playing as a meteorologist. But the truth is that I don’t really know what I’m looking at.
What people seem to be looking for in a weather app is something they can justify blindly trusting and letting into their lives—after all, it’s often the first thing you check when you roll over in bed in the morning. According to the 56,400 ratings of Carrot in Apple’s App Store, its die-hard fans find the app entertaining and even endearing. “Love my psychotic, yet surprisingly accurate weather app,” one five-star review reads. Although many people need reliable forecasting, true loyalty comes from a weather app that makes people feel good when they open it.
Our weather-app ambivalence is a strange pull between feeling grateful for instant access to information and simultaneously navigating a sense of guilt and confusion about how the experience is also, somehow, dissatisfying—a bit like staring down Netflix’s endless library and feeling as if there’s nothing to watch. Weather apps aren’t getting worse. In fact, they’re only getting more advanced, inputting more and more data and offering them to us to consume. Which, of course, might be why they feel worse.
Less than 2% of the three billion letters of the human genome are dedicated to proteins
David Cox
April 17, 2023
In April 2003, the complete sequencing of the “book of life” encoded in the human genome was declared finished, after 13 years of work. The world was brimming with expectation.
Having consumed around US$3 billion (R$15 billion), the Human Genome Project was expected to deliver treatments for chronic diseases and to explain every genetically determined detail of our lives.
But even as the press conferences announced the triumph of this new era of biological knowledge, the instruction manual for human life had already delivered an unexpected surprise.
The prevailing conviction at the time was that the vast majority of the human genome would consist of instructions for producing proteins, the building blocks of all living organisms, which play an immense variety of roles within and between our cells.
And with more than 200 different cell types in the human body, it seemed to make sense that each of them would need its own genes to carry out its necessary functions.
The emergence of unique sets of proteins was believed to be vital to the evolution of our species and of our cognitive powers. After all, we are the only species capable of sequencing its own genome.
But what we discovered is that less than 2% of the three billion letters of the human genome is dedicated to proteins. Only around 20,000 protein-coding genes were found among the long strings of molecules that make up our DNA sequences.
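A rough calculation shows why 20,000 genes occupy so little of the genome (a back-of-envelope sketch; the average coding length used here is an illustrative assumption, not a figure from this article):

```python
GENOME_LETTERS = 3_000_000_000      # ~3 billion DNA letters (base pairs)
PROTEIN_CODING_GENES = 20_000       # the count cited above
AVG_CODING_BASES_PER_GENE = 1_500   # assumed average coding length

coding_fraction = PROTEIN_CODING_GENES * AVG_CODING_BASES_PER_GENE / GENOME_LETTERS
print(f"{coding_fraction:.1%}")     # 1.0% -- comfortably under 2%
```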
Geneticists were astonished to discover that humans have a protein-coding gene count similar to that of some of the simplest creatures on the planet. Worms, for example, have around 20,000 of these genes, while fruit flies have around 13,000.
Overnight, the scientific world found itself facing a rather uncomfortable truth: much of our understanding of what makes us human might be wrong.
“I remember the sheer surprise,” says molecular biologist Samir Ounzain, chief executive of the Swiss company Haya Therapeutics, which seeks to use our knowledge of human genetics to develop new treatments for cardiovascular disease, cancer, and other chronic illnesses.
“That was the moment people started asking themselves, ‘Do we have the wrong concept of what biology is?’”
The remaining 98% of our DNA became known as dark matter, or the dark genome: an enormous, mysterious expanse of letters with no obvious purpose or meaning.
Initially, some geneticists suggested that the dark genome was simply junk DNA, a kind of waste dump of human evolution: the remains of broken genes that had long since ceased to be relevant.
But to others, it was always clear that the dark genome would be fundamental to our understanding of humanity.
“Evolution has absolutely no tolerance for junk,” says Kári Stefánsson, chief executive of the Icelandic company deCODE Genetics, which has sequenced more whole genomes than any other institution in the world.
“There must be an evolutionary reason for maintaining the size of the genome,” he says.
Two decades on, we now have the first clues to the dark genome’s function. Its primary role, it appears, is to regulate the decoding, or expression, of the protein-coding genes.
The dark genome helps control how our genes behave in response to the environmental pressures our bodies face throughout life, from diet to stress, pollution, exercise, and the amount we sleep. This field is known as epigenetics.
Ounzain says he likes to think of proteins as the hardware of life, while the dark genome is the software, processing and reacting to outside information.
That is why the more we learn about the dark genome, the better we understand human complexity and how we became who we are today.
“If you think of us as a species, we are masters of adapting to the environment at every level,” says Ounzain. “And that adaptation is information processing.”
“When you come back to the question of what makes us different from a fly or a worm, we increasingly find that the answers lie in the dark genome,” he says.
Transposons and our evolutionary past
When scientists began examining the book of life in the mid-2000s, one of the greatest difficulties was that the non-protein-coding regions of the human genome appeared to be riddled with repeated DNA sequences, known as transposons.
These repetitive sequences were so ubiquitous that they made up around half of the genome in all living mammals.
“The very assembly of the first human genome was made more difficult by the presence of these repetitive sequences,” says Jef Boeke, director of the Dark Matter Project at NYU Langone, an academic medical center in New York.
“Simply analyzing any kind of sequence is much easier when it is a unique sequence.”
Initially, transposons were ignored by geneticists. Most genetic studies preferred to focus purely on the exome, the small protein-coding region of the genome.
But over the past decade, the development of more sophisticated DNA-sequencing technologies has allowed geneticists to study the dark genome in greater detail.
In one such experiment, researchers deleted a specific transposon fragment in mice, which caused half of the animals’ pups to die before birth. The result shows that some transposon sequences can be fundamental to survival.
Perhaps the best explanation for why transposons exist in our genome is that they are extremely ancient, dating back to the earliest forms of life, according to Boeke.
Other scientists have suggested that they come from viruses that invaded our DNA over the course of human history, before gradually being repurposed to serve some useful function in the body.
“Most of the time, transposons are pathogens that infect us, and they can infect germline cells, [which are] the kind of cells we pass on to the next generation,” says Dirk Hockemeyer, an assistant professor of cell biology at the University of California, Berkeley.
“They can then be inherited and become stably integrated into the genome,” he says.
Boeke describes the dark genome as a living fossil record of fundamental changes to our DNA that took place long ago, in ancient history.
One of the most fascinating characteristics of transposons is that they can move from one part of the genome to another (the behavior that earned them their name), creating or reversing mutations in genes, sometimes with extraordinary consequences.
The movement of a transposon into a different gene may, for example, have been responsible for the loss of the tail in the great apes, paving the way for our species to develop the ability to walk upright.
“Here you have this single event that had an enormous effect on evolution, giving rise to a whole lineage of great apes, including us,” says Boeke.
But just as our growing understanding of the dark genome explains ever more about evolution, it may also shed light on why diseases arise.
Ounzain points out that if we look at genome-wide association studies (GWAS), which search for genetic variations across large numbers of people to identify those linked to disease, the vast majority of variations tied to chronic illnesses such as Alzheimer’s disease, diabetes, and heart disease lie not in the protein-coding regions but in the dark genome.
The dark genome and disease
The island of Panay, in the Philippines, is best known for its shimmering white sands and steady stream of tourists. But this idyllic place hides a tragic secret.
Panay is home to the world’s largest number of existing cases of an incurable movement disorder called X-linked dystonia-parkinsonism (XDP).
As with Parkinson’s disease, people with XDP develop a series of symptoms that affect their ability to walk and to react quickly to situations.
Since XDP was identified in the 1970s, the disease has only ever been diagnosed in people of Filipino descent. This remained a mystery for a long time, until geneticists discovered that all of these individuals carry the same unique variant of a gene called TAF1.
The onset of symptoms appears to be caused by a transposon in the middle of the gene, which is able to regulate its function in a way that harms the body over time. This genetic variant is believed to have first arisen around 2,000 years ago, before being passed down and becoming established in the population.
“TAF1 is an essential gene, meaning it is required for the growth and multiplication of every type of cell,” says Boeke.
“When you tweak its expression, you get this very specific defect, which manifests as a horrible form of parkinsonism.”
This is a simple example of how certain DNA sequences in the dark genome can control the function of various genes, activating or repressing the conversion of genetic information into proteins in response to cues received from the environment.
The dark genome also provides instructions for building various kinds of molecules known as non-coding RNAs. These can play many roles, from helping to make some proteins to blocking the production of others or helping to regulate gene activity.
“The RNAs produced by the dark genome act like the conductors of an orchestra, directing how your DNA responds to the environment,” Ounzain explains. And these non-coding RNAs are now increasingly regarded as the link between the dark genome and a range of chronic diseases.
The idea is that if our lifestyle systematically sends the wrong signals to the dark genome (through smoking, poor diet, and inactivity, for example), the RNA molecules it produces can push the body into a diseased state, altering gene activity in ways that increase inflammation or promote cell death.
Certain non-coding RNAs are thought to be able to switch off or ramp up the activity of a gene called p53, which normally acts to prevent the formation of tumors.
In complex diseases such as schizophrenia and depression, a whole set of non-coding RNAs may act in concert to reduce or increase the expression of certain genes.
But our growing recognition of the dark genome’s importance is already yielding new ways of treating these diseases.
The drug-development industry has traditionally focused on proteins, but some companies are realizing that it may be more effective to try to disrupt the non-coding RNAs that control the genes behind these processes.
In the field of cancer vaccines, for example, companies sequence DNA from patients’ tumor samples to try to identify a suitable target for the immune system to attack. Most approaches focus only on the protein-coding regions of the genome.
But the German biotechnology company CureVac is pioneering a method of analyzing the non-protein-coding regions, in the hope of finding a target that can stop cancer at its source.
Meanwhile, Ounzain’s company, Haya Therapeutics, is currently running a drug-development program aimed at a series of non-coding RNAs that drive the formation of scar tissue, or fibrosis, in the heart, a process that can lead to heart failure.
One hope is that this approach can minimize the side effects associated with many commonly used drugs.
“The problem with drugging proteins is that there are only about 20,000 of them in the body, and most are expressed in many different cells and processes that have nothing to do with the disease,” says Ounzain.
“But the activity of the dark genome is extraordinarily specific. There are non-coding RNAs that regulate fibrosis only in the heart, so by drugging them we get a potentially very safe medicine,” he explains.
The unknown
At the same time, some of this enthusiasm has to be tempered by the fact that, when it comes to understanding how the dark genome works, we have only just scratched the surface.
We know very little about what geneticists describe as the basic rules: How do these non-protein-coding sequences communicate with one another to regulate gene activity? And how exactly do these complex webs of interactions play out over long periods of time until they manifest as disease traits, such as the neurodegeneration seen in Alzheimer’s disease?
“We are still at the beginning,” says Dirk Hockemeyer. “The next 15 to 20 years will still be like this: [we will] identify specific behaviors in cells that can lead to disease, and then try to identify the parts of the dark genome that may be involved in modifying those behaviors. But we now have the tools to dig into this, which we didn’t have before.”
One of those tools is gene editing.
Jef Boeke and his team are currently trying to learn more about how XDP symptoms develop by reproducing the TAF1 transposon insertion in mice.
In the future, a more ambitious version of this project could try to understand how non-protein-coding DNA sequences regulate genes by building blocks of synthetic DNA from scratch and transplanting them into mouse cells.
“We are now involved in at least two projects that take a huge piece of DNA that does nothing and try to install all of these elements into it,” says Boeke.
“We put a gene there, a non-coding sequence in front of it, and another farther away, to see how that gene behaves,” he explains. “We now have all the tools to really build pieces of the dark genome from the bottom up and try to understand it.”
Hockemeyer predicts that the more we learn, the more unexpected surprises the genetic book of life will keep springing on us, just as it did when the first genome was sequenced 20 years ago.
“There are so many questions,” he says. “Is our genome still evolving over time? Will we ever manage to decode it completely?”
“We are still in this open dark space that we are exploring, and there are many truly fantastic discoveries waiting for us.”
Some things can’t be said easily in polite company. They cause offence or stir up intense anxiety. Where one might expect a conversation, what actually occurs is what the sociologist Eviatar Zerubavel calls a ‘socially constructed silence.’
In his book Don’t Even Think About It, George Marshall argues that after the fiasco of COP 15 at Copenhagen and ‘Climategate’—when certain sections of the press claimed (wrongly as it turned out) that leaked emails of researchers at the University of East Anglia showed that data had been manipulated—climate change became a taboo subject among most politicians, another socially constructed silence with disastrous implications for the future of climate action.
In 2013-14 we carried out interviews with leading UK climate scientists and communicators to explore how they managed the ethical and emotional challenges of their work. While the shadow of Climategate still hung over the scientific community, our analysis drew us to the conclusion that the silence Marshall spoke about went deeper than a reaction to these specific events.
Instead, a picture emerged of a community which still identified strongly with an idealised picture of scientific rationality, in which the job of scientists is to get on with their research quietly and dispassionately. As a consequence, this community is profoundly uncomfortable with the storm of political controversy that climate research is now attracting.
The scientists we spoke to were among a minority who had become engaged with policy makers, the media and the general public about their work. A number of them described how other colleagues would bury themselves in the excitement and rewards of research, denying that they had any responsibility beyond developing models or crunching the numbers. As one researcher put it, “so many scientists just want to do their research and as soon as it has some relevance, or policy implications, or a journalist is interested in their research, they are uncomfortable.”
We began to see how for many researchers, this idealised picture of scientific practice might also offer protection at an unconscious level from the emotional turbulence aroused by the politicisation of climate change.
In her classic study of the ‘stiff upper lip’ culture of nursing in the UK in the 1950s, the psychoanalyst and social researcher Isobel Menzies Lyth developed the idea of ‘social defences against anxiety,’ and it seems very relevant here. A social defence is an organised but unconscious way of managing the anxieties that are inherent in certain occupational roles. For example, the practice of what was then called the ‘task list’ system fragmented nursing into a number of routines, each one executed by a different person—hence the ‘bed pan nurse’, the ‘catheter nurse’ and so on.
Ostensibly, this was done to generate maximum efficiency, but it also protected nurses from the emotions that were aroused by any real human involvement with patients, including anxiety, something that was deemed unprofessional by the nursing culture of the time. Like climate scientists, nurses were meant to be objective and dispassionate. But this idealised notion of the professional nurse led to the impoverishment of patient care, and meant that the most emotionally mature nurses were the least likely to complete their training.
While it’s clear that social defences such as hyper-rationality and specialisation enable climate scientists to get on with their work relatively undisturbed by public anxieties, this approach also generates important problems. There’s a danger that these defences eventually break down and anxiety re-emerges, leaving individuals not only defenceless but with the additional burden of shame and personal inadequacy for not maintaining that stiff upper lip. Stress and burnout may then follow.
Although no systematic research has been undertaken in this area, there is anecdotal evidence of such burnout in a number of magazine articles like those by Madeleine Thomas and Faith Kearns, in which climate scientists speak out about the distress that they or others have experienced, their depression at their findings, and their dismay at the lack of public and policy response.
Even if social defences are successful and anxiety is mitigated, this very success can have unintended consequences. By treating scientific findings as abstracted knowledge without any personal meaning, climate researchers have been slow to take responsibility for their own carbon footprints, thus running the risk of being exposed for hypocrisy by the denialist lobby. One research leader candidly reflected on this failure: “Oh yeah and the other thing [that’s] very, very important I think is that we ought to change the way we do research so we’re sustainable in the research environment, which we’re not now because we fly everywhere for conferences and things.”
The same defences also contribute to the resistance of most climate scientists to participation in public engagement or intervention in the policy arena, leaving these tasks to a minority who are attacked by the media and even by their own colleagues. One of our interviewees who has played a major role in such engagement recalled being criticised by colleagues for “prostituting science” by exaggerating results in order to make them “look sexy.” “You know we’re all on the same side,” she continued, “why are we shooting arrows at each other, it is ridiculous.”
The social defences of logic, reason and careful debate were of little use to the scientific community in these cases, and their failure probably contributed to internal conflicts and disagreements when anxiety could no longer be contained—so they found expression in bitter arguments instead. This in turn makes those that do engage with the public sphere excessively cautious, which encourages collusion with policy makers who are reluctant to embrace the radical changes that are needed.
As one scientist put it when discussing the goal agreed at the Paris climate conference of limiting global warming to no more than 2°C: “There is a mentality in [the] group that speaks to policy makers that there are some taboo topics that you cannot talk about. For instance the two degree target on climate change…Well the emissions are going up like this (the scientist points upwards at a 45-degree angle), so two degrees at the moment seems completely unrealistic. But you’re not allowed to say this.”
Worse still, the minority of scientists who are tempted to break the silence on climate change run the risk of being seen as whistleblowers by their colleagues. Another research leader suggested that—in private—some of the most senior figures in the field believe that the world is heading for a rise in temperature closer to six degrees than two.
“So repeatedly I’ve heard from researchers, academics, senior policy makers, government chief scientists, [that] they can’t say these things publicly,” he told us, “I’m sort of deafened, deafened by the silence of most people who work in the area that we work in, in that they will not criticise when there are often evidently very political assumptions that underpin some of the analysis that comes out.”
It seems that the idea of a ‘socially constructed silence’ may well apply to crucial aspects of the interface between climate scientists and policy makers. If this is the case then the implications are very serious. Despite the hope that COP 21 has generated, many people are still sceptical about whether the rhetoric of Paris will be translated into effective action.
If climate change work is stuck at the level of ‘symbolic policy making’—a set of practices designed to make it look as though political elites are doing something while actually doing nothing—then it becomes all the more important for the scientific community to find ways of abandoning the social defences we’ve described and speak out as a whole, rather than leaving the task to a beleaguered and much-criticised minority.
For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn’t great; there was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth.
Most of those folks didn’t even realize they were in an experiment. Many of them, including me, weren’t born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”
This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.
(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)
That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.
Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.
The results are in. It failed.
Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.
Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.
It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning physics discoveries from the 1990s and 2000s because there aren’t enough of them.
Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it’s all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.
What went wrong?
Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?
It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In one study, reviewers caught 30% of the major flaws; in another, they caught 25%; in a third, they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.
In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published. Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with a paper about dishonesty that clearly has fake data (ironic), with researchers who have published dozens or even hundreds of fraudulent papers, and with one debacle after another.
Why don’t reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don’t require you to make your data public at all. You’re supposed to provide them “on request,” but most people don’t. That’s how we’ve ended up in sitcom-esque situations like ~20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years.
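For the curious, here is a minimal sketch of how that particular mangling can be caught (a hypothetical helper, not anything from the papers in question): Excel silently turns symbols like SEPT2 and MARCH1 into dates such as “2-Sep” and “1-Mar,” which a simple pattern check can flag.

```python
import re

# Matches the "day-month" strings Excel substitutes for gene symbols,
# e.g. SEPT2 -> "2-Sep", MARCH1 -> "1-Mar".
DATE_LIKE = re.compile(r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$")

def mangled_genes(symbols):
    """Return entries in a gene-symbol column that look date-converted."""
    return [s for s in symbols if DATE_LIKE.match(s)]

print(mangled_genes(["TP53", "2-Sep", "BRCA1", "1-Mar"]))  # ['2-Sep', '1-Mar']
```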
(When one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor’s words, “a possibility that the raw data did not exist from the beginning.”)
The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶
Here’s another way that we can test whether peer review worked: did it actually earn scientists’ trust?
Scientists often say they take peer review very seriously. But people say lots of things they don’t mean, like “It’s great to e-meet you” and “I’ll never leave you, Adam.” If you look at what scientists actually do, it’s clear they don’t think peer review really matters.
First: if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.
Second: once a paper gets published, we shred the reviews. A few journals publish reviews; most don’t. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.
And third: scientists take unreviewed work seriously without thinking twice. We read “preprints” and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, “So interesting! I can’t wait for it to be peer reviewed so I can find out if it’s true.”
Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner:
I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.
It’s easy to imagine how things could be better—my friend Ethan and I wrote a whole paper on it—but that doesn’t mean it’s easy to make things better. My complaints about peer review were a bit like looking at the ~35,000 Americans who die in car crashes every year and saying “people shouldn’t crash their cars so much.” Okay, but how?
Lack of effort isn’t the problem: remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn’t seem to make them any better. Neither does training them. Maybe we can fix some things on the margins, but remember that right now we’re publishing papers that use capital T’s instead of error bars, so we’ve got a long, long way to go.
What if we made peer review way stricter? That might sound great, but it would make lots of other problems with peer review way worse.
For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, “Help! A mysterious number is persecuting me,” and now they begin like, “Humans have been said, at various times and places, to exist, and even to have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smergdorf & Blugensnout, 1978; Stikkiwikket, 2002; von Fraud et al., 2018b)”.
This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he. “Whenever someone tells me they loved my paper,” he said, “I say thank you, even though I know they didn’t read it.” Stricter peer review would mean even more boring papers, which means even fewer people would read them.
Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won’t count for anything unless peer reviewers like them makes you worse at thinking. It’s like being a teenager again: before you do anything, you ask yourself, “BUT WILL PEOPLE THINK I’M COOL?” When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything’s pretty much perfect right now, we need revolutionary ideas real bad.
On the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the ~4.7 million articles they publish every year. Good luck!
Peer review doesn’t work and there’s probably no way to fix it. But a little bit of vetting is better than none at all, right?
I say: no way.
Imagine you discover that the Food and Drug Administration’s method of “inspecting” beef is just sending some guy (“Gary”) around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says “INSPECTED BY THE FDA.” You’d be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he’s going to miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.
That’s what our current system of peer review does, and it’s dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for twelve years before it was retracted. How many kids haven’t gotten their shots because one rotten paper made it through peer review and got stamped with the scientific seal of approval?
If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: “NOBODY HAS REALLY CHECKED WHETHER THIS PAPER IS TRUE OR NOT. IT MIGHT BE MADE UP, FOR ALL WE KNOW.” That would at least give people the appropriate level of confidence.
Why did peer review seem so reasonable in the first place?
I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.
But science is a strong-link problem: progress depends on the quality of our best work.Better ideas don’t always triumph immediately, but they do triumph eventually, because they’re more useful. You can’t land on the moon using Aristotle’s physics, you can’t turn mud into frogs using spontaneous generation, and you can’t build bombs out of phlogiston. Newton’s laws of physics stuck around; his recipe for the Philosopher’s Stone didn’t. We didn’t need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.
If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.
Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.
Nobody was in charge of our peer review experiment, which means nobody has the responsibility of saying when it’s over. Seeing no one else, I guess I’ll do it:
We’re done, everybody! Champagne all around! Great work, and congratulations. We tried peer review and it didn’t work.
Honestly, I’m so relieved. That system sucked! Waiting months just to hear that an editor didn’t think your paper deserved to be reviewed? Reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe? Spending a whole day emailing a journal begging them to let you use the word “years” instead of always abbreviating it to “y” for no reason (this literally happened to me)? We never have to do any of that ever again.
I know we all might be a little disappointed we wasted so much time, but there’s no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that’s okay—it seemed like a good idea at the time, and now we know it wasn’t. That’s science! It will always be important for scientists to comment on each other’s ideas, of course. It’s just this particular way of doing it that didn’t work.
What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.
Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.
Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?
I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.
An international team led by Oregon State University researchers says in a report published today that the Earth’s vital signs have reached “code red” and that “humanity is unequivocally facing a climate emergency.”
In the special report, “World Scientists’ Warning of a Climate Emergency 2022,” the authors note that 16 of 35 planetary vital signs they use to track climate change are at record extremes. The report’s authors share new data illustrating the increasing frequency of extreme heat events and heat-related deaths, rising global tree cover loss because of fires, and a greater prevalence of insects and diseases thriving in the warming climate. Food insecurity and malnutrition caused by droughts and other climate-related extreme events in developing countries are increasing the number of climate refugees.
William Ripple, a distinguished professor in the OSU College of Forestry, and postdoctoral researcher Christopher Wolf are the lead authors of the report, and 10 other U.S. and global scientists are co-authors.
“Look at all of these heat waves, fires, floods and massive storms,” Ripple said. “The specter of climate change is at the door and pounding hard.”
“As we can see by the annual surges in climate disasters, we are now in the midst of a major climate crisis, with far worse to come if we keep doing things the way we’ve been doing them,” Wolf said.
“As Earth’s temperatures are creeping up, the frequency or magnitude of some types of climate disasters may actually be leaping up,” said the University of Sydney’s Thomas Newsome, a co-author of the report. “We urge our fellow scientists around the world to speak out on climate change.”
“The Scientist’s Warning” is a documentary by the research team summarizing the report’s results, and it can be watched online.
In a 2018 survey, over half of a sample of Americans reported a psi experience; a 2022 Brazilian survey revealed 70% had a precognitive dream.
Some scientists will not engage with the evidence for psi due to scientism.
The ideology of “scientism” is often associated with science, but leads to a lack of open-mindedness, which is contrary to true science.
Psi phenomena, like telepathy and precognition, are controversial in academia. While a minority of academics (such as me) are open-minded about them, others believe that they are pseudo-scientific and that they can’t possibly exist because they contravene the laws of science.
However, the phenomena are much less controversial to the general public. Surveys show significant levels of belief in psi. A survey of 1200 Americans in 2003 found that over 60% believed in extrasensory perception.1
This high level of belief appears to stem largely from experience. In a 2018 survey, half of a sample of Americans reported they had an experience of feeling “as though you were in touch with someone when they were far away.” Slightly less than half reported an experience of knowing “something about the future that you had no normal way to know” (in other words, precognition). Just over 40% reported that they had received important information through their dreams.2
Interestingly, a 2022 survey of over 1000 Brazilian people found higher levels of such anomalous experiences, with 70% reporting they had a precognitive dream at least once.3 This may imply that such experiences are more likely to be reported in Brazil, perhaps due to a cultural climate of greater openness.
How can we account for the disconnect between the dismissal of psi phenomena by some scientists, and the openness of the general population? Is it that scientists are more educated and rational than other sections of the population, many of whom are gullible to superstition and irrational thinking?
I don’t think it’s as simple as this.
Evidence for Psi
You might be surprised to learn that the evidence for phenomena such as telepathy and precognition is strong. As I point out in my book, Spiritual Science, this evidence has remained significant and robust across a massive range of studies spanning decades.
In 2018, American Psychologist published an article by Professor Etzel Cardeña which carefully and systematically reviewed the evidence for psi phenomena, examining over 750 discrete studies. Cardeña concluded that there was a very strong case for the existence of psi, writing that the evidence was “comparable to that for established phenomena in psychology and other disciplines.”4
For example, from 1974 to 2018, 117 experiments were reported using the “Ganzfeld” procedure, in which one participant attempts to “send” information about images to another distant person. An overall analysis of the results showed a “hit rate” well above the 25 percent expected by chance, with odds against chance of many millions to one. Factors such as selective reporting bias (the so-called “file drawer effect”) and variations in experimental quality could not account for the results. Moreover, independent researchers reported statistically equivalent results.5
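For readers who want to see how such pooled results are typically evaluated, here is a minimal sketch of the standard binomial test against the 25 percent chance baseline of the four-choice Ganzfeld design; the counts are illustrative placeholders, not figures from the cited meta-analyses.

```python
# Minimal sketch: testing a pooled Ganzfeld hit rate against chance.
# In the standard design the "receiver" picks the target out of four
# candidates, so the chance hit rate is 25%. Counts are hypothetical.
from scipy.stats import binomtest

hits, trials = 1188, 3885                          # illustrative pooled counts
result = binomtest(hits, trials, p=0.25, alternative="greater")
print(f"observed hit rate: {hits / trials:.1%}")   # ~30.6% vs. 25% chance
print(f"one-sided p-value: {result.pvalue:.1e}")   # vanishingly small
```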
So why do some scientists continue to believe that there is no evidence for psi? In my view, the explanation lies in an ideology that could be called “scientism.”
Scientism
Scientism is an ideology that is often associated with science. It consists of a number of basic ideas, which are often stated as facts, even though they are just assumptions—e.g., that the world is purely physical in nature, that human consciousness is a product of brain activity, that human beings are biological machines whose behaviour is determined by genes, that anomalous phenomena such as near-death experiences and psi are unreal, and so on.
Adherents to scientism see themselves as defenders of reason. They see themselves as part of a historical “enlightenment project” whose aim is to overcome superstition and irrationality. In particular, they see themselves as opponents of religion.
It’s therefore ironic that scientism has become a quasi-religion in itself. In their desire to spread their ideology, adherents to scientism often behave like religious zealots, demonising unwelcome ideas and disregarding any evidence that doesn’t fit with their worldview. They apply their notion of rationality in an extremist way, dismissing any phenomena outside their belief system as “woo.” Scientifically evidential phenomena such as telepathy and precognition are placed in the same category as creationism and conspiracy theories.
One example was a response to Etzel Cardeña’s American Psychologist article (cited above) by the longstanding skeptics Arthur Reber and James Alcock. Aiming to rebut Cardeña’s claims of the strong evidence for psi, they decided that their best approach was not to actually engage with the evidence, but simply to insist that it couldn’t possibly be valid because psi itself was theoretically impossible. As they wrote, “Claims made by parapsychologists cannot be true … Hence, data that suggest that they can are necessarily flawed and result from weak methodology or improper data analyses.”6
A similar strategy was used by the psychologist Marija Branković in a recent paper in Europe’s Journal of Psychology. After discussing a series of highly successful precognition studies by the researcher Daryl Bem, she dismisses them because three investigators were unable to replicate the findings.7 Branković neglects to mention that there have been 90 other replication attempts with a massively significant overall success rate, exceeding the standard of “decisive evidence” by a factor of 10 million.8
Beyond Scientism
It’s worth considering for a moment whether psi really does contravene the laws of physics (or science), as many adherents to scientism suggest. For me, this is one of the most puzzling claims made by skeptics. Tellingly, the claim is often made by psychologists, whose knowledge of modern science may not be deep.
Anyone with a passing knowledge of the theories of modern physics—particularly quantum physics—is aware that reality is much stranger than it appears to common sense. Some theories suggest that our common-sense view of linear time may be false. Others suggest that our world is essentially “non-local,” allowing phenomena such as “entanglement” and “action at a distance.” I think it would be too much of a stretch to suggest that such theories explain precognition and telepathy, but they certainly allow for their possibility.
A lot of people assume that if you’re a scientist, then you must automatically subscribe to scientism. But in fact, scientism is the opposite of true science. The academics who dismiss psi on the grounds that it “can’t possibly be true” are behaving in the same way as the fundamentalist Christians who refuse to consider the evidence for evolution. Skeptics who refuse to engage with the evidence for telepathy or precognition are acting in the same way as the contemporaries of Galileo who refused to look through his telescope, unwilling to face the possibility that their beliefs may need to be revised.
References
1. Rice TW. Believe It Or Not: Religious and Other Paranormal Beliefs in the United States. J Sci Study Relig. 2003;42(1):95-106. doi:10.1111/1468-5906.00163
2. Wahbeh H, Radin D, Mossbridge J, Vieten C, Delorme A. Exceptional experiences reported by scientists and engineers. Explore (NY). 2018;14(5):329-341. doi:10.1016/j.explore.2018.05.002
3. Monteiro de Barros MC, Leão FC, Vallada Filho H, Lucchetti G, Moreira-Almeida A, Prieto Peres MF. Prevalence of spiritual and religious experiences in the general population: A Brazilian nationwide study. Transcult Psychiatry. 2022. doi:10.1177/13634615221088701
4. Cardeña E. The experimental evidence for parapsychological phenomena: A review. Am Psychol. 2018;73(5):663-677. doi:10.1037/amp0000236
5. Storm L, Tressoldi P. Meta-analysis of free-response studies 2009-2018: Assessing the noise-reduction model ten years on. J Soc Psych Res. 2020;84:193-219.
6. Reber AS, Alcock JE. Searching for the impossible: Parapsychology’s elusive quest. Am Psychol. 2020;75(3):391-399. doi:10.1037/amp0000486
7. Branković M. Who Believes in ESP: Cognitive and Motivational Determinants of the Belief in Extra-Sensory Perception. Eur J Psychol. 2019;15(1):120-139. doi:10.5964/ejop.v15i1.1689
8. Bem D, Tressoldi P, Rabeyron T, Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Research. 2015;4:1188. doi:10.12688/f1000research.7177.2
A pioneering research laboratory in Cambridge proves that corvids are delightfully clever. Here, its founder reveals what the crow family has taught her – and her heartbreak at the centre’s closure
Leo, an 18-year-old rook, is playing mind games. It’s a street-corner classic – cups and balls. Only this time the venue is the Comparative Cognition Laboratory in Madingley, Cambridge, and the ball is a waxworm. Leo – poised, pointy, determined – is perched on a wooden platform eager to place his bet. A wriggling morsel is laid under one of three cups, the cups shuffled. Leo cocks his head and takes a stab. Success! He snatches the waxworm in his beak and retreats to enjoy his prize. Aristotle, a fellow resident dressed in a glossy black feather coat, who has been at the aviary almost as long as the lab itself, looks on knowingly.
Watching alongside me is Professor Nicola Clayton, a psychologist who founded the lab 22 years ago, and we are joined by Francesca Cornero, 25, a PhD researcher (and occasional cups and balls technician). Clayton, 59, who is short, with blonde hair, large glasses and is wearing loose, black tango trousers, studies the cognitive abilities of both animals and humans, but is particularly known for her seminal research into the intelligence of corvids (birds in the crow family, which includes rooks, jays, magpies and ravens). Corvids have long proved to be at odds with the “bird-brain” stereotype endured by most feathered creatures and her lab, a cluster of four large aviaries tucked behind a thatched pub, has paved the way for new theories about the evolution and development of intelligence. Thanks to Clayton’s own eclectic tastes, which range from consciousness to choreography (her other love, besides birds, is dance), the lab also engenders a curious synthesis of ideas drawn from both science and the arts.
For Clayton, who has herself hand-reared many of the 25 jays and four rooks that live at the lab, the birds are like family. She introduces me to Hoy and Romero, a pair of Eurasian jays, and greets her test subjects with affection. “Hello, sweetpeas,” she says, in a sing-song soprano. “I love you.” Hoy responds by blowing kisses: a squeaky mwah mwah. Many corvids, like parrots, can mimic human speech. One of Clayton’s fondest memories of the lab is when a young Romero said: “I love you,” back. To Clayton, the Comparative Cognition Lab is more than just an aviary, or a place of scientific research. It’s a “corvid palace”. And having presided over it for more than two decades, Clayton, undoubtedly, is its queen.
But all is not well in her kingdom. Last year she learned that the lab would not have its grant renewed by the European Research Council. Her application had been made amid the turmoil of Brexit and Clayton believes she is now among a growing number of academics facing funding complications as a result of the UK’s departure from the EU. The pandemic has only exacerbated the challenge of finding alternative financing. And while the university has supported the lab in the meantime, at the end of July, this money is also due to cease. Without a benefactor, Clayton’s lab is on borrowed time. The corvid palace faces closure. Her clever birds, released or rehomed. A lab that has transformed our understanding of animal cognition – and continues to reveal new secrets – soon may no longer exist. “Obviously, I’m emotionally attached,” she says, looking fondly up at Hoy and Romero, “so showing people the birds at the moment is very difficult.”
In many ways, humans have always suspected something was up with corvids. As Clayton puts it: “You wonder what’s going on behind that beady eye, don’t you?” These birds are shrouded in mysticism and intrigue. Corvids feature prominently in folklore, often depicted as prophetic, tricksters, or thieves. Ravens keep the Tower of London from falling down, and we count magpies to glimpse our fortune. In his poem of the same name, Edgar Allan Poe chose a raven – a talking bird – to accompany his narrator’s descent into madness, and few images are quite as ominous as the conspiring flock of crows gathering on a climbing frame in Alfred Hitchcock’s The Birds. The semiotics of corvids are rooted in an innate sense that the birds are intelligent. Here, Clayton has been able to test some of the true reaches of their mental capacities.
One of the big questions for her concerned “mental time travel” – the ability to remember the past or plan for the future. “People assumed this is something that only humans have,” she says. “That animals didn’t have these experiential memories that require us to project the self in time.” Clayton had already found that scrub jays showed evidence of episodic memory – remembering not only where, but when they had hidden food. But, at Madingley, she observed that jays were also capable of thinking about the future. A study conducted with Dr Nathan Emery, a fellow researcher in animal cognition (and her husband), found that a jay with prior experience as a thief was more cautious when hiding its food – if a thieving bird knew it was being watched when it was caching, it would move the food to a new hiding place later. Birds that had not previously stolen food for themselves remained blissfully ignorant. It seemed that jays could not only relate to a previous experience, but see the world through the eyes of another bird and make decisions based on the possibility of future events. The results of the study were published in Nature in 2001. It was, Clayton says, a “gamechanger”.
Another experiment at the lab conducted by Chris Bird, a PhD student, drew on the rich cultural heritage of corvids for inspiration. Its starting point was Aesop’s fable, The Crow and the Pitcher. The study found that – just like the “clever crow” – rooks were capable of manipulating water by dropping rocks into it until food was raised within reach of their beaks. Another experiment found that rooks – which don’t use tools in their natural habitat – could use their creativity to make task-specific tools, such as bending wire into a hook to lever a small bucket out of a tube. “I always had a big respect for birds,” Clayton says. “But I was stunned by how intelligent they were.”
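The mechanism behind the fable task is ordinary displacement: each stone raises the water level by its own volume divided by the tube’s cross-sectional area. A minimal worked sketch, with hypothetical dimensions rather than those of the actual apparatus:

```python
# Minimal sketch of the displacement arithmetic behind the
# Crow-and-the-Pitcher task. Dimensions are hypothetical.
import math

tube_radius_cm = 2.5
stone_volume_cm3 = 10.0
gap_to_food_cm = 4.0                               # distance the water must rise

area_cm2 = math.pi * tube_radius_cm ** 2           # ~19.6 cm^2
rise_per_stone_cm = stone_volume_cm3 / area_cm2    # ~0.51 cm per stone
stones_needed = math.ceil(gap_to_food_cm / rise_per_stone_cm)
print(f"{rise_per_stone_cm:.2f} cm per stone -> {stones_needed} stones")
```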
Studies such as these have helped establish that animals which followed a different evolutionary path to humans were in fact capable of intelligent thought – that intelligence evolved independently in separate groups. To Clayton, corvids are as intelligent as chimpanzees, and her research into these “feathered apes” has shaped the thinking of many academics in the field. Henry Gee, an evolutionary biologist and a senior editor at Nature, told me that Clayton has proved that intelligence has nothing much to do with how brains are wired, or even how big they are. “She has shown that corvids are capable of a ‘theory of mind’. They can conceive of themselves as agents in their own lives. They can plot, plan, scheme and even lie, something human beings cannot do until they reach the age of about three. In other words, corvids think very much like we do.”
As news that the lab faces closure has rippled through the scientific community, the reaction has been of sadness and dismay. An open letter signed by 358 academics from around the world has called on the university to reconsider. One signatory, Alex Thornton, a professor of cognitive evolution at Exeter University, said it would represent an act of “scientific vandalism and monumental self-sabotage”. Gee said it showed a “lack of intelligence”. Emery told me that creating something similar somewhere else would be pretty difficult, “if not impossible”, and incredibly expensive. “These birds cannot be purchased ‘off the shelf’,” he said. “If Nicky’s corvid lab closes down, then it couldn’t really start up again.” As the letter states, the lab at Madingley is the only one of its kind in the UK, and remains “globally unique in its size and capability”.
For Jonathan Birch, an associate professor at LSE, it is this years-long approach that makes Clayton’s lab so significant. “I see some big cultural problems in science as it is now, with a focus on the short term,” he told me. “All around the world, not just in Cambridge, this is squeezing out funding for long-term studies. Clayton’s lab shows us a different way of doing animal research: an approach where we see animals for what they are – sentient beings with their own individual lives to lead. And where we study them over the long term to find out how they think and solve problems. The international significance of the lab is hard to overstate. Its closure would be a terrible loss to the sciences of mind and brain.”
In a statement, Cambridge University praised Clayton’s work, but said that continued investment was “not sustainable at a time of rapidly rising costs and when funds could otherwise be allocated to support the research of early- and midcareer academics”. It added that it would be “delighted” to work with an external funder to keep the aviaries open, should one emerge in the next few months. It is hard to put a precise figure on what it would cost to keep the lab open in the long run, but Clayton estimates it could cost £300,000 to £500,000 to secure the birds for another five or six years. She has received some partial offers from potential donors, though nothing has been confirmed.
Clayton’s work remains pivotal in changing how we think about animals. As the New Scientist reported, studies conducted at her lab are “part of a renaissance in our understanding of the cognition of other creatures… but there is still much more to learn”. And to learn from animals in this way is a slow process. These sorts of experiments, says Clayton, require years of preparation. You can’t just teach any old crow new tricks (well, perhaps you can, but it wouldn’t be scientifically valid). The corvids cannot be wild-caught, as researchers would not know the prior experiences of the bird. For these sorts of experiments, the birds must be hand-raised in controlled conditions. It also takes considerable time to build up the trust required to run an experiment. “It’s a privilege,” says Clayton, “to get the opportunity to see inside their minds, and for them to trust us enough to share what they know with us.”
Cornero, who is researching how rooks understand language, tells me that it took a year before she could start working effectively with Hoy. She has now taught him to respond to a number of verbal commands. When she says, “Come,” he comes. When she says, “Speak,” he mumbles something in corvid. It raises further questions about our assumptions of which animals we consider “smart”; if a rook can be trained much like a dog, then is domestication really a prerequisite to “intelligent” behaviours? “In the context of conservation and the climate disaster,” says Cornero, “I think it’s really important for humans to be increasingly aware that we aren’t the only ones that think and feel and exist in this space.”
If anyone is equipped to bring these ideas into the public consciousness, it’s Clayton. She has always had a knack for creating tantalising work – for nurturing a creative frisson around different ideas, approaches and perspectives. For inspiring new thought. She is the first scientist in residence at the Rambert School of Ballet and Contemporary Dance and has a long-term collaboration with the artist Clive Wilkins, who is a member of the Magic Circle (and her tango partner).
“Magic reveals a lot about the blind spots we have,” says Clayton, and lately magic has opened up a new line of inquiry for the lab. Last year, a study led by Elias Garcia-Pelegrin used magicians’ sleight of hand as a means to test the perceptual abilities of jays. You don’t have to be an evolutionary biologist or an expert in animal cognition to find these experiments alluring.
Much like a magic trick, this research leaves you with more questions than answers, but now Clayton is reluctantly preparing her birds for departure. The younger birds are being readied for release into the wild. The others have all, thankfully, been found suitable homes; the rooks may continue their lives at a similar research lab in Strasbourg. Still, Clayton remains hopeful that the lab will find some way to continue its work. Since she could walk, she says, all she ever wanted to do was “dance and watch the birds”. It’s not easy to let go of what she has built here. As we stand in the aviary, listening to Hoy chirp, “What’s that noise?”, I ask her what it really means when a corvid mimics a human phrase, or a jay says, “I love you”. “Well,” says Clayton, “it’s their way of connecting, isn’t it?”
We are suffering through a pandemic of lies — or so we hear from leading voices in media, politics, and academia. Our culture is infected by a disease that has many names: fake news, post-truth, misinformation, disinformation, mal-information, anti-science. The affliction, we are told, is a perversion of the proper role of knowledge in a healthy information society.
What is to be done? To restore truth, we need strategies to “get the facts straight.” For example, we need better “science communication,” “independent fact-checking,” and a relentless commitment to exposing and countering falsehoods. This is why the Washington Post fastidiously counted 30,573 “false or misleading claims” by President Trump during his four years in office. Facebook, meanwhile, partners with eighty organizations worldwide to help it flag falsehoods and inform users of the facts. And some disinformation experts recently suggested in the New York Times that the Biden administration should appoint a “reality czar,” a central authority tasked with countering conspiracy theories about Covid and election fraud, who “could become the tip of the spear for the federal government’s response to the reality crisis.”
Such efforts reflect the view that untruth is a plague on our information society, one that can and must be cured. If we pay enough responsible, objective attention to distinguishing what is true from what is not, and thus excise misinformation from the body politic, people can be kept safe from falsehood. Put another way, it is an implicitly Edenic belief in the original purity of the information society, a state we have lapsed from but can yet return to, by the grace of fact-checkers.
We beg to differ. Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.
This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.
Truth, New and Improved
The emergence of widespread challenges to the control of information by mainstream social institutions developed in three phases.
First, new technologies of mass communication in the twentieth century — radio, television, and significant improvements in printing, further empowered by new social science methods — enabled the rise of mass-market advertising, which quickly became an essential tool for success in the marketplace. Philosophers like Max Horkheimer and Theodor Adorno were bewildered by a world where, thanks to these new forms of communication, unabashed lies in the interest of selling products could become not just an art but an industry.
The rise of mass marketing created the cultural substrate for the so-called post-truth world we live in now. It normalized the application of hyperbole, superlatives, and untestable claims of superiority to the rhetoric of everyday commerce. What started out as merely a way to sell new and improved soap powder and automobiles amounts today to a rhetorical infrastructure of hype that infects every corner of culture: the way people promote their careers, universities their reputations, governments their programs, and scientists the importance of their latest findings. Whether we’re listening to a food corporation claim that its oatmeal will keep your heart healthy or a university press office herald a new study that will upend everything we know, radical skepticism would seem to be the rational stance for information consumers.
Politics, Scientized
In a second, partly overlapping phase in the twentieth century, science underwent a massive expansion of its role into the domain of public affairs, and thus into highly contestable subject matters. Spurred by a wealth of new instruments for measuring the world and techniques for analyzing the resulting data, policies on agriculture, health, education, poverty, national security, the environment and much more became subject to new types of scientific investigation. As never before, science became part of the language of policymaking, and scientists became advocates for particular policies.
The dissolving boundary between science and politics was on full display by 1958, when the chemist Linus Pauling and physicist Edward Teller debated the risks of nuclear weapons testing on a U.S. television broadcast, a spectacle that mixed scientific claims about fallout risks with theories of international affairs and assertions of personal moral conviction. The debate presaged a radical transformation of science and its social role. Where science was once a rarefied, elite practice largely isolated from society, scientific experts were now mobilized in increasing numbers to form and inform politics and policymaking. Of course, society had long been shaped, sometimes profoundly, by scientific advances. But in the second half of the twentieth century, science programs started to take on a rapidly expanding portfolio of politically divisive issues: determining the cancer-causing potential of food additives, pesticides, and tobacco; devising strategies for the U.S. government in its nuclear arms race against the Soviet Union; informing guidelines for diet, nutrition, and education; predicting future energy supplies, food supplies, and population growth; designing urban renewal programs; choosing nuclear waste disposal sites; and on and on.
Philosopher-mathematicians Silvio Funtowicz and Jerome Ravetz recognized in 1993 that a new kind of science was emerging, which they termed “post-normal science.” This kind of science was inherently contestable, both because it dealt with the irreducible uncertainties of complex and messy problems at the intersection of nature and society, and because it was being used for making decisions that were themselves value-laden and contested. Questions that may sound straightforward, such as “Should women in their forties get regular mammograms?” or “Will genetically modified crops and livestock make food more affordable?” or “Do the benefits of decarbonizing our energy production outweigh the costs?” became the focus of intractable and never-ending scientific and political disputes.
This situation remained reasonably manageable through the 1990s, because science communication was still largely controlled by powerful institutions: governments, corporations, and universities. Even if these institutions were sometimes fiercely at odds, all had a shared interest in maintaining the idea of a unitary science that provided universal truths upon which rational action should be based. Debates between experts may have raged — often without end — but one could still defend the claim that the search for truth was a coherent activity carried out by special experts working in pertinent social institutions, and that the truths emerging from their work would be recognizable and agreed-upon when finally they were determined. Few questioned the fundamental notion that science was necessary and authoritative for determining good policy choices across a wide array of social concerns. The imperative remained to find facts that could inform action — a basic tenet of Enlightenment rationality.
Science, Democratized
The rise of the Internet and social media marks the third phase of the story, and it has now rendered thoroughly implausible any institutional monopoly on factual claims. As we are continuing to see with Covid, the public has instantly available to it a nearly inexhaustible supply of competing and contradictory claims, made by credentialed experts associated with august institutions, about everything from mask efficacy to appropriate social distancing and school closure policies. And many of the targeted consumers of these claims are already conditioned to be highly skeptical of the information they receive from mainstream media.
Today’s information environment certainly invites mischievous seeding of known lies into public discourse. But bad actors are not the most important part of the story. Institutions can no longer maintain their old stance of authoritative certainty about information — the stance they need to justify their actions, or to establish a convincing dividing line between true news and fake news. Claims of disinterest by experts acting on behalf of these institutions are no longer plausible. People are free to decide what information, and in which experts, they want to believe. The Covid lab-leak hypothesis was fake news until that news itself became fake. Fact-checking organizations are themselves now subject to accusations of bias: Recently, Facebook flagged as “false” a story in the esteemed British Medical Journal about a shoddy Covid vaccine trial, and the editors of the journal in turn called Facebook’s fact-checking “inaccurate, incompetent and irresponsible.”
No political system exists without its share of lies, obfuscation, and fake news, as Plato and Machiavelli taught. Yet even those thinkers would be puzzled by the immense power of modern technologies to generate stories. Ideas have become a battlefield, and we are all getting lost in the fog of the truth wars. When anything can seem plausible to someone, the term “fake news” loses its meaning.
The celebrated expedient that an aristocracy has the right and the mission to offer “noble lies” to the citizens for their own good thus looks increasingly impotent. In October 2020, U.S. National Institutes of Health director Francis Collins, a veritable aristocrat of the scientific establishment, sought to delegitimize the recently released Great Barrington Declaration. Crafted by a group he referred to as “fringe epidemiologists” (they were from Harvard, Stanford, and Oxford), the declaration questioned the mainstream lockdown approach to the pandemic, including school and business closures. “There needs to be a quick and devastating published take down,” Collins wrote in an email to fellow aristocrat Anthony Fauci.
But we now live in a moment where suppressing that kind of dissent has become impossible. By May 2021, that “fringe” became part of a new think tank, the Brownstone Institute, founded in reaction to what they describe as “the global crisis created by policy responses to the Covid-19 pandemic.” From this perspective, policies advanced by Collins and Fauci amounted to “a failed experiment in full social and economic control” reflecting “a willingness on the part of the public and officials to relinquish freedom and fundamental human rights in the name of managing a public health crisis.” The Brownstone Institute’s website is a veritable one-stop Internet shopping haven for anyone looking for well-credentialed expert opinions that counter more mainstream expert opinions on Covid.
Similarly, claims that the science around climate change is “settled,” and that therefore the world must collectively work to decarbonize the global energy system by 2050, have engendered a counter-industry of dissenting experts, organizations, and websites.
At this point, one might be forgiven for speculating that the public is being fed such a heavy diet of Covid and climate change precisely because these are problems that have been framed politically as amenable to a scientific treatment. But it seems that the more the authorities insist on the factiness of facts, the more suspect these become to larger and larger portions of the populace.
A Scientific Reformation
The introduction of the printing press in the mid-fifteenth century triggered a revolution in which the Church lost its monopoly on truth. Millions of books were printed in just a few decades after Gutenberg’s innovation. Some people held the printing press responsible for stoking collective economic manias and speculative bubbles. It allowed the widespread distribution of astrological almanacs in Europe, which fed popular hysteria around prophesies of impending doom. And it allowed dissemination of the Malleus Maleficarum, an influential treatise on demonology that contributed to rising persecution of witches.
Though the printing press allowed sanctioned ideas to spread like never before, it also allowed the spread of serious but hitherto suppressed ideas that threatened the legitimacy of the Church. A range of alternative philosophical, moral, and ideological perspectives on Christianity became newly accessible to ever-growing audiences. So did exposés of institutional corruption, such as the practice of indulgences — a market for buying one’s way out of purgatory that earned the Church vast amounts of money. Martin Luther, in particular, understood and exploited the power of the printing press in pursuing his attacks on the Church — one recent historical account, Andrew Pettegree’s book Brand Luther, portrays him as the first mass-market communicator.
“Beginning of the Reformation”: Martin Luther directs the posting of his Ninety-five Theses, protesting the practice of the sale of indulgences, to the door of the castle church in Wittenberg on October 31, 1517. W. Baron von Löwenstern, 1830 / Library of Congress
To a religious observer living through the beginning of the Reformation, the proliferation of printed material must have appeared unsettling and dangerous: the end of an era, and the beginning of a threatening period of heterodoxy, heresies, and confusion. A person exposed to the rapid, unchecked dispersion of printed matter in the fifteenth century might have called many such publications fake news. Today many would say that it was the Reformation itself that did away with fake news, with the false orthodoxies of a corrupted Church, opening up a competition over ideas that became the foundation of the modern world. Whatever the case, this new world was neither neat nor peaceful, with the religious wars resulting from the Church’s loss of authority over truth continuing until the mid-seventeenth century.
Like the printing press in the fifteenth century, the Internet in the twenty-first has radically transformed and disrupted conventional modes of communication, destroyed the existing structure of authority over truth claims, and opened the door to a period of intense and tumultuous change.
Those who lament the death of truth should instead acknowledge the end of a monopoly system. Science was the pillar of modernity, the new privileged lens to interpret the real world and show a pathway to collective good. Science was not just an ideal but the basis for a regime, a monopoly system. Within this regime, truth was legitimized in particular private and public institutions, especially government agencies, universities, and corporations; it was interpreted and communicated by particular leaders of the scientific community, such as government science advisors, Nobel Prize winners, and the heads of learned societies; it was translated for and delivered to the laity in a wide variety of public and political contexts; it was presumed to point directly toward right action; and it was fetishized by a culture that saw it as single and unitary, something that was delivered by science and could be divorced from the contexts in which it emerged.
Such unitary truths included above all the insistence that the advance of science and technology would guarantee progress and prosperity for everyone — not unlike how the Church’s salvific authority could guarantee a negotiated process for reducing one’s punishment for sins. To achieve this modern paradise, certain subsidiary truths lent support. One, for example, held that economic rationality would illuminate the path to universal betterment, driven by the principle of comparative advantage and the harmony of globalized free markets. Another subsidiary truth expressed the social cost of carbon emissions with absolute precision to the dollar per ton, with the accompanying requirement that humans must control the global climate to the tenth of a degree Celsius. These ideas are self-evidently political, requiring monopolistic control of truth to implement their imputed agendas.
An easy prophecy here is that wars over scientific truth will intensify, as did wars over religious truth after the printing press. Those wars ended with the Peace of Westphalia in 1648, followed, eventually, by the creation of a radically new system of governance, the nation-state, and the collapse of the central authority of the Catholic Church. Will the loss of science’s monopoly over truth lead to political chaos and even bloodshed? The answer largely depends upon the resilience of democratic institutions, and their ability to resist the authoritarian drift that seems to be a consequence of crises such as Covid and climate change, to which simple solutions, and simple truths, do not pertain.
Both the Church and the Protestants enthusiastically adopted the printing press. The Church tried to control it through an index of forbidden books. Protestant print shops adopted a more liberal cultural orientation, one that allowed for competition among diverse ideas about how to express and pursue faith. Today we see a similar dynamic. Mainstream, elite science institutions use the Internet to try to preserve their monopoly over which truths get followed where, but the Internet’s bottom-up, distributed architecture appears to give a decisive advantage to dissenters and their diverse ideologies and perspectives.
Holding on to the idea that science always draws clear boundaries between the true and the false will continue to appeal strongly to many sincere and concerned people. But if, as in the fifteenth century, we are now indeed experiencing a tumultuous transition to a new world of communication, what we may need is a different cultural orientation toward science and technology. The character of this new orientation is only now beginning to emerge, but it will above all have to accommodate the over-abundance of competing truths in human affairs, and create new opportunities for people to forge collective meaning as they seek to manage the complex crises of our day.
A landmark U.N. climate report on the escalating effects of global warming broke new ground by finally highlighting the role of misinformation in obstructing climate action. It was the first time one of the Intergovernmental Panel on Climate Change’s exhaustive assessments has called out the ways in which fossil fuel companies, climate deniers and conspiracy theorists have sown doubt and confusion about climate change and made it harder for policymakers to act.
The expert panel’s report released last week mostly focused on the increasing risk of catastrophe to nature and humanity from climate change. But it also laid out clear evidence of how misinformation about climate change and the “deliberate undermining of science” financed and organized by “vested economic and political interests,” along with deep partisanship and polarization, are delaying action to reduce greenhouse gas emissions and adapt to their impacts.
The assessment describes an atmosphere in which public perception about climate change is continually undermined by fossil fuel interests’ peddling of false, misleading and contrarian information and its circulation through social media echo chambers; where there’s an entrenched partisan divide on climate science and solutions; and people reject factual information if it conflicts with their political ideology.
Sound familiar? It should, because the climate misinformation landscape is worse in the United States than practically any other country.
While the section on misinformation covers only a few of the more than 3,600 pages in the report approved by 195 countries, it’s notable that it’s in a chapter about North America and calls out the U.S. as a hotbed for conspiracy theories, partisanship and polarization. A 2018 study of 25 countries that was cited in the IPCC report found that the U.S. had a stronger link between climate skepticism and conspiratorial and conservative ideology than in any other nation tested. These forces aren’t just a threat to democracy; they are major roadblocks to climate action and seem to have sharpened with the Trump presidency and the COVID-19 pandemic.
Misinformation was included in the North America chapter for the first time this year “because there has been a lot of research conducted on the topic since the last major IPCC report was published in 2014,” said Sherilee Harper, one of the lead authors and an associate professor at the University of Alberta in Canada. “Evidence assessed in the report shows how strong party affiliation and partisan opinion polarization can contribute to delayed climate action, most notably in the U.S.A., but also in Canada.”
The IPCC’s language is measured but leaves no doubt that the fossil fuel industry and politicians who advance its agenda are responsible. It is shameful that fossil fuel interests have been so successful in misleading Americans about the greatest threat to our existence. The industry has engaged in a decades-long campaign to question climate science and delay action, enlisting conservative think tanks and public relations firms to help sow doubt about global warming and the actions needed to fight it.
These dynamics help explain why U.S. politicians have failed time after time to enact significant federal climate legislation, including President Biden’s stalled but desperately needed “Build Back Better” bill that includes $555 billion to spur growth in renewable energy and clean transportation. And they show that combating disinformation is a necessity if we are to break through lawmakers’ refusal to act, which is increasingly out of step with Americans’ surging levels of alarm and concern about the overheating of the planet.
“We’ve seen misinformation poisoning the information landscape for over three decades, and over that time the public has been getting more and more polarized,” said John Cook, a postdoctoral research fellow at the Climate Change Communication Research Hub at Monash University in Australia. “The U.S. is the strongest source of misinformation and recipient of misinformation. It’s also the most polarized on climate.”
Cook and his colleagues studied misinformation on conservative think-tank websites and contrarian blogs over the last 20 years and charted the evolution of climate opposition from outright denial of the reality of human-caused climate change toward attacking solutions such as renewable energy and seeking to discredit scientists.
Cook said his research has found that the most effective way to counter climate misinformation is to educate people about how to identify and understand different tactics, such as the use of fake experts, cherry-picked facts, logical fallacies and conspiracy theories. For example, seeing words such as “natural” or “renewable” in fossil fuel advertising should raise a red flag that you’re being misled through greenwashing.
“It’s like teaching people the magician’s sleight-of-hand trick,” Cook said.
There have been important efforts recently to hold the fossil fuel industry accountable for disinformation. In a hearing that was modeled on tobacco industry testimony from a generation ago, House Democrats hauled in oil executives last fall to answer to allegations that their companies have concealed their knowledge of the risks of global warming to obstruct climate action (they, unsurprisingly, denied them).
Perhaps we are getting closer to a turning point, where public realization that we’ve been misinformed by polluting industries begins to overcome decades of planet-endangering deceit and delay. Having the world’s scientists finally begin to call out the problem certainly can’t hurt.
There is something surreal about the scene: a European researcher, his body covered in indigenous designs, wears a cap fitted with dozens of electroencephalography (EEG) electrodes. A member of the Huni Kuin people blows rapé snuff into the white man’s nostril, while the backpack he carries holds portable devices recording his brain waves.
Expedition Neuron took place in April 2019 in Santa Rosa do Purus, in the Brazilian state of Acre. Its program: an attempt to narrow the gulf between traditional knowledge about ayahuasca use and the consecration of the brew by science’s so-called psychedelic renaissance.
So far the initiative’s most tangible result has appeared in a controversial paper about research ethics, not research data.
The authors, Eduardo Ekman Schenberg of the Instituto Phaneros and Konstantin Gerber of PUC-SP, question the authority of science, pointing to the difficulty of using placebos in psychedelic experiments, to the emphasis given to molecular aspects, and to the poorly assessed weight of context (setting) in the safety of use, a matter in which scientists would have much to learn from indigenous peoples.
Among the targets of their criticism are studies conducted over the past decade by Jaime Hallak’s group at USP in Ribeirão Preto and Dráulio de Araújo’s group at the UFRN Brain Institute, in particular on the effect of ayahuasca on depression. When contacted, scientists and collaborators from these groups either did not respond or preferred not to comment.
The antidepressant potential of dimethyltryptamine (DMT), the brew’s main psychoactive compound, is also the focus of researchers in other countries. Other psychedelic substances, however, such as MDMA and psilocybin, are closer to being recognized by regulators as psychiatric medicines.
Given the obvious effect of substances like ayahuasca on a person’s mind and behavior, Schenberg and Gerber argue, the double-blind design (the gold standard of biomedical trials) becomes unworkable: both the volunteer and the experimenter almost always know whether the former took an active compound or not. That would demolish the supreme value attributed to studies of this kind in the psychedelic field and in biomedicine in general.
Another point they criticize is the decontextualization and reductionism of experiments conducted in hospitals or laboratories, with the patient surrounded by machines and given doses fixed in milligrams per kilogram of body weight. The precision is illusory, they claim, citing an error in a paper that reports a DMT concentration of 0.8 mg/ml and later speaks of 0.08 mg/ml.
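To make the stakes of that discrepancy concrete, here is a minimal sketch of the weight-based dosing arithmetic such trials rely on; the target dose and body weight below are hypothetical, chosen only to show how a tenfold error in a reported concentration propagates.

```python
# Minimal sketch of weight-based dosing arithmetic. The target dose and
# body weight are hypothetical; only the two concentrations come from
# the paper discussed above.
def volume_to_administer(dose_mg_per_kg: float, weight_kg: float,
                         concentration_mg_per_ml: float) -> float:
    """Volume of brew (ml) that delivers the target DMT dose."""
    return dose_mg_per_kg * weight_kg / concentration_mg_per_ml

target_dose, weight = 0.36, 75.0                   # hypothetical values
for conc in (0.8, 0.08):                           # the two figures cited
    ml = volume_to_administer(target_dose, weight, conc)
    print(f"{conc} mg/ml -> administer {ml:.0f} ml")
# A tenfold slip in the reported concentration implies a tenfold
# difference in the volume (34 ml vs. 338 ml) needed for the same dose.
```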
The cultural sanitization of the setting, for its part, makes light of the contextual elements (forest, chants, cosmology, rapé, dances, shamans) that for peoples such as the Huni Kuin are inseparable from what ayahuasca has to offer and to teach. By ignoring them, scientists would be dismissing everything indigenous people know about the safe, collective use of the substance.
More than that, they would be appropriating and disrespecting that traditional knowledge at the same time. A more ethical attitude on the part of researchers would mean acknowledging this contribution, developing research protocols with indigenous participation, registering co-authorship in scientific publications, recognizing intellectual property, and sharing any profits from treatments and patents.
“The complementarity between anthropology, psychoanalysis and psychiatry is one of the challenges of ethnopsychiatry,” Schenberg and Gerber write. “The initiative of taking biomedical science to the forest can be criticized as an attempt to medicalize shamanism, but it can also constitute a possibility of intercultural dialogue centered on innovation and on solving ‘networks of problems.’”
“It is particularly notable that biomedicine now ventures into concepts such as ‘connection’ and ‘nature-relatedness’ as effects of psychedelics, once again approaching epistemic conclusions derived from shamanic practices. The final challenge would thus be to understand the relationship between community well-being and ecology, and how this can be translated into a Western concept of integrated health.”
The reactions of the few who have openly criticized the text and its grandiose ideas can be summed up in an old academic barb: there are good things and new things in the paper, but the good things are not new and the new things are not good. Taking EEG to the forests of Acre, for example, would not solve every problem.
Schenberg is the link between the Transcultural Psychiatry paper and Expedition Neuron: he took part in the 2019 trip to Acre and collaborates on the EEG study with the researcher Tomas Palenicek of the National Institute of Mental Health of the Czech Republic. A presentation video is available online in English.
“Konstantin and I have been engaged for more than three years in an innovative project with the Huni Kuin and European researchers, seeking to build an epistemically fair partnership,” Schenberg replied when asked whether the EEG study meets the ethical requirements laid out in the paper.
In the Expedition Neuron presentation, he states: “In this first short, exploratory expedition [in 2019], we confirmed that there is mutual interest from scientists and a traditional indigenous culture of the Amazon in jointly exploring the nature of consciousness and how their traditional healing works, including – for the first time – recordings of brain activity in a setting many would consider too technically challenging.”
“We consider it of supreme value to jointly investigate how Huni Kuin rituals and medicines affect human cognition, emotions and group bonds, and to analyze the neural basis of these altered states of consciousness, possibly including mystical experiences in the forest.”
Schenberg and his collaborators are planning a new expedition to the Huni Kuin to make multiple simultaneous EEG recordings from up to seven indigenous participants during ayahuasca ceremonies. The idea is to test a “very intriguing possibility,” synchrony between brains:
“Interpreted by the Huni Kuin and other Amerindian peoples as a kind of portal to the spirit world, ayahuasca is known to strengthen, intensely and rapidly, community bonds and feelings of empathy and closeness to others.”
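What would brain-to-brain synchrony look like in the data? One measure often used in multi-person EEG (“hyperscanning”) studies is the phase-locking value between channels recorded from different people. The sketch below is an assumption about the kind of analysis involved, using synthetic signals, not a description of the team’s actual pipeline:

```python
# Minimal sketch of the phase-locking value (PLV) between two EEG
# channels recorded from different people. Signals are synthetic; real
# analyses band-pass filter the data and correct for multiple comparisons.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, seconds = 250, 10                         # 250 Hz sampling, 10 s window
t = np.arange(fs * seconds) / fs
shared = np.sin(2 * np.pi * 10 * t)           # shared 10 Hz (alpha) component
eeg_a = shared + 0.5 * rng.standard_normal(t.size)
eeg_b = shared + 0.5 * rng.standard_normal(t.size)

phase_a = np.angle(hilbert(eeg_a))            # instantaneous phase, person A
phase_b = np.angle(hilbert(eeg_b))            # instantaneous phase, person B
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"PLV: {plv:.2f}")                      # 1 = perfect locking, 0 = none
```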
Schenberg and Gerber’s stated purposes did not convince the Brazilian anthropologist Bia Labate, director of the Chacruna Institute in San Francisco. “Indigenous people do not appear to have been consulted for the production of the text; there are no native voices, they are not co-authors, and we get no specific proposals for what a truly interethnic and intercultural study would be.”
For the anthropologist, even though Expedition Neuron obtained authorization for the research, a positive step, it does not amount to “an epistemological alternative to the scientistic and ethnocentric approach.” An interethnic study, as she sees it, would mean an ethnography that takes seriously the indigenous notion that plants are spirits, that they have agency of their own, and that the natural world is also cultural, with subjectivity and intentionality.
“We all know that the ayahuasca brew is not the same thing as freeze-dried ayahuasca; that context matters; that the rituals and the groups taking part make a difference. The same or analogous points had already been made in the anthropological literature, whose references the authors set aside.”
Labate also disagrees that Brazilian ayahuasca studies neglect to acknowledge those who arrived at the brew first: “From a global standpoint, it is precisely a hallmark and a differential of Brazilian scientific research that there has indeed been dialogue with members of the ayahuasca religions. They too are legitimate research subjects, not only the original peoples.”
In 2020, Schenberg and Palenicek took part in a meeting with another anthropologist, the French-Colombian Emilia Sanabria, leader of the Encontros de Cura (Healing Encounters) project at the French National Centre for Scientific Research (CNRS). Alongside the indigenous leader Leopardo Yawa Bane, the trio debated the EEG study in the virtual panel “Taking the Lab to Ayahuasca” at the Interdisciplinary Conference on Psychedelic Research (ICPR). A video is available online in English.
Sanabria, who speaks Portuguese and knows the Huni Kuin, was invited by Schenberg to join the expedition but declined, judging that the “epistemological incommensurability” between indigenous thought and what biomedicine wants to prove could not be resolved. She regards the discussion proposed in Transcultural Psychiatry as important, though complex and not exactly new.
In an interview with this blog, she said the paper seems to reinvent the wheel, disregarding a long-running debate about the assimilation of traditional plants and practices (such as Chinese medicine) by Western science: “They do not cite the earlier reflection. It is good that they put the discussion on the table, but there is a bibliography going back more than a century.”
The anthropologist also sees a problem in the paper’s posture of presenting itself as the savior of the natives. “There is no indigenous interlocutor cited as an author,” she points out, corroborating Labate’s criticism, as if the original peoples needed to be represented by non-Indians. “We’ll give you a little corner here in our world.”
For Sanabria, the central question for a respectful collaboration is whether the study also has priority and utility for the Huni Kuin, not just for the scientists.
When she raised this point on the panel, she received generic answers from Schenberg and Palenicek, none directly and concretely beneficial to the Huni Kuin – for example, that science can help in rejecting patents on ayahuasca.
In the anthropologist’s view, “the idea of taking the laboratory into naturalistic conditions is beautiful,” but it is not clear how all that machinery would fit into indigenous logic. At bottom, the argument is symmetrical to the one the paper’s authors wield against psychedelic research in hospital settings: in one case, the total, socialized psychedelic experience is stripped of its context; in the other, it is the technological decontextualization that travels and invades the village.
Sanabria sees an almost unsolvable dilemma for indigenous peoples in agreeing on research protocols with the reborn psychedelic science. What in 2014 looked to many like a new way of doing science, with different standards of evaluation and proof, has undergone a “capitalist turn” since 2018 and ended up dominated by the logic of biochemistry and intellectual property.
“Indigenous peoples cannot stay out, because they lose their rights. But they cannot join [that logic] either, because then they lose their identity perspective.”
“Molecularizing in the forest or in the laboratory comes to the same thing,” Sanabria says. “I don’t see it as reparation of any epistemic injustice. I see no radical difference between this research and Fernanda [Palhano-Fontes]’s study,” she adds, referring to Schenberg and Gerber’s “aggressive” critique of the clinical trial of ayahuasca for depression at the UFRN Brain Institute, a critique that extends to the work at USP in Ribeirão Preto.
The pair highlighted, for example, the fact that the authors of the UFRN study noted in their 2019 paper that 4 of the 29 volunteers in the experiment spent at least a week as inpatients at the Onofre Lopes University Hospital in Natal. They thereby insinuated that the safety of ayahuasca administration had been handled inadequately.
“None of these studies formally attempted to compare safety in the laboratory environment with any of the cultural contexts in which ayahuasca is commonly used,” Schenberg and Gerber pronounced. “However, to the best of our knowledge, it was never reported that 14% of the participants in an ayahuasca ritual required a week of hospitalization.”
The reason for the hospital stays, however, was trivial: as patients with depression resistant to conventional medication, the volunteers were already hospitalized because of the severity of their mental disorder, and they remained so after the intervention. In other words, the hospitalization had nothing to do with their having taken ayahuasca.
This blog also asked Schenberg whether it was an exaggeration to seize on an error that may have been typographical (0.8 mg/ml versus 0.08 mg/ml) in the 2015 USP Ribeirão Preto paper as a flagrant imprecision that would cast doubt on the epistemic superiority of psychedelic biomedicine.
“If they paid more attention to the reports of the volunteers/patients, perhaps they would have noticed the fact,” the Instituto Phaneros researcher retorted. “Besides the epistemic injustice toward indigenous people, there is the epistemic injustice toward volunteers/patients, which we also discuss briefly in the paper.”
Schenberg has published several studies that would fit the biomedical paradigm now in his sights. Is his paper with Gerber a self-criticism of that earlier work?
“I have always been critical of certain biomedical limitations, and it was only with great effort that I managed to do my postdoc without, for example, using a placebo group, even though most colleagues insisted I should, or else it ‘would not be scientific’...”
“At bottom, the argument is circular, using biomedicine as the ultimate criterion for answering criticism of biomedicine,” Bia Labate counters. “The text does not solve what it sets out to solve; it deepens the gap between indigenous and biomedical epistemologies by advocating new ways of producing biomedicine based on validation criteria that are... biomedical.”
Pew Research Center conducted this study to understand how much confidence Americans have in groups and institutions in society, including scientists and medical scientists. We surveyed 14,497 U.S. adults from Nov. 30 to Dec. 12, 2021.
The survey was conducted on Pew Research Center’s American Trends Panel (ATP) and included an oversample of Black and Hispanic adults from the Ipsos KnowledgePanel. A total of 3,042 Black adults (single-race, not Hispanic) and 3,716 Hispanic adults were sampled.
Respondents on both panels are recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.
This is made possible by The Pew Charitable Trusts, which received support from Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation.
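The weighting step described above is easier to see with a toy example. Below is a minimal sketch of raking (iterative proportional fitting), one standard way to align a skewed sample with population benchmarks; the categories, target shares, and sample sizes here are invented for illustration, and this is not Pew Research Center's actual weighting pipeline.

```python
# Minimal raking (iterative proportional fitting) sketch: adjust respondent
# weights until the weighted sample matches known population margins.
# All categories and target shares are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.choice(["woman", "man"], size=n, p=[0.65, 0.35])         # skewed sample
educ = rng.choice(["college", "no_college"], size=n, p=[0.50, 0.50])  # skewed sample

# Hypothetical population benchmarks the weights should reproduce.
targets = {
    "gender": {"woman": 0.51, "man": 0.49},
    "educ": {"college": 0.35, "no_college": 0.65},
}

w = np.ones(n)       # start from equal weights
for _ in range(25):  # alternate margin adjustments until both match
    for values, margin in ((gender, targets["gender"]), (educ, targets["educ"])):
        total = w.sum()
        for cat, share in margin.items():
            mask = values == cat
            w[mask] *= share * total / w[mask].sum()

# Weighted shares now match the benchmarks (to numerical precision).
for values, margin in ((gender, targets["gender"]), (educ, targets["educ"])):
    for cat in margin:
        print(cat, round(w[values == cat].sum() / w.sum(), 3))
```

Real survey weighting layers in many more dimensions (race, ethnicity, partisanship and so on) plus extra steps such as trimming extreme weights, but the alternating-adjustment idea is the same.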
Americans’ confidence in groups and institutions has turned downward compared with just a year ago. Trust in scientists and medical scientists, once seemingly buoyed by their central role in addressing the coronavirus outbreak, is now below pre-pandemic levels.
Overall, 29% of U.S. adults say they have a great deal of confidence in medical scientists to act in the best interests of the public, down from 40% who said this in November 2020. Similarly, the share with a great deal of confidence in scientists to act in the public’s best interests is down by 10 percentage points (from 39% to 29%), according to a new Pew Research Center survey.
The new findings represent a shift in the recent trajectory of attitudes toward medical scientists and scientists. Public confidence in both groups had increased shortly after the start of the coronavirus outbreak, according to an April 2020 survey. Current ratings of medical scientists and scientists have now fallen below where they were in January 2019, before the emergence of the coronavirus.
Scientists and medical scientists are not the only groups and institutions to see their confidence ratings decline in the last year. The share of Americans who say they have a great deal of confidence in the military to act in the public’s best interests has fallen 14 points, from 39% in November 2020 to 25% in the current survey. And the shares of Americans with a great deal of confidence in K-12 public school principals and police officers have also decreased (by 7 and 6 points, respectively).
Large majorities of Americans continue to have at least a fair amount of confidence in medical scientists (78%) and scientists (77%) to act in the public’s best interests. These ratings place them at the top of the list of nine groups and institutions included in the survey. A large majority of Americans (74%) also express at least a fair amount of confidence in the military to act in the public’s best interests. Roughly two-thirds say this about police officers (69%) and K-12 public school principals (64%), while 55% have at least a fair amount of confidence in religious leaders.
The public continues to express lower levels of confidence in journalists, business leaders and elected officials, though even for these groups, public confidence is tilting more negative. Four-in-ten say they have a great deal or a fair amount of confidence in journalists and business leaders to act in the public’s best interests; six-in-ten now say they have not too much or no confidence at all in these groups. Ratings for elected officials are especially negative: 24% say they have a great deal or fair amount of confidence in elected officials, compared with 76% who say they have not too much or no confidence in them.
The survey was fielded Nov. 30 through Dec. 12, 2021, among 14,497 U.S. adults, as the omicron variant of the coronavirus was first detected in the United States – nearly two years since the coronavirus outbreak took hold. Recent surveys this year have found declining ratings for how President Joe Biden has handled the coronavirus outbreak as well as lower ratings for his job performance – and that of Congress – generally.
Partisan differences over trust in medical scientists and scientists have continued to widen since the coronavirus outbreak
Democrats remain more likely than Republicans to express confidence in medical scientists and scientists to act in the public’s best interests.
However, there has been a significant decline in public confidence in medical scientists and scientists among both partisan groups.
Among Democrats and Democratic-leaning independents, nine-in-ten express either a great deal (44%) or a fair amount (46%) of confidence in medical scientists to act in the public’s best interests. However, the share expressing strong confidence in medical scientists has fallen 10 points since November 2020.
There has been a similar decline in the share of Democrats holding the strongest level of confidence in scientists since November 2020. (Half of the survey respondents were asked about their confidence in “medical scientists,” while the other half were asked about “scientists.”)
Still, ratings for medical scientists, along with those for scientists, remain more positive than those for other groups in the eyes of Democrats and independents who lean to the Democratic Party. None of the other groups rated on the survey garner as much confidence; the closest contenders are public school principals and the military. About three-quarters (76%) of Democrats and Democratic leaners have at least a fair amount of confidence in public school principals; 68% say the same about the military.
There has been a steady decline in confidence in medical scientists among Republicans and Republican leaners since April 2020. In the latest survey, just 15% have a great deal of confidence in medical scientists, down from 31% who said this in April 2020 and 26% who said this in November 2020. There has been a parallel increase in the share of Republicans holding negative views of medical scientists, with 34% now saying they have not too much or no confidence at all in medical scientists to act in the public’s best interests – nearly three times higher than in January 2019, before the coronavirus outbreak.
Republicans’ views of scientists have followed a similar trajectory. Just 13% have a great deal of confidence in scientists, down from a high of 27% in January 2019 and April 2020. The share with negative views has doubled over this time period; 36% say they have not too much or no confidence at all in scientists in the latest survey.
Republicans’ confidence in other groups and institutions has also declined since the pandemic took hold. The share of Republicans with at least a fair amount of confidence in public school principals is down 27 points since April 2020. Views of elected officials, already at low levels, declined further; 15% of Republicans have at least a fair amount of confidence in elected officials to act in the public’s best interests, down from 37% in April 2020.
Race and ethnicity, education, partisan affiliation each shape confidence in medical scientists
People’s assessments of scientists and medical scientists are tied to several factors, including race and ethnicity as well as levels of education and partisan affiliation.
Looking across racial and ethnic groups, confidence in medical scientists declined at least modestly among White and Black adults over the past year. The decline was especially pronounced among White adults.
There is now little difference between how White, Black and Hispanic adults see medical scientists. This marks a shift from previous Pew Research Center surveys, where White adults were more likely than Black adults to express high levels of confidence in medical scientists.
Among White adults, the share with a great deal of confidence in medical scientists to act in the best interests of the public has declined from 43% to 29% over the past year. Ratings are now lower than they were in January 2019, before the coronavirus outbreak in the U.S.
Among Black adults, 28% say they have a great deal of confidence in medical scientists to act in the public’s best interests, down slightly from November 2020 (33%).
The share of Hispanic adults with a strong level of trust in medical scientists is similar to the share who expressed the same level of trust in November 2020, although the current share is 16 points lower than it was in April 2020 (29% vs 45%), shortly after measures to address the coronavirus outbreak began. Ratings of medical scientists among Hispanic adults continue to be lower than they were before the coronavirus outbreak. In January 2019, 37% of Hispanic adults said they had a great deal of confidence in medical scientists.
While the shares of White, Black and Hispanic adults who express a great deal of confidence in medical scientists have declined since the early stages of the coronavirus outbreak in the U.S., majorities of these groups continue to express at least a fair amount of confidence in medical scientists, and the ratings for medical scientists compare favorably with those of other groups and institutions rated in the survey.
Confidence in scientists tends to track closely with confidence in medical scientists. Majorities of White, Black and Hispanic adults have at least a fair amount of confidence in scientists. And the shares with this view continue to rank at or above those for other groups and institutions. For more on confidence in scientists over time among White, Black and Hispanic adults, see the Appendix.
Confidence in medical scientists and scientists across racial and ethnic groups plays out differently for Democrats and Republicans.
White Democrats (52%) are more likely than Hispanic (36%) and Black (30%) Democrats to say they have a great deal of confidence in medical scientists to act in the public’s best interests. However, large majorities of all three groups say they have at least a fair amount of confidence in medical scientists.
Among Republicans and Republican leaners, 14% of White adults say they have a great deal of confidence in medical scientists, while 52% say they have a fair amount of confidence. Views among Hispanic Republicans are very similar to those of White Republicans, in contrast to differences seen among Democrats.
There are similar patterns in confidence in scientists. (However, the sample size for Black Republicans in the survey is too small to analyze on these measures.) See the Appendix for more.
Americans with higher levels of education express more positive views of scientists and medical scientists than those with lower levels of education, as has also been the case in past Center surveys. But education matters more in assessments by Democrats than Republicans.
Democrats and Democratic leaners with at least a college degree express a high level of confidence in medical scientists: 54% have a great deal of confidence and 95% have at least a fair amount of confidence in medical scientists to act in the public’s interests. By comparison, a smaller share of Democrats who have not graduated from college have confidence in medical scientists.
Among Republicans and Republican leaners, college graduates are 9 points more likely than those with some college experience or less education to express a great deal of confidence in medical scientists (21% vs. 12%).
There is a similar difference between those with higher and lower education levels among Democrats when it comes to confidence in scientists. Among Republicans, differences by education are less pronounced; there is no significant difference by education level in the shares holding the strongest level of confidence in scientists to act in the public’s interests. See the Appendix for details.
Dominique David-Chavez works with Randal Alicea, an Indigenous farmer, in his tobacco-drying shed in Cidra, Borikén (Puerto Rico). Credit: Norma Ortiz
Many scientists rely on Indigenous people to guide their work — by helping them to find wildlife, navigate rugged terrain or understand changing weather trends, for example. But these relationships have often felt colonial, extractive and unequal. Researchers drop into communities, gather data and leave — never contacting the locals again, and excluding them from the publication process.
Today, many scientists acknowledge the troubling attitudes that have long plagued research projects in Indigenous communities. But finding a path to better relationships has proved challenging. Tensions surfaced last year, for example, when seven University of Auckland academics argued that planned changes to New Zealand’s secondary school curriculum, to “ensure parity between mātauranga Māori”, or Maori knowledge, and “other bodies of knowledge”, could undermine trust in science.
Last month, the University of Auckland’s vice-chancellor, Dawn Freshwater, announced a symposium to be held early this year, at which different viewpoints can be discussed. In 2016, the US National Science Foundation (NSF) launched Navigating the New Arctic — a programme that encouraged scientists to explore the wide-reaching consequences of climate change in the north. A key sentence in the programme description reflected a shift in perspective: “Given the deep knowledge held by local and Indigenous residents in the Arctic, NSF encourages scientists and Arctic residents to collaborate on Arctic research projects.” The Natural Sciences and Engineering Research Council of Canada and New Zealand’s Ministry of Business, Innovation and Employment have made similar statements. So, too, have the United Nations cultural organization UNESCO and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services.
But some Indigenous groups feel that despite such well-intentioned initiatives, their inclusion in research is only a token gesture to satisfy a funding agency.
There’s no road map out of science’s painful past. Nature asked three researchers who belong to Indigenous communities in the Americas and New Zealand, plus two funders who work closely with Northern Indigenous communities, how far we’ve come toward decolonizing science — and how researchers can work more respectfully with Indigenous groups.
DANIEL HIKUROA: Weave folklore into modern science
Daniel Hikuroa is an Earth systems and environmental humanities researcher at Te Wānanga o Waipapa, University of Auckland, New Zealand, and a member of the Māori community.
We all have a world view. Pūrākau, or traditional stories, are a part of Māori culture with great potential for informing science. But what you need to understand is that they’re codified according to an Indigenous world view.
For example, in Māori tradition, we have these things called taniwha that are like water serpents. When you think of taniwha, you think, danger, risk, be on your guard! Taniwha as physical entities do not exist. Taniwha are a mechanism for describing how rivers behave and change through time. For example, pūrākau say that taniwha live in a certain part of the Waikato River, New Zealand’s longest, running for 425 kilometres through the North Island. That’s the part of the river that tends to flood. Fortunately, officials took knowledge of taniwha into account when they were designing a road near the Waikato River in 2002. Because of this, we’ve averted disasters.
Sometimes, it takes a bit of explanation to convince non-Indigenous scientists that pūrākau are a variation on the scientific method. They’re built on observations and interpretations of the natural world, and they allow us to predict how the world will function in the future. They’re repeatable, reliable, they have rigour, and they’re accurate. Once scientists see this, they have that ‘Aha!’ moment where they realize how well Western science and pūrākau complement each other.
We’re very lucky in New Zealand because our funding agencies help us to disseminate this idea. In 2005, the Ministry of Research, Science and Technology (which has since been incorporated into the Ministry of Business, Innovation and Employment) developed a framework called Vision Mātauranga. Mātauranga is the Māori word for knowledge, but it also includes the culture, values and world view of Māori people. Whenever a scientist applies for funding, they’re asked whether their proposal addresses a Māori need or can draw on Māori knowledge. The intent of Vision Mātauranga is to broaden the science sector by unlocking the potential of Māori mātauranga.
In the early days of Vision Mātauranga, some Indigenous groups found themselves inundated with last-minute requests from researchers who just wanted Indigenous people to sign off on their proposals to make their grant applications more competitive. It was enormously frustrating. These days, most researchers are using the policy with a higher degree of sophistication.
Vision Mātauranga is at its best when researchers develop long-term relationships with Indigenous groups so that they know about those groups’ dreams and aspirations and challenges, and also about their skill sets. Then the conversation can coalesce around where those things overlap with the researchers’ own goals. The University of Waikato in Hamilton has done a great job with this, establishing a chief-to-chief relationship in which the university’s senior management meets maybe twice a year with the chiefs of the Indigenous groups in the surrounding area. This ongoing relationship lets the university and the Indigenous groups have high-level discussions that build trust and can inform projects led by individual labs.
We’ve made great progress towards bridging Māori culture and scientific culture, but attitudes are still evolving — including my own. In 2011, I published my first foray into using Māori knowledge in science, and I used the word ‘integrate’ to describe the process of combining the two. I no longer use that word, because I think weaving is a more apt description. When you weave two strands together, the integrity of the individual components can remain, but you end up with something that’s ultimately stronger than what you started with.
DOMINIQUE DAVID-CHAVEZ: Listen and learn with humility
Dominique David-Chavez is an Indigenous land and data stewardship researcher at Colorado State University in Fort Collins, and a member of the Arawak Taíno community.
People often ask how can we integrate Indigenous knowledge into Western science. But framing the question in this way upholds the unhealthy power dynamic between Western and Indigenous scientists. It makes it sound as though there are two singular bodies of knowledge, when in fact Indigenous knowledge — unlike Western science — is drawn from thousands of different communities, each with its own knowledge systems.
At school, I was taught this myth that it was European and American white men who discovered all these different physical systems on Earth — on land, in the skies and in the water. But Indigenous people have been observing those same systems for hundreds or thousands of years. When Western scientists claim credit for discoveries that Indigenous people made first, they’re stealing Indigenous people’s contributions to science. This theft made me angry, but it also drove me. I decided to undertake graduate studies so that I could look critically at how we validate who creates knowledge, who creates science and who are the scientists.
To avoid perpetuating harmful power dynamics, researchers who want to work in an Indigenous people’s homeland should first introduce themselves to the community, explain their skills and convey how their research could serve the community. And they should begin the work only if the community invites them to. That invitation might take time to come! The researchers should also build in time to spend in the community to listen, be humbled and learn.
If you don’t have that built-in relational accountability, then maybe you’re better off in a supporting role.
Overall, my advice to Western researchers is this: always be questioning your assumptions about where science came from, where it’s going and what part you should be playing in its development.
MARY TURNIPSEED: Fund relationship building and follow-ups
Mary Turnipseed is an ecologist and grantmaker at the Gordon and Betty Moore Foundation, Palo Alto, California.
I’ve been awarding grants in the Arctic since 2015, when I became a marine-conservation programme officer at the Gordon and Betty Moore Foundation. A lesson I learnt early on about knowledge co-production — the term used for collaborations between academics and non-academics — is to listen. In the non-Indigenous parts of North America, we’re used to talking, but flipping that on its end helps us to work better with Indigenous communities.
Listening to our Indigenous Alaskan Native partners is often how I know whether a collaboration is working well or not. If the community is supportive of a particular effort, that means they’ve been able to develop a healthy relationship with the researchers. We have quarterly check-ins with our partners about how projects are going; and, in non-pandemic times, I frequently travelled to Alaska to talk directly with our partners.
One way in which we help to spur productive relationships is by giving research teams a year of preliminary funding — before they even start their research — so that they can work with Indigenous groups to identify the questions their research will address and decide how they’re going to tackle them. We really need more funding agencies to set aside money for this type of early relationship-building, so that everyone goes into a project with the same expectations, and with a level of trust for one another.
Members of the Ikaaġvik Sikukun collaboration at the Native Village of Kotzebue, Alaska. Credit: Sarah Betcher/Farthest North Films
Developing relationships takes time, so it’s easiest when Indigenous communities have a research coordinator, such as Alex Whiting (environmental programme director for the Native Village of Kotzebue), to handle all their collaborations. I think the number of such positions could easily be increased tenfold, and I’d love to see the US federal government offer more funding for these types of position.
Funding agencies should provide incentives for researchers to go back to the communities that they’ve worked with and share what they’ve found. There’s always talk among Indigenous groups about researchers who come in, collect data, get their PhDs and never show up again. Every time that happens, it hurts the community, and it hurts the next researchers to come. I think it’s essential for funding agencies to prevent this from happening.
ALEX WHITING: Develop a toolkit to decolonize relationships
Alex Whiting is an environmental specialist in Kotzebue, Alaska, and a formally adopted member of the Qikiktagrukmiut community.
A lot of the time, researchers who operate in a colonial way aren’t aware of the harm they’re doing. But many people are realizing that taking knowledge without involving local people is not only unethical, but inefficient. In 1997, the Native Village of Kotzebue — a federally recognized seat of tribal government representing the Qikiktagrukmiut, northwest Alaska’s original inhabitants — hired me as its environmental programme director. I helped the community to develop a research protocol that lays out our expectations of scientists who work in our community, and an accompanying questionnaire.
By filling in the one-page questionnaire, researchers give us a quick overview of what they plan to do; its relevance and potential benefit to our community; the need for local involvement; and how we’ll be compensated financially. This provides us with a tool through which to develop relationships with researchers, make sure that our priorities and rights are addressed, and hold researchers accountable. Making scientists think about how they’ll engage with us has helped to make research a more equitable, less extractive activity.
We cannot force scientists to deal with us. It’s a free country. But the Qikiktagrukmiut are skilled at activities such as boating, travelling on snow and capturing animals — and those skills are extremely useful for fieldwork, as is our deep historical knowledge of the local environment. It’s a lot harder for scientists to accomplish their work without our involvement. Many scientists realize this, so these days we get 6–12 research proposals per year. We say yes to most of them.
The NSF’s Navigating the New Arctic programme has definitely increased the number of last-minute proposals that communities such as ours get swamped with a couple of weeks before the application deadline. Throwing an Indigenous component into a research proposal at the last minute is definitely not an ideal way to go about things, because it doesn’t give us time to fully consider the research before deciding whether we want to participate. But at least the NSF has recognized that working with Indigenous people is a thing! They’re just in the growing-pains phase.
Not all Indigenous groups have had as much success as we have, and some are still experiencing the extractive side of science. But incorporating Indigenous knowledge into science can create rapid growth in understanding, and we’re happy we’ve helped some researchers do this in a respectful way.
NATAN OBED: Fund research on Indigenous priorities
Natan Obed is president of Inuit Tapiriit Kanatami, and a member of the Inuit community.
Every year, funding agencies devote hundreds of millions of dollars to work that occurs in the Inuit homeland in northern Canada. Until very recently, almost none of those agencies considered Inuit peoples’ priorities.
These Indigenous communities face massive social and economic challenges. More than 60% of Inuit households are food insecure, meaning they don’t always have enough food to maintain an active, healthy life. On average, one-quarter as many doctors serve Inuit communities as serve urban Canadian communities. Our life expectancy is ten years less than the average non-Indigenous Canadian’s. The list goes on. And yet, very little research is devoted to addressing these inequities.
Last year, the Inuit advocacy organization Inuit Tapiriit Kanatami (the name means ‘Inuit are united in Canada’) collaborated with the research network ArcticNet to start its own funding programme, which is called the Inuit Nunangat Research Program (INRP). Funding decisions are led entirely by Inuit people to ensure that all grants support research on Inuit priorities. Even in the programme’s first year, we got more requests than we could fund. We selected 11 proposals that all relate directly to the day-to-day lives of Inuit people. For example, one study that we’re funding aims to characterize a type of goose that has newly arrived in northern Labrador; another focuses on how social interactions spread disease in Inuit communities.
Our goal with the INRP is twofold: first, we want to generate knowledge that addresses Inuit concerns, and second, we want to create an example of how other granting agencies can change so that they respect the priorities of all groups. We’ve been moderately successful in getting some of the main Canadian granting agencies, such as the Canadian Institutes of Health Research, to allocate more resources to things that matter to Inuit people. I’d like to think that the INRP gives them a model for how to become even more inclusive.
We hope that, over the next ten years, it will become normal for granting agencies to consider the needs of Indigenous communities. But we also know that institutions change slowly. Looking back at where we’ve been, we have a lot to be proud of, but we still have a huge task ahead of us.
These interviews have been edited for length and clarity.
Lisa Loseto stands by a campfire. Credit: Oksana Schimnowski
Lisa Loseto is a research scientist at Fisheries and Oceans Canada, a federal government department whose regional offices include one in Winnipeg, where she is based. Some of Northern Canada’s Indigenous people have shaped her research into how beluga whales (Delphinapterus leucas) interact with their environments, and have taught her to rethink her own part in the scientific method. As co-editor-in-chief of the journal Arctic Science since 2017, she is looking at ways to increase Indigenous representation in scientific publishing, including the editorial and peer-review processes.
What got you thinking about the role of Indigenous people in scientific publishing?
In 2020, Arctic Science published a special issue centred on knowledge co-produced by Western scientists and Indigenous people. As production of that issue progressed, the peer-review and editorial processes stuck out as aspects lacking Indigenous representation. We were soliciting papers to highlight the contributions of Indigenous knowledge, but the peer-review process was led by non-Indigenous editors like myself, and the articles were reviewed by academics. A few members of the editorial board thought, ‘Let’s talk about this and think about ways to provide more balance.’ We discussed the issue in a workshop that included representatives from several groups that are indigenous to Canada’s Arctic.
What did the workshop reveal about the Indigenous participants’ perceptions of scientific publishing?
For a lot of people, publishing seemed like a distant concept, so we explained how the editorial and peer-review processes work. We described peer review as a method for validating knowledge before it’s published, and many Indigenous participants recognized similarities between that process and one in their own lives: in the Arctic, each generation passes down knowledge of how to live in a harsh environment, and over time this knowledge is tested and refined. The Indigenous workshop participants said, “We would die if we didn’t have the peer-review process.”
The scientific method used by Westerners is colonial: it emphasizes objectivity and performing experiments in the absence of outside influences. This mindset can feel alienating for many Indigenous people, who see themselves as integral parts of nature. This makes me think scientific publishing doesn’t fit an Indigenous framework.
The dense jargon and idiosyncratic structures of scientific publications make them difficult for people without a formal scientific education to jump into. Even people training to become scientists often don’t get involved in publishing until they’re in graduate school because there’s so much background knowledge that they need to have first.
If a journal article draws on Indigenous knowledge, should it include an Indigenous peer reviewer?
Perhaps, but trying to force Indigenous perspectives into a process that was created to advance Western priorities can come with its own problems. Scientific publications serve the dual purposes of disseminating information and acting as a yardstick for scientists’ careers. Most members of Indigenous groups aren’t concerned with building up their academic CVs; in fact, some are uncomfortable with being named as authors because they see their knowledge as part of a collective body, rather than belonging solely to themselves. So do publications have the same weight for Indigenous people? Maybe not. In light of this, is participating in this system really the best use of time for Indigenous people who aren’t in academia — especially when their communities are already overtaxed with researchers’ requests for guidance through prepublication aspects of performing research in remote areas?
Indigenous communities hold a wealth of knowledge that can advance science. Credit: Galaxiid/Alamy
As an alternative to contributing to research articles, we’re considering starting a commentary section of Arctic Science. This could give more Indigenous people a venue to publish their views on the scientific process, and their observations of natural trends, in a less technical format.
Can Indigenous journal editors help to bridge the divide between Indigenous people and academic publications?
Yes, but there are very few Indigenous journal editors. Historically, editor positions have been reserved for senior scientists, and many senior scientists are white men. I’m trying to bring on more early-career scientists as editors, as this group is often more diverse. By moving away from offering these positions to only the most senior scientists, I think we’ll see a shift in demographics. At the same time, I don’t want to put the burden of bridging current divides entirely on Indigenous people. That job is for all of us.
What is Arctic Science planning to do moving forward?
My hope is to build an Indigenous advisory group that can advise Arctic Science on the peer-review process generally and consider, on a case-by-case basis, whether articles could benefit from an Indigenous peer reviewer. Beyond that, we’re still figuring out how to engage more people without being prescriptive about how they’re engaged.
What do you hope these actions will achieve?
Publications are power. Policy decisions are based on things that are written down and tangible: peer-reviewed papers and reports. Not only do scientific publications guide policy decisions, they also determine who gets money. The more you publish, and the better the journals you publish in, the more power you have.
Indigenous communities have tremendous knowledge, but much of it is passed down orally rather than published in written form. I think the fact that Indigenous representation is weak in academia, including in publishing, upholds the power imbalance that exists between Indigenous people and settlers. I want to find a better balance.
The relationship between genuine knowledge and fringe doctrines is closer than many people are willing to accept, says a historian who specializes in the history of science
For scientific institutions, these practices and movements fall into the category of “pseudosciences”: doctrines built on foundations their adherents consider scientific, from which they develop a current that drifts away from what the academic world normally accepts.
But how do we distinguish science from what merely passes itself off as science?
That task is much more complicated than it seems, according to Michael Gordin, a professor at Princeton University in the United States and a specialist in the history of science. Gordin is the author of the book On the Fringe: Where Science Meets Pseudoscience.
His book details how pseudosciences operate and how, in his view, they are an inevitable consequence of scientific progress.
In an interview with BBC News Mundo (the BBC’s Spanish-language service), Gordin lays out the complex relationship between what is considered genuine science and what he calls fringe doctrines.
Michael Gordin, author of the book On the Fringe: Where Science Meets Pseudoscience
BBC News Mundo – You say there is no sharp line separating science from pseudoscience, but science has a clear, verifiable method. Isn’t that a clear difference from pseudoscience?
Michael Gordin – It is usually assumed that science has a single method, but that is not true. Science has many methods. Geologists do their work very differently from theoretical physicists, and molecular biologists from neuroscientists. Some scientists work in the field, observing what happens. Others work in the laboratory, under controlled conditions. Others run simulations. In other words, science has many methods, and they are heterogeneous. Science is dynamic, and that dynamism makes the line hard to draw. We can take a concrete case and say whether it is science or pseudoscience. With a concrete case, that is easy.
The problem is that the line is not consistent, and when you look at a larger number of cases, there are things that used to be considered science and are now considered pseudoscience, such as astrology. And there are subjects like continental drift, which was initially considered a fringe theory and is now a basic theory of geophysics.
Almost everything considered pseudoscience today was science at some point in the past; it was refuted over time, and those who keep supporting it are regarded as lunatics or charlatans. In other words, the definition of what is science or pseudoscience changes over time. That is one of the reasons this judgment is so difficult.
Considered a science in the past, astrology now sits among the pseudosciences, or fringe doctrines, as Michael Gordin calls them
BBC News Mundo – But there are things that do not change over time. For example, 2+2 has always equaled 4. Doesn’t that mean science works from principles that leave no room for interpretation…
Gordin – Well, that is not necessarily right. Two UFOs plus two UFOs are four UFOs.
It is interesting that you chose mathematics, which is in fact not an empirical science, since it does not refer to the external world. It is a set of rules we use to determine certain things.
One of the reasons the distinction is so complicated is that fringe doctrines watch what counts as established science and adapt their arguments and techniques to it.
One example is “scientific creationism,” which holds that the world was created in seven days, 6,000 years ago. There are scientific-creationism publications that include mathematical charts of the decay rates of various isotopes in an attempt to prove that the Earth is only 6,000 years old.
It would be convenient to say that using mathematics and presenting charts is what makes something science, but the reality is that almost all fringe doctrines use mathematics in some way.
Scientists disagree about the kind of mathematics being used, but there are, for example, people who argue that the advanced mathematics of string theory is no longer scientific, because it has lost empirical verification. It is high-level mathematics, done by PhDs from the best universities, yet there is a debate within science, among physicists, over whether it should count as science at all.
I am not saying everyone should be a creationist, but when quantum mechanics was first proposed, some people said: “this looks very strange,” “it doesn’t fit measurements the way we believe they work,” or “is this really science?”
In recent years, the idea that the Earth is flat has gained currency among some groups
BBC News Mundo – So you are saying that pseudosciences, or fringe doctrines, have some value?
Gordin – The point is that many things we consider innovative come from the margins of orthodox knowledge.
What I want to say boils down to three points: first, there is no clear dividing line; second, understanding what falls on each side of the line requires understanding the context; and third, the normal process of science produces fringe doctrines.
We cannot simply dismiss these doctrines, because they are inevitable. They are a by-product of the way the sciences work.
BBC News Mundo – Does that mean we should be more tolerant of pseudosciences?
Gordin – Scientists, like everyone else, have limited time and energy and cannot investigate everything.
So any time devoted to refuting or denying the legitimacy of a fringe doctrine is time taken away from doing science, and it may not even produce results.
People have been refuting scientific creationism for decades. They have been trying to debunk telepathy for even longer, and it is still hovering around us. There are many kinds of fringe ideas. Some are highly politicized and can even harm public health or the environment. Those, in my view, are the ones we need to devote attention and resources to eliminating, or at least to explaining why they are wrong.
But I don’t think other ideas, such as believing in UFOs, are particularly dangerous. I don’t believe even creationism is as dangerous as being anti-vaccine, or believing that climate change is a hoax.
We should see pseudosciences as inevitable and approach them pragmatically. We have a limited amount of resources and need to choose which doctrines can cause harm and how to confront them.
Should we simply try to reduce the harm they can cause? That is the case with mandatory vaccination, whose goal is to prevent harm without necessarily convincing opponents that they are wrong. Should we persuade them that they are mistaken? That has to be weighed case by case.
In various parts of the world there are groups opposed to the covid-19 vaccines
BBC News Mundo – How, then, should we deal with pseudosciences?
Gordin – One possibility is to recognize that these are people interested in science.
A flat-earther, for example, is someone interested in the configuration of the Earth. That means it is someone who wanted to investigate nature and, for some reason, took the wrong direction.
You can then ask why that happened. You can approach the person and say: “if you don’t believe this evidence, what kind of evidence would you believe?” or “show me your evidence and let’s talk.”
That is something we could do, but is it worth doing? It is a doctrine I do not consider dangerous. It would be a problem if every government in the world thought the Earth was flat, but I don’t see that risk.
The contemporary version of flat-earthism emerged about 15 years ago. I don’t think scholars yet understand very well how that happened, or why it happened so fast.
Another thing we can do is not necessarily try to persuade believers that they are wrong, because they may not accept it, but try to understand how the movement emerged and spread. That can guide us in confronting more serious threats.
People who believe in fringe doctrines often borrow elements of established science to reach their conclusions
BBC News Mundo – More serious threats such as the anti-vaccine movement…
Gordin – Vaccines were invented in the 18th century, and there have always been people who opposed them, partly because every vaccine carries some risk, however low.
Over time, the way this was handled was to set up an insurance system that basically says: you must get the vaccine, but if you do and suffer a bad outcome, we will compensate you for the harm.
I am sure the same will happen with the covid vaccine, though we do not yet know the full spectrum or the seriousness of the harm it might cause. But the harms, and the probability of their occurring, appear to be very low.
As for anti-vaxxers who believe, for example, that the covid vaccine contains a microchip, the only action that can be taken for the good of public health is to make vaccination mandatory. That is how polio was eradicated in most of the world, even though the vaccine had its opponents.
BBC News Mundo – But making it mandatory may lead someone to say that science is being used for political or ideological purposes…
Gordin – I am sure that if the state imposes a mandatory vaccine, someone will say that. But this is not about ideology. The state already mandates many things, and some vaccines are already mandatory.
And the state makes all kinds of scientific claims. Teaching creationism in schools is not allowed, for example, nor is research into the cloning of human beings. In other words, the state has intervened many times in scientific disputes, and it tries to do so in line with the scientific consensus.
BBC News Mundo – People who embrace pseudosciences do so out of skepticism, which is precisely one of the fundamental values of science. That is a paradox, isn’t it?
Gordin – That is one of the reasons I believe there is no clear dividing line between science and pseudoscience. Skepticism is a tool we all use. The question is what kinds of things you are skeptical about, and what could convince you of a particular fact.
In the 19th century there was a great debate over whether atoms really existed. Today practically no scientist doubts their existence. That is how science works. The focus of skepticism moves from one place to another over time. When that skepticism is aimed at matters already settled, problems sometimes arise, but there are occasions when it is necessary.
The essence of Einstein’s theory of relativity is that the ether, the substance through which light waves were supposed to travel, does not exist. To get there, Einstein focused his skepticism on one fundamental postulate, but he did so while showing that much of the knowledge already considered established could be preserved.
So skepticism must have a purpose. If you are skeptical merely for the sake of being skeptical, the process yields no progress.
Skepticism is one of the basic principles of science
BBC News Mundo – Is it possible that, in the future, what we now consider science will be discarded as pseudoscience?
Gordin – In the future, many doctrines will be considered pseudoscience, simply because there are many things we do not yet understand.
There is much we do not understand about the brain or the environment. In the future, people will look at many theories and say they were wrong.
It is not enough for a theory to be incorrect for it to count as pseudoscience. There have to be people who believe it is correct even though the consensus says it is mistaken, and scientific institutions have to consider it, for some reason, dangerous.
But the taming of the coronavirus conceals failures in public health
The Economist – Nov 8th 2021
PANDEMICS DO NOT die—they fade away. And that is what covid-19 is likely to do in 2022. True, there will be local and seasonal flare-ups, especially in chronically undervaccinated countries. Epidemiologists will also need to watch out for new variants that might be capable of outflanking the immunity provided by vaccines. Even so, over the coming years, as covid settles into its fate as an endemic disease, like flu or the common cold, life in most of the world is likely to return to normal—at least, the post-pandemic normal.
Behind this prospect lie both a stunning success and a depressing failure. The success is that very large numbers of people have been vaccinated and that, at each stage of infection from mild symptoms to intensive care, new medicines can now greatly reduce the risk of death. It is easy to take for granted, but the rapid creation and licensing of so many vaccines and treatments for a new disease is a scientific triumph.
The polio vaccine took 20 years to go from early trials to its first American licence. By the end of 2021, just two years after SARS-CoV-2 was first identified, the world was turning out roughly 1.5bn doses of covid vaccine each month. Airfinity, a life-sciences data firm, predicts that by the end of June 2022 a total of 25bn doses could have been produced. At a summit in September President Joe Biden called for 70% of the world to be fully vaccinated within a year. Supply need not be a constraint.
Immunity has been acquired at a terrible cost
Vaccines do not offer complete protection, however, especially among the elderly. Yet here, too, medical science has risen to the challenge. For example, early symptoms can be treated with molnupiravir, a twice-daily antiviral pill that in trials cut deaths and admissions to hospital by half. The gravely ill can receive dexamethasone, a cheap corticosteroid, which reduces the risk of death by 20-30%. In between are drugs like remdesivir and an antibody cocktail made by Regeneron.
Think of the combination of vaccination and treatment as a series of walls, each of which blocks a proportion of viral attacks from becoming fatal. The erection of each new wall further reduces the lethality of covid.
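As a rough illustration of the walls metaphor (with invented numbers, not clinical estimates), each layer can be modeled as independently blocking some fraction of would-be-fatal cases, so the residual risk is the product of what slips past every wall:

```python
# Toy "series of walls" model: each intervention independently blocks a
# fraction of would-be-fatal infections; residual risk is what slips past
# every wall. All numbers are invented for illustration only.
baseline_fatal_risk = 0.01  # hypothetical risk of death with no protection

walls = {
    "vaccination": 0.90,       # fraction of fatal outcomes blocked (assumed)
    "early antiviral": 0.50,   # e.g. a molnupiravir-style pill (assumed)
    "hospital steroid": 0.25,  # e.g. dexamethasone for the gravely ill (assumed)
}

risk = baseline_fatal_risk
for name, blocked in walls.items():
    risk *= 1.0 - blocked
    print(f"after {name:16s} residual fatal risk = {risk:.5f}")

print(f"overall reduction vs. baseline: {1 - risk / baseline_fatal_risk:.1%}")
```

Even modestly effective walls compound: in this made-up example, three layers that individually block 90%, 50% and 25% of fatal outcomes together block about 96%.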
However, alongside this success stands that failure. One further reason why covid will do less harm in the future is that it has already done so much in the past. Very large numbers of people are protected from current variants of covid only because they have already been infected. And many more, particularly in the developing world, will remain unprotected by vaccines or medicines long into 2022.
This immunity has been acquired at terrible cost. The Economist has tracked excess deaths during the pandemic—the mortality over and above what you would have expected in a normal year. Our central estimate on October 22nd was of a global total of 16.5m deaths (with a range from 10.2m to 19.2m), which was 3.3 times larger than the official count. Working backwards using assumptions about the share of fatal infections, a very rough estimate suggests that these deaths are the result of 1.5bn-3.6bn infections—six to 15 times the recorded number.
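The back-calculation in that paragraph is simple enough to reproduce. In the sketch below, the infection-fatality rates are assumptions chosen only to bracket the article's 1.5bn-3.6bn span, and the recorded-infection count is inferred from the "six to 15 times" comparison; none of these are independently sourced figures.

```python
# Reproduce the article's rough arithmetic: implied infections equal excess
# deaths divided by an assumed infection-fatality rate (IFR). The IFR values
# and the recorded-case count are assumptions chosen to match the text.
excess_deaths = 16.5e6        # central estimate of global excess deaths
recorded_infections = 0.24e9  # implied by "six to 15 times the recorded number"

for ifr in (0.011, 0.0046):   # assumed high / low shares of fatal infections
    infections = excess_deaths / ifr
    print(f"IFR {ifr:.2%}: ~{infections / 1e9:.1f}bn infections, "
          f"about {infections / recorded_infections:.0f}x the recorded count")
```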
The combination of infection and vaccination explains why in, say, Britain in the autumn, you could detect antibodies to covid in 93% of adults. People are liable to re-infection, as Britain shows, but with each exposure to the virus the immune system becomes better trained to repel it. Along with new treatments and the fact that more young people are being infected, that explains why the fatality rate in Britain is now only a tenth of what it was at the start of 2021. Other countries will also follow that trajectory on the road to endemicity.
All this could yet be upended by a dangerous new variant. The virus is constantly mutating and the more of it there is in circulation, the greater the chance that an infectious new strain will emerge. However, even if Omicron and Rho variants strike, they may be no more deadly than Delta is. In addition, existing treatments are likely to remain effective, and vaccines can rapidly be tweaked to take account of the virus’s mutations.
Just another endemic disease
Increasingly, therefore, people will die from covid because they are elderly or infirm, or they are unvaccinated or cannot afford medicines. Sometimes people will remain vulnerable because they refuse to have a jab when offered one—a failure of health education. But vaccine doses are also being hoarded by rich countries, and getting needles into arms in poor and remote places is hard. Livelihoods will be ruined and lives lost all for lack of a safe injection that costs just a few dollars.
Covid is not done yet. But by 2023, it will no longer be a life-threatening disease for most people in the developed world. It will still pose a deadly danger to billions in the poor world. But the same is, sadly, true of many other conditions. Covid will be well on the way to becoming just another disease.
Edward Carr: Deputy editor, The Economist
This article appeared in the Leaders section of the print edition of The World Ahead 2022 under the headline “Burning out”
There is no multiparty politics about the law of gravity, nor ideological pluralism about the theory of evolution
Oct. 30, 2021, 7:00 a.m.
The veteran meteorologist Luiz Carlos Molion is a kind of itinerant preacher of the gospel of pseudoscience. A few years back, he toured Brazil’s deep farm country, in the pay of a tractor dealership, offering his pontifical blessing to the 12 tribes of agribusiness. In his talks, he assured the shock troops of the Brazilian agricultural frontier that deforestation does not interfere with rainfall (wrong), that carbon dioxide emissions do not warm the Earth (wrong) and that we are in fact heading into a phase of global cooling (wrong).
There are less squalid ways to end a career that was once presumed scientific, but Molion apparently believes what he says. What really strikes me as unbelievable, however, is that a scientific journal published by the largest university in Latin America would open its doors to the invective of a former researcher like him.
That is what happened in the latest issue of the journal Khronos, published by USP’s Interunit Center for the History of Science. In a section titled “Debates,” Molion published the article “Aquecimento global antropogênico: uma história controversa” (Anthropogenic global warming: a controversial history). In it, Molion reheats (forgive, Lord, the lure of puns) his stale denialist leftovers, attacking the supposed inability of the IPCC, the UN’s climate panel, to predict this little planet’s future climate accurately using computer models (wrong).
The effrontery was such that it provoked formal protests from several prestigious researchers at the university, members of the USP center’s council. In a letter signed by them and other colleagues, they note that Molion has had no relevant publications on climate change in scientific journals for decades and that, to crown the folly, the article does not even refer to the history of science, which is the journal’s subject in the first place.
The response from Gildo Magalhães, Khronos’s editor and the center’s director, could not be more disheartening. Faced with the professors’ protest, this is what he said: “Censoring ideas has no place in the academic environment. At the university there should be no single party. Anyone who follows the debates at international climatology congresses knows that, outside the mainstream media, anthropogenic global warming is a scientifically controversial matter. Summarily equating an opinion that differs from the orthodox one with common scientific denialism, as happened in Brazil with the Covid vaccine, harms understanding and does nothing to help dialogue.”
I don’t know whether Magalhães is deliberately lying or is just badly informed, but the claim that human-caused warming is controversial at conferences in the field is false. Following the scientific journals shows unequivocally that challenges like Molion’s are taken seriously by practically no one. The controversy Magalhães cites does not exist.
It is somewhat embarrassing to have to explain this to a USP professor, but appeals to a debate of “ideas” and to a “single party” have no place in science. If your ideas are not based on experiments and observations carried out with rigor and submitted to the scrutiny of other members of the scientific community, they should have no place in an academic journal. There is no multiparty politics about the law of gravity, nor ideological pluralism about the theory of evolution. To deny this is to open the gate to historic backsliding.
In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.
Erika Check Hayden
February 26, 2020
To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.
The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.
“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”
Antisense drug
Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no accident that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.
A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.”
Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.
Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.
That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
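For readers who think in code, the letter-for-letter pairing is easy to sketch. The Python toy below only illustrates the matching idea described above; the sequence is invented, and real antisense drugs are chemically modified molecules designed against a patient’s actual transcript.

# Toy sketch of antisense base pairing (illustrative only).
# Each RNA letter pairs with its complement: A with U, G with C.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna: str) -> str:
    # Return the strand that sticks to this message, letter for letter.
    return "".join(COMPLEMENT[base] for base in rna)

message = "AUGGCUUACGGA"   # an invented scrap of messenger RNA
print(antisense(message))  # UACCGAAUGCCU pairs base for base with the message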
Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.
Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.”
Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.
“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.
Source code
As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.
Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.
Ipek may not survive past her 20s without treatment.
While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.
Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.
Custom drug
By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.
After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track Ipek’s brain volume and measure biomarkers in her cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.
One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.”
Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.”
Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.
Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.
The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.
Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.
Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.
Two new books on quantum theory could not, at first glance, seem more different. The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.”
Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe.
But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities.
So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve.
But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure.
With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing.
We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him.
Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix.
The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.”
Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor.
Then, one day, strange lights appear in a corner of the ceiling.
Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines.
Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order.
Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor.
In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor.
As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it?
The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history.
Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description.
From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry.
But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone.
Of course, this question is exactly what most non-experts want answered. For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting.
Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built.
Regaining that old feeling of disorientation now requires some extra steps.
As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe.
Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians.
Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes.
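In symbols, the recipe looks something like this (a standard textbook formulation, not a passage from either book): the wave function evolves smoothly and deterministically under the Schrödinger equation, while probabilities for a given measurement outcome come only from the Born rule:

\[
i\hbar \, \frac{\partial}{\partial t} \psi(x,t) = \hat{H} \, \psi(x,t),
\qquad
P(a) = \left| \langle a \mid \psi \rangle \right|^2 .
\]

Everything before a measurement is continuous and deterministic; chance, and with it the interpretive trouble, enters only in that final step.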
Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.”
Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future.
The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.”
Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, is merely a downstream effect. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the “Many Worlds” moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave.
The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.”
But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes.
Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here.
Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out.
Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise.
Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as he stirringly puts it. Smolin is best known for his role as doomsayer about string theory — his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. But while his books all court drama and are never boring, the drama often comes at the expense of argumentative care.
Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can influence each other instantaneously, unbounded by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive.
This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son. This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal:
I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me.
The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics.
Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection.
Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.”
Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes.
When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles.
Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes.
For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10⁸⁸ particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.”
Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well.
As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely.
There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood.
Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements.
David Kordahl is a graduate student in physics at Arizona State University.
David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114–124.
When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.
One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.
The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.
This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.
Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.
The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.
Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.
The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.
Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.
Risk Compensation
One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.
The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.
But time and again, the numbers tell a different story: Even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures—people interested in staying safe are interested in staying safe—and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation—and its implicit assumptions—continues to haunt our approach, in part because there hasn’t been a reckoning with the initial missteps.
Rules in Place of Mechanisms and Intuitions
Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.
In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.
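To see why the cutoffs invited gaming, it helps to write the rule down as the binary test it effectively was. The Python predicate below is a caricature of my own, not anyone’s actual guidance software; only the six-foot and 15-minute thresholds come from the guidelines.

# A caricature of the early "close contact" rule (illustrative only).
def close_contact(distance_feet: float, minutes: float) -> bool:
    # Binary by design: within six feet, for 15 minutes or more.
    return distance_feet <= 6 and minutes >= 15

print(close_contact(5.9, 15))   # True:  flagged as a risky exposure
print(close_contact(5.9, 14))   # False: one minute less and you are "safe"
print(close_contact(7.0, 480))  # False: a full workday at seven feet, "safe"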
All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.
Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.
Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.
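The statistical signature of overdispersion is easy to simulate. The Python sketch below draws each infected person’s number of secondary infections from a gamma-Poisson (negative binomial) mixture, a standard way of modeling super-spreading; the reproduction number and dispersion parameter here are round illustrative values, not measured estimates for this virus.

import numpy as np

rng = np.random.default_rng(0)
people = 10_000
R = 3.0  # average secondary infections per case (illustrative)
k = 0.1  # dispersion: the smaller k, the more transmission is concentrated

# Individual infectiousness varies widely (gamma); each person's actual
# count of secondary cases is a Poisson draw around their own rate.
rates = rng.gamma(shape=k, scale=R / k, size=people)
cases = rng.poisson(rates)

print(cases.mean())               # close to R on average
print((cases == 0).mean())        # yet most people infect no one at all
top10 = np.sort(cases)[-people // 10:]
print(top10.sum() / cases.sum())  # the top 10% account for most transmission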
After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.
Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.
Outdoors? It’s the opposite.
There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: The open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading—the biggest driver of the pandemic—appears to be an exclusively indoor phenomenon. I’ve been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.
And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.
We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.
Scolding and Shaming
Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.
Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?
On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.
While visible but low-risk activities attract the scolds, other actual risks—in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave—are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it’s almost as if we’ve “designed a public-health response most suitable for higher-income” groups and the “Twitter generation”—stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online—rather than provide the support and conditions necessary for more people to keep themselves safe.
And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.
Harm Reduction
Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex with no protections.
As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, “When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want.”
Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor of infectious diseases at the Boston University School of Medicine, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.
Socializing is not a luxury—kids need to play with one another, and adults need to interact. “Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors” is not just a sensible message; it’s a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they’ll do it indoors, out of sight of the scolding.
And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.
The Balance Between Knowledge and Action
Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.
Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.
Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.
And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
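To see what that convention looks like in practice, here is a minimal sketch in Python. The data are simulated, not from any real study; the point is only that the published claim is indirect (“data this lopsided would be unlikely if there were no effect”), not a direct proof of the effect.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=0.0, scale=1.0, size=200)  # group with no effect
treated = rng.normal(loc=0.3, scale=1.0, size=200)  # group with a small effect

# The t-test asks how surprising the observed difference would be
# under the null hypothesis that the two groups do not differ.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p = {p_value:.4f}")
print("reject the null" if p_value < 0.05 else "cannot reject the null")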
At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.
Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.
Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”
Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.
As late as April 2020, Anthony Fauci was slammed as too optimistic for suggesting we might plausibly have vaccines in a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.
Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percentage point higher.
These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.
And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.
Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.
Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.
So why isn’t this story more widely appreciated?
Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.
Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”
So we didn’t get our initial vaccine jubilation.
But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.
People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than with providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.
Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.
As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.
And yet, from the beginning, a good chunk of the public-facing messaging and news articles implied or claimed that vaccines won’t protect you against infecting other people or that we didn’t know if they would, when both were false. I found myself trying to convince people in my own social network that vaccines weren’t useless against transmission, and being bombarded on social media with claims that they were.
What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.
A year into the pandemic, we’re still repeating the same mistakes.
The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only find an enthusiastic audience; portions of the public engage in these behaviors themselves. Maybe that’s partly because proclaiming the importance of individual actions makes us feel as if we are in the driver’s seat, despite all the uncertainty.
Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to them. This focus on individual control goes along with something called the “fundamental attribution error”: when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.
An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.
You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.
There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.
And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.
The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.
Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.
The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.
Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.
Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.
For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One study of California found that line cooks are the highest-risk occupation for dying of COVID-19. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can safely work, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they do pose a real danger. Bill Hanage, associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—which oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.
New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”
Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.
Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.
But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.
Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.
Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.
And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.
But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.
Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.
Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect easing in rules for public behavior as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why, and also when and how increased vaccination will change more than their individual risks and opportunities, and see us out of this pandemic.
We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.
Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”
Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.
The world’s continuously warming climate is also revealed in contemporary ice melt at glaciers, such as this one in the Kenai Mountains, Alaska, seen in September 2019. Photograph: Joe Raedle/Getty Images
The planet is hotter now than it has been for at least 12,000 years, a period spanning the entire development of human civilisation, according to research.
Analysis of ocean surface temperatures shows human-driven climate change has put the world in “uncharted territory”, the scientists say. The planet may even be at its warmest for 125,000 years, although data on that far back is less certain.
The research, published in the journal Nature, reached these conclusions by solving a longstanding puzzle known as the “Holocene temperature conundrum”. Climate models have indicated continuous warming since the last ice age ended 12,000 years ago and the Holocene period began. But temperature estimates derived from fossil shells showed a peak of warming 6,000 years ago and then a cooling, until the industrial revolution sent carbon emissions soaring.
This conflict undermined confidence in the climate models and the shell data. But it was found that the shell data reflected only hotter summers and missed colder winters, and so was giving misleadingly high annual temperatures.
“We demonstrate that global average annual temperature has been rising over the last 12,000 years, contrary to previous results,” said Samantha Bova, at Rutgers University–New Brunswick in the US, who led the research. “This means that the modern, human-caused global warming period is accelerating a long-term increase in global temperatures, making today completely uncharted territory. It changes the baseline and emphasises just how critical it is to take our situation seriously.”
That 125,000-year mark corresponds to the last warm period between ice ages; scientists cannot be certain about it because there is less data relating to that time. One study, published in 2017, suggested that global temperatures were last as high as today 115,000 years ago, but that estimate rested on even less data.
The new research examined temperature measurements derived from the chemistry of tiny shells and algal compounds found in cores of ocean sediments, and solved the conundrum by taking account of two factors.
First, the shells and organic materials had been assumed to represent the entire year but in fact were most likely to have formed during summer when the organisms bloomed. Second, there are well-known predictable natural cycles in the heating of the Earth caused by eccentricities in the orbit of the planet. Changes in these cycles can lead to summers becoming hotter and winters colder while average annual temperatures change only a little.
Combining these insights showed that the apparent cooling after the warm peak 6,000 years ago, revealed by shell data, was misleading. The shells were in fact only recording a decline in summer temperatures, but the average annual temperatures were still rising slowly, as indicated by the models.
“Now they actually match incredibly well and it gives us a lot of confidence that our climate models are doing a really good job,” said Bova.
The study looked only at ocean temperature records, but Bova said: “The temperature of the sea surface has a really controlling impact on the climate of the Earth. If we know that, it is the best indicator of what global climate is doing.”
She led a research voyage off the coast of Chile in 2020 to take more ocean sediment cores and add to the available data.
Jennifer Hertzberg, of Texas A&M University in the US, said: “By solving a conundrum that has puzzled climate scientists for years, Bova and colleagues’ study is a major step forward. Understanding past climate change is crucial for putting modern global warming in context.”
Lijing Cheng, at the International Centre for Climate and Environment Sciences in Beijing, China, recently led a study that showed that in 2020 the world’s oceans reached their hottest level yet in instrumental records dating back to the 1940s. More than 90% of global heating is taken up by the seas.
Cheng said the new research was useful and intriguing. It provided a method to correct temperature data from shells and could also enable scientists to work out how much heat the ocean absorbed before the industrial revolution, a factor little understood.
The level of carbon dioxide today is at its highest for about 4m years and is rising at the fastest rate for 66m years. Further rises in temperature and sea level are inevitable until greenhouse gas emissions are cut to net zero.
As the next blockbuster science report on cutting emissions goes to governments for review, critics say it downplays the obstructive role of fossil fuel lobbying
Darth Vader: What would Star Wars be without its villain? (Pic: Pixabay)
On Monday, a weighty draft report on how to halt and reverse human-caused global warming will hit the inboxes of government experts. This is the final review before the Intergovernmental Panel on Climate Change (IPCC) issues its official summary of the science.
While part of the brief was to identify barriers to climate action, critics say there is little space given to the obstructive role of fossil fuel lobbying – and that’s a problem.
Robert Brulle, an American sociologist who has long studied institutions that promote climate denial, likened it to “trying to tell the story of Star Wars, but omitting Darth Vader”.
Tweeting in November, Brulle explained he declined an invitation to contribute to the working group three (WG3) report. “It became clear to me that institutionalized efforts to obstruct climate action was a peripheral concern. So I didn’t consider it worth engaging in this effort. It really deserves its own chapter & mention in the summary.”
In an email exchange with Climate Home News, Brulle expressed a hope the final version would nonetheless reflect his feedback. The significance of obstruction efforts should be reflected in the summary for policymakers and not “buried in an obscure part of the report,” he wrote.
His tweet sparked a lively conversation among scientists, with several supporting his concerns and others defending the IPCC, which aims to give policymakers an overview of the scientific consensus.
David Keith, a Harvard researcher into solar geoengineering, agreed the IPCC “tells a bloodless story, an abstract numb version of the sharp political conflict that will shape climate action”.
Social ecology and ecological economics professor Julia Steinberger, a lead author on WG3, said “there is a lot of self-censorship” within the IPCC. Where authors identify enemies of climate action, like fossil fuel companies, that content is “immediately flagged as political or normative or policy-prescriptive”.
The next set of reports is likely to be “a bit better” at covering the issue than previous efforts, Steinberger added, “but mainly because the world and outside publications have overwhelmingly moved past this, and the IPCC is catching up: not because the IPCC is leading.”
Politics professor Matthew Paterson was a lead author on WG3 for the previous round of assessment reports, published in 2014. He told Climate Home that Brulle is “broadly right” that lobbying hasn’t been given enough attention, although there is a “decent chunk” in the latest draft on corporations fighting for their interests and slowing down climate action.
Paterson said this was partly because the expertise of authors didn’t cover fossil fuel company lobbying and partly because governments would oppose giving the subject greater prominence. “Not just Saudi Arabia,” he said. “They object to everything. But the Americans [and others too]”.
While the IPCC reports are produced by scientists, government representatives negotiate the initial scope and have some influence over how the evidence is summarised before approving them for publication. “There was definitely always a certain adaptation – or an internalised sense of what governments are and aren’t going to accept – in the report,” said Paterson.
The last WG3 report in 2014 was nearly 1,500 pages long. Lobbying was not mentioned in its 32-page ‘summary for policymakers’ but lobbying against carbon taxes is mentioned a few times in the full report.
On page 1,184, the report says some companies “promoted climate scepticism by providing financial resources to like-minded think-tanks and politicians”. The report immediately balances this by saying “other fossil fuel companies adopted a more supportive position on climate science”.
One of the co-chairs of WG3, Jim Skea, rejected the criticisms as “completely unfair”. He told Climate Home News: “The IPCC produces reports very slowly because the whole cycle lasts seven years… we can’t respond on a 24/7 news cycle basis to ideas that come up.”
Skea noted there was a chapter on policies and institutions in the 2014 report which covered lobbying from industry and from green campaigners and their influence on climate policy. “The volume of climate change mitigation literature that comes out every year is huge and I would say that the number of references to articles which talk about lobbying of all kinds – including industrial lobbying and whether people had known about the science – it is in there and about the right proportions”, he said.
“We’re not an advocacy organisation, we’re a scientific organisation, it’s not our job to take up arms and take one side or another,” he said. “That’s the strength of the IPCC. If it oversteps its role, it will weaken its influence” and “undermine the scientific statements it makes”.
A broader, long-running criticism of the IPCC is that it downplays subjects like political science, development studies, sociology and anthropology and over-relies on economists and the people who put together ‘integrated assessment models’ (IAMs), which attempt to answer big questions like how the world can keep to 1.5C of global warming.
Paterson said the IPCC is “largely dominated by large-scale modellers or economists and the representation of other sorts of social scientists’ expertise is very thin”. A report he co-authored on the social make-up of that IPCC working group found that nearly half the authors were engineers or economists but just 15% were from social sciences other than economics. This dominance was sharper among the more powerful authors: Of the 35 Contributing Lead Authors, 20 were economists or engineers; there was one each from political science, geography and law; and none were from the humanities.
Wim Carton, a lecturer in the political economy of climate change mitigation at Lund University, said that the IPCC (and scientific research in general) has been caught up in “adulation” of IAMs and this has led to “narrow techno-economic conceptualisations of future mitigation pathways”.
Skea said that there has been lots of material on political science and international relations and even “quite a bit” on moral philosophy. He told Climate Home: “It’s not the case that IPCC is only economics and modelling. Frankly, a lot of that catches attention because these macro numbers are eye-catching. There’s a big difference in the emphasis in [media] coverage of IPCC reports and the balance of materials when you go into the reports themselves.”
According to Skea’s calculations, the big models make up only 6% of the report contents, about a quarter of the summary and the majority of the press coverage. “But there’s an awful lot of bread-and-butter material in IPCC reports which is just about how you get on with it,” he added. “It’s not sexy material but it’s just as important because that’s what needs to be done to mitigate climate change.”
While saying their dominance had been amplified by the media, Skea defended the usefulness of IAMs. “Our audience are governments. Their big question is how you connect all this human activity with actual impacts on the climate. It’s very difficult to make that leap without actually modelling it. You can’t do it with lots of little micro-studies. You need models and you need scenarios to think your way through that connection.”
The IPCC has also been accused of placing too much faith in negative emissions technologies and geo-engineering. Carton calls these technologies ‘carbon unicorns’ because he says they “do not exist at any meaningful scale” and probably never will.
In a recent book chapter, Carton argues: “If one is to believe recent IPCC reports, then gone are the days when the world could resolve the climate crisis merely by reducing emissions. Avoiding global warming in excess of 2°C/1.5°C now also involves a rather more interventionist enterprise: to remove vast amounts of carbon dioxide from the atmosphere, amounts that only increase the longer emissions refuse to fall.”
When asked about carbon capture technologies, Skea said that in terms of deployment, “they haven’t moved on very much” since the last big IPCC report in 2014. He added that carbon capture and storage and bio-energy are “all things that have been done commercially somewhere in the world.”
“What has never been done”, he said, “is to connect the different parts of the system together and run them over all. That’s led many people looking at the literature to conclude that the main barriers to the adoption of some technologies are the lack of policy incentives and the lack of working out good business models to put what would be complex supply chains together – rather than anything that’s standing in the way technically.”
The next set of three IPCC assessment reports was originally due to be published in 2021, but work was delayed by the coronavirus pandemic. Governments and experts will have from 18 January to 14 March to read and comment on the draft for WG3. Dates for a final government review have yet to be set.
Introduction:
Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.
Does big data have the answers? Maybe some, but not all, says Mark Graham
In 2008, Chris Anderson, then editor of Wired, wrote a provocative piece titled The End of Theory. Anderson was referring to the ways that computers, algorithms, and big data can potentially generate more insightful, useful, accurate, or true results than specialists or domain experts who traditionally craft carefully targeted hypotheses and research strategies.
This revolutionary notion has now entered not just the popular imagination, but also the research practices of corporations, states, journalists and academics. The idea is that the data shadows and information trails of people, machines, commodities and even nature can reveal secrets that we now have the power and prowess to uncover.
In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.
It is quite likely that you yourself have been the unwitting subject of a big data experiment carried out by Google, Facebook and many other large Web platforms. Google, for instance, has been able to collect extraordinary insights into what specific colours, layouts, rankings, and designs make people more efficient searchers. They do this by slightly tweaking their results and website for a few million searches at a time and then examining the often subtle ways in which people react.
Most large retailers similarly analyse enormous quantities of sales data (linked to you by credit card numbers and loyalty cards) in order to make uncanny predictions about your future behaviours. In a now famous case, the American retailer Target upset a Minneapolis man by knowing more about his teenage daughter’s sex life than he did: it predicted her pregnancy by monitoring her shopping patterns and comparing them to an enormous database detailing billions of dollars of sales.
More significantly, national intelligence agencies are mining vast quantities of non-public Internet data to look for weak signals that might indicate planned threats or attacks.
There can be no denying the significant power and potential of big data. And the huge resources being invested in both the public and private sectors to study it are a testament to this.
However, crucially important caveats are needed when using such datasets: caveats that, worryingly, seem to be frequently overlooked.
The raw informational material for big data projects is often derived from large user-generated or social media platforms (e.g. Twitter or Wikipedia). Yet, in all such cases we are necessarily only relying on information generated by an incredibly biased or skewed user-base.
Gender, geography, race, income, and a range of other social and economic factors all play a role in how information is produced and reproduced. People from different places and different backgrounds tend to produce different sorts of information. And so we risk ignoring a lot of important nuance if relying on big data as a social/economic/political mirror.
We can of course account for such bias by segmenting our data. Take the case of using Twitter to gain insights into last summer’s London riots. About a third of all UK Internet users have a Twitter profile; a subset of that group are the active tweeters who produce the bulk of content; and then a tiny subset of that group (about 1%) geocode their tweets (essential information if you want to know where your information is coming from).
Despite the fact that we have a database of tens of millions of data points, we are necessarily working with subsets of subsets of subsets. Big data no longer seems so big. Such data thus serves to amplify the information produced by a small minority (a point repeatedly made by UCL’s Muki Haklay), and skew, or even render invisible, ideas, trends, people, and patterns that aren’t mirrored or represented in the datasets that we work with.
Big data is undoubtedly useful for addressing and overcoming many important issues faced by society. But we need to ensure that we aren’t seduced by the promises of big data into rendering theory unnecessary.
We may one day get to the point where sufficient quantities of big data can be harvested to answer all of the social questions that most concern us. I doubt it though. There will always be digital divides; always be uneven data shadows; and always be biases in how information and technology are used and produced.
And so we shouldn’t forget the important role of specialists to contextualise and offer insights into what our data do, and maybe more importantly, don’t tell us.
Illustration: Marian Bantjes
“All models are wrong, but some are useful.”
So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all.
Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.
The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.
At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.
Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.
Speaking at the O’Reilly Emerging Technology Conference this past March, Peter Norvig, Google’s research director, offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.”
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
The big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.
Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.
But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the “beautiful story” phase of a discipline starved of data) is that we don’t know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.
Now biology is heading in the same direction. The models we were taught in school about “dominant” and “recessive” genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton’s laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility.
In short, the more we learn about biology, the further we find ourselves from a model that can explain it.
There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.
If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.
This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.
This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM’s Tivoli and open source versions of Google File System and MapReduce. Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.
Learning to use a “computer” of this scale may be challenging. But the opportunity is great: The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.
There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?