The perversity of denialism lies in swearing that one is saying the opposite of what one actually says. In this newspeak, denialism dresses up in the casual sneakers of anti-alarmism. Leandro Narloch's argument in this Folha on Tuesday (10) is tedious, indeed musty. Musty because, as Michael Mann recounts in "The New Climate War," it is nothing more than the same denialist rhetoric 2.0.
In essence, Narloch argues that there are climate-damaging activities that should be "celebrated and spread" because they make us "less vulnerable to nature." Narloch is scientifically wrong. And he subscribes to one of the most nefarious forms of denialism: he masks it, selling solutions that not only fail to mitigate the climate crisis or adapt societies to it, but have the opposite effect. Implode the Amazon in order to save it: that is the argument.
These and other denialist discourses had already been mapped in the Cambridge journal Global Sustainability in July 2020: they are not new. Rather than challenging 21st-century taboos, they sell untruths as if they were science. Narloch gets the concept of vulnerability wrong: from the wildfires in California to the floods in Germany, we are not protected from nature, because we are embedded in it. He also ignores the vast literature on vulnerability produced by the IPCC.
Narloch disregards the climate-science concept of feedback loops: the climate crisis sets off a series of triggers of incalculable dimension, a chain reaction never seen before. Destroying the climate will not protect us from the climate, because it is the absence of a drastic energy transition that has deepened the climate crisis. Investing in the opposite is inefficient.
If the IPCC report turned on the red warning light, it is not with disinformation that journalism will contribute to the issue. Pluralism is a river in which ideas move within the banks of truth and science. Do not complain when the river runs dry, imploding the banks that journalism should have protected.
In your opinion, what has happened over the past hundred years to the total number of deaths caused by hurricanes, floods, droughts, heat waves, and other climate disasters? Please choose one of these alternatives:
a) It increased by more than 800%
b) It increased by about 50%
c) It remained constant
d) It decreased by about 50%
e) It decreased by more than 80%
Since the world population grew from 1.8 billion in 1921 to 8 billion in 2021, it would be reasonable to bet on answers B or C, on the grounds that more people would mean more victims. Many readers probably chose the first option, given the frightening news from this week's IPCC report.
The correct alternative, however, is the last one. Deaths from natural disasters fell 87% from the 1920s to the 2010s, according to data compiled by Our World in Data.
They went from 540,000 per year to 68,000. The rate relative to population peaked at 63 deaths per 100,000 inhabitants in 1921 and at 176 in 1931. Today it stands at 0.15.
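The cited decline follows directly from the two annual figures; a quick sanity check, using only the numbers quoted in the column:

```python
# Sanity check of the figures quoted above (annual deaths from natural
# disasters, per Our World in Data as cited in the column).
deaths_1920s = 540_000  # average deaths per year, 1920s
deaths_2010s = 68_000   # average deaths per year, 2010s

decline = 1 - deaths_2010s / deaths_1920s
print(f"decline: {decline:.0%}")  # 87%

# Implied death toll at the 1921 peak rate of 63 per 100,000 inhabitants,
# given a world population of 1.8 billion:
peak_toll = 63 / 100_000 * 1_800_000_000
print(f"implied 1921 toll: {peak_toll:,.0f}")  # about 1.1 million
```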
These numbers lead to two interesting paradoxes about the relationship between humankind and the climate. The first recalls Spencer's Paradox, a reference to Herbert Spencer, for whom "the degree of public concern about a social problem or phenomenon varies inversely with its incidence."
Just as the English became aware of poverty precisely when it was beginning to decline, during the Industrial Revolution, humanity is terrified of the climate's misfortunes precisely after having learned to survive them.
The second paradox: at the very time that we pump a great deal (a very great deal) of carbon into the atmosphere and cause a serious greenhouse problem, we have also become less vulnerable to nature. In fact, protecting ourselves from the climate was one of the main reasons we polluted so much.
Take construction. Producing cement consists, roughly speaking, of burning limestone and releasing carbon dioxide.
If the cement industry were a country, it would be the world's third-largest emitter of greenhouse gases. But this polluting industry allowed people to leave wattle-and-daub or wooden houses and sleep sheltered in safer structures.
Hunger caused by drought, the leading cause of death from natural disasters in the 1920s, was solved by the invention of chemical fertilizers, by irrigation systems, and by the construction of dams and sanitation networks.
All these activities caused global warming, yet they remain great human achievements, ones that deserve to be celebrated and spread among the poor who still live at risk of dying in hurricanes, droughts, or floods.
Will the historic decline in deaths from natural disasters reverse in the coming years, fulfilling the apocalyptic prophecies of Greta Thunberg, for whom "billions of people will die if we do not take urgent action"?
The climate activist Michael Shellenberger, author of the brilliant "Apocalypse Never," to be released in Brazil this month by the publisher LVM, thinks not.
I intend to say more about Shellenberger's book in future columns, but here is a preview of one of its arguments: environmental alarmism underestimates the human capacity to adapt and to solve problems.
"The Netherlands, for example, became a rich nation despite having a third of its land below sea level, including areas that are no less than seven meters below the sea," he says.
The fight against global warming does not need activists obsessed with the apocalypse (who generally dismiss obvious solutions, such as nuclear power). It needs technology, innovators, people who give humanity more comfort and safety while interfering with nature less and less.
Google, Nespresso, Amazon, and Magalu. In the so-called attention economy, competition today increasingly takes place between ecosystems, usually captained by one large company and sheltering several organizations in a network of dependence and complementarity.
The winner is whoever satisfies more consumer needs within the same system; in the jargon, whoever offers a superior value proposition.
The idea itself is not all that new. The boost came with the digital economy, but ecosystems can be identified in the most varied contexts, from the worlds of soccer and crime to the social systems of education and health. That includes the conglomerate of organizations devoted to fighting the pandemic, which takes in private-sector actors (as in the recent purchase of intubation kits) and which should have been properly captained by the federal government.
But here we are, heading toward half a million dead. Bolsonaro could have come out of the whole affair a hero, like Bibi in Israel, but, living by bunker logic, he chose to throw sand in those gears from the start, while Brazil visibly regresses institutionally.
Curiously, this has not been enough to erode the support the president retains in Brazil's conservative quarters, which have rationalized, without much difficulty, the sea of sludge produced by covid.
Viewing Bolsonarismo as an ecosystem, rather than merely a social movement backed by a digital army, helps in understanding the phenomenon. First because, as we know, people's attention has become hyperfragmented, and the world is not easy to make sense of these days.
Political-social ecosystems gain an advantage when they satisfy a basic human need: the comfort of great certainties. A good, solid certainty is worth an irresistible barbiturate, Nelson Rodrigues used to say. In a country with low educational attainment, those certainties can afford to tap-dance on the face of reality.
Bolsonarismo also hands its followers, on a platter, an identity painted in moral colors. Again, nothing new here; just recall nearby examples such as Chavismo and Lulopetismo. In other words, the follower feels superior and gains a tribe to call his own.
That is the current value proposition of the ecosystem built around the president. It is no small thing, even though the ensemble had more force back when it brandished anti-corruption discourse and liberal sweet talk.
Around this value proposition, various segments cluster. There is what a report in El País called the tupiniquim QAnon: people producing fake news and deploying bots to sway the discourse on social networks.
There is the "old-school" business segment, loggers in the Amazon, for example, not to mention the large companies that, like the Centrão, are almost always available for an ovation, no matter what.
There are the politicians, the niche supporters (such as sport shooters), the producers of culture-war content, the media channels, and (I presume) part of the military and the police. And if the whole tangerine lost the Lava Jato crowd, it was handed a juicy new segment that has been crucial to its resilience: the chloroquiner doctors and influencers.
Each of these segments has resources and competencies that it deploys for the cause, for example the captive audience of a radio station, or the otherworldly credibility Brazilians grant to doctors, even doctors who are laypeople in evidence-based medicine.
Each performs different but complementary activities, reinforcing the value proposition (remember: great certainties and a superior moral identity). The list is long and includes organizing protests, airing opinion programs on the radio, and the business gatherings that polish the government's legitimacy with the gel of crony capitalism.
Critically, each segment appropriates a share of the value generated by the whole. Politicians appropriate electoral capital. Broadcasters get exclusives with the president, and ratings. Chloroquiner doctors gain floods of patients. Influencers and content manipulators gain followers or, as the fake news CPMI suspects, jobs in government offices. Business associations keep their channels to Brasília open. The dead are just an inconvenient detail in the landscape.
My sense is that the 2022 contest will play out largely at this amplified level. Competitors need to start standing up their own ecosystems now, preferably around values that are more rational and less divisive. It will not be easy.
The U.S. National Academy of Sciences has published a new report ("Reflecting Sunlight") on the topic of geoengineering (that is, the deliberate manipulation of the global Earth environment in an effort to offset the effects of climate change caused by human carbon pollution). While I am, in full disclosure, a member of the Academy, I offer the following comments in an entirely independent capacity:
Let me start by congratulating the authors on their comprehensive assessment of the science. It is solid, as we would expect, since the author team and reviewers have the relevant expertise well covered. The science underlying geoengineering is the true remit of the study. Chris Field, the lead author, was duly qualified to lead the effort, and did a good job of making sure that the intricacies of the science are covered, including the substantial uncertainties and caveats when it comes to the potential environmental impacts of some of the riskier geoengineering strategies (e.g., stratospheric sulphate aerosol injection to block out sunlight).
I like the fact that there is a discussion of the importance of labels and terminology and how they can shape public perception. For example, the oft-used term "solar radiation management" is not favored by the report authors, as it can be misleading (we don't have our hand on a dial that controls solar output). On the other hand, I think the term they did choose, "solar geoengineering," is still potentially problematic, because it still implies we are directly modifying solar output, and that is not the case. We are talking about messing with Earth's atmospheric chemistry; we are not dialing down the sun, even though many of the modeling experiments assume that is what we are doing. It's a bit of a bait and switch. Even the title of the report, "Reflecting Sunlight," falls victim to this biased framing.
“They don’t actually put aerosols in the atmosphere. They turn down the Sun to mimic geoengineering. You might think that is relatively unimportant . . . [but] controlling the Sun is effectively a perfect knob. We know almost precisely how a reduction in solar flux will project onto the energy balance of a planet. Aerosol-climate interactions are much more complex.”
I have a deeper and more substantive concern though, and it really is about the entire framing of the report. A report like this is as much about the policy message it conveys as it is about the scientific assessment, for it will be used immediately by policy advocates. And here I’m honestly troubled at the fodder it provides for mis-framing of the risks.
I recognize that the authors are dealing with a contentious and still much-debated topic, and it’s a challenge to represent the full range of views within the community, but the opening of the report itself, in my view, really puts a thumb on the scales. It falls victim to the moral hazard that I warn about in “The New Climate War” when it states, as justification for potentially considering implementing these geoengineering schemes:
But despite overwhelming evidence that the climate crisis is real and pressing, emissions of greenhouse gases continue to increase, with global emissions of fossil carbon dioxide rising 10.8 percent from 2010 through 2019. The total for 2020 is on track to decrease in response to decreased economic activity related to the COVID-19 pandemic. The pandemic is thus providing frustrating confirmation of the fact that the world has made little progress in separating economic activity from carbon dioxide emissions.
First of all, the discussion of carbon emissions reductions there is misleading. Emissions flattened in the years before the pandemic, and the International Energy Agency (IEA) specifically attributed that flattening to a decrease in global carbon emissions in the power generation sector. Those reductions continue, and contributed at least partly to the 7% decrease in global emissions last year. We will certainly need policy interventions favoring further decarbonization to maintain that level of decrease year after year, but if we can do that, we remain on a path to limiting warming below dangerous levels (a decent chance of staying under 1.5C and a very good chance of staying under 2C) without resorting to very risky geoengineering schemes. It is a matter of political willpower, not technology; we already have the technology necessary to decarbonize our economy.
The authors are basically arguing that because carbon reductions haven’t been great enough (thanks to successful opposition by polluters and their advocates) we should consider geoengineering. That framing (unintentionally, I realize) provides precisely the crutch that polluters are looking for.
As I explain in the book:
A fundamental problem with geoengineering is that it presents what is known as a moral hazard, namely, a scenario in which one party (e.g., the fossil fuel industry) promotes actions that are risky for another party (e.g., the rest of us), but seemingly advantageous to itself. Geoengineering provides a potential crutch for beneficiaries of our continued dependence on fossil fuels. Why threaten our economy with draconian regulations on carbon when we have a cheap alternative? The two main problems with that argument are that (1) climate change poses a far greater threat to our economy than decarbonization, and (2) geoengineering is hardly cheap—it comes with great potential harm.
So, in short, this report is somewhat of a mixed bag. The scientific assessment and discussion is solid, and there is a discussion of uncertainties and caveats in the detailed report. But the spin in the opening falls victim to moral hazard and will provide fodder for geoengineering advocates to use in leveraging policy decision-making.
When the polio vaccine was declared safe and effective, the news was met with jubilant celebration. Church bells rang across the nation, and factories blew their whistles. “Polio routed!” newspaper headlines exclaimed. “An historic victory,” “monumental,” “sensational,” newscasters declared. People erupted with joy across the United States. Some danced in the streets; others wept. Kids were sent home from school to celebrate.
One might have expected the initial approval of the coronavirus vaccines to spark similar jubilation—especially after a brutal pandemic year. But that didn’t happen. Instead, the steady drumbeat of good news about the vaccines has been met with a chorus of relentless pessimism.
The problem is not that the good news isn’t being reported, or that we should throw caution to the wind just yet. It’s that neither the reporting nor the public-health messaging has reflected the truly amazing reality of these vaccines. There is nothing wrong with realism and caution, but effective communication requires a sense of proportion—distinguishing between due alarm and alarmism; warranted, measured caution and doombait; worst-case scenarios and claims of impending catastrophe. We need to be able to celebrate profoundly positive news while noting the work that still lies ahead. However, instead of balanced optimism since the launch of the vaccines, the public has been offered a lot of misguided fretting over new virus variants, subjected to misleading debates about the inferiority of certain vaccines, and presented with long lists of things vaccinated people still cannot do, while media outlets wonder whether the pandemic will ever end.
This pessimism is sapping people of energy to get through the winter, and the rest of this pandemic. Anti-vaccination groups and those opposing the current public-health measures have been vigorously amplifying the pessimistic messages—especially the idea that getting vaccinated doesn’t mean being able to do more—telling their audiences that there is no point in compliance, or in eventual vaccination, because it will not lead to any positive changes. They are using the moment and the messaging to deepen mistrust of public-health authorities, accusing them of moving the goalposts and implying that we’re being conned. Either the vaccines aren’t as good as claimed, they suggest, or the real goal of pandemic-safety measures is to control the public, not the virus.
Five key fallacies and pitfalls have affected public-health messaging, as well as media coverage, and have played an outsize role in derailing an effective pandemic response. These problems were deepened by the ways that we—the public—developed to cope with a dreadful situation under great uncertainty. And now, even as vaccines offer brilliant hope, and even though, at least in the United States, we no longer have to deal with the problem of a misinformer in chief, some officials and media outlets are repeating many of the same mistakes in handling the vaccine rollout.
The pandemic has given us an unwelcome societal stress test, revealing the cracks and weaknesses in our institutions and our systems. Some of these are common to many contemporary problems, including political dysfunction and the way our public sphere operates. Others are more particular, though not exclusive, to the current challenge—including a gap between how academic research operates and how the public understands that research, and the ways in which the psychology of coping with the pandemic have distorted our response to it.
Recognizing all these dynamics is important, not only for seeing us through this pandemic—yes, it is going to end—but also to understand how our society functions, and how it fails. We need to start shoring up our defenses, not just against future pandemics but against all the myriad challenges we face—political, environmental, societal, and technological. None of these problems is impossible to remedy, but first we have to acknowledge them and start working to fix them—and we’re running out of time.
The past 12 months were incredibly challenging for almost everyone. Public-health officials were fighting a devastating pandemic and, at least in this country, an administration hell-bent on undermining them. The World Health Organization was not structured or funded for independence or agility, but still worked hard to contain the disease. Many researchers and experts noted the absence of timely and trustworthy guidelines from authorities, and tried to fill the void by communicating their findings directly to the public on social media. Reporters tried to keep the public informed under time and knowledge constraints, which were made more severe by the worsening media landscape. And the rest of us were trying to survive as best we could, looking for guidance where we could, and sharing information when we could, but always under difficult, murky conditions.
Despite all these good intentions, much of the public-health messaging has been profoundly counterproductive. In five specific ways, the assumptions made by public officials, the choices made by traditional media, the way our digital public sphere operates, and communication patterns between academic communities and the public proved flawed.
One of the most important problems undermining the pandemic response has been the mistrust and paternalism that some public-health agencies and experts have exhibited toward the public. A key reason for this stance seems to be that some experts feared that people would respond to something that increased their safety—such as masks, rapid tests, or vaccines—by behaving recklessly. They worried that a heightened sense of safety would lead members of the public to take risks that would not just undermine any gains, but reverse them.
The theory that things that improve our safety might provide a false sense of security and lead to reckless behavior is attractive—it’s contrarian and clever, and fits the “here’s something surprising we smart folks thought about” mold that appeals to, well, people who think of themselves as smart. Unsurprisingly, such fears have greeted efforts to persuade the public to adopt almost every advance in safety, including seat belts, helmets, and condoms.
But time and again, the numbers tell a different story: even if safety improvements cause a few people to behave recklessly, the benefits overwhelm the ill effects. In any case, most people are already interested in staying safe from a dangerous pathogen. Further, even at the beginning of the pandemic, sociological theory predicted that wearing masks would be associated with increased adherence to other precautionary measures (people interested in staying safe are interested in staying safe), and empirical research quickly confirmed exactly that. Unfortunately, though, the theory of risk compensation and its implicit assumptions continue to haunt our approach, in part because there hasn't been a reckoning with the initial missteps.
Rules in Place of Mechanisms and Intuitions
Much of the public messaging focused on offering a series of clear rules to ordinary people, instead of explaining in detail the mechanisms of viral transmission for this pathogen. A focus on explaining transmission mechanisms, and updating our understanding over time, would have helped empower people to make informed calculations about risk in different settings. Instead, both the CDC and the WHO chose to offer fixed guidelines that lent a false sense of precision.
In the United States, the public was initially told that “close contact” meant coming within six feet of an infected individual, for 15 minutes or more. This messaging led to ridiculous gaming of the rules; some establishments moved people around at the 14th minute to avoid passing the threshold. It also led to situations in which people working indoors with others, but just outside the cutoff of six feet, felt that they could take their mask off. None of this made any practical sense. What happened at minute 16? Was seven feet okay? Faux precision isn’t more informative; it’s misleading.
All of this was complicated by the fact that key public-health agencies like the CDC and the WHO were late to acknowledge the importance of some key infection mechanisms, such as aerosol transmission. Even when they did so, the shift happened without a proportional change in the guidelines or the messaging—it was easy for the general public to miss its significance.
Frustrated by the lack of public communication from health authorities, I wrote an article last July on what we then knew about the transmission of this pathogen—including how it could be spread via aerosols that can float and accumulate, especially in poorly ventilated indoor spaces. To this day, I’m contacted by people who describe workplaces that are following the formal guidelines, but in ways that defy reason: They’ve installed plexiglass, but barred workers from opening their windows; they’ve mandated masks, but only when workers are within six feet of one another, while permitting them to be taken off indoors during breaks.
Perhaps worst of all, our messaging and guidelines elided the difference between outdoor and indoor spaces, where, given the importance of aerosol transmission, the same precautions should not apply. This is especially important because this pathogen is overdispersed: Much of the spread is driven by a few people infecting many others at once, while most people do not transmit the virus at all.
After I wrote an article explaining how overdispersion and super-spreading were driving the pandemic, I discovered that this mechanism had also been poorly explained. I was inundated by messages from people, including elected officials around the world, saying they had no idea that this was the case. None of it was secret—numerous academic papers and articles had been written about it—but it had not been integrated into our messaging or our guidelines despite its great importance.
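The overdispersion pattern described above can be made concrete with a toy simulation. This is a sketch, not an epidemiological model; the values of R0 and the dispersion parameter k are illustrative, in the range often quoted for SARS-CoV-2:

```python
import math
import random

random.seed(42)
R0, k = 2.5, 0.1   # illustrative reproduction number and dispersion parameter
N = 20_000         # number of index cases to simulate

def poisson(lam: float) -> int:
    """Sample Poisson(lam) via Knuth's method (adequate for small lam)."""
    threshold, count, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return count
        count += 1

# Negative binomial as a gamma-Poisson mixture: each case draws an individual
# infectiousness nu ~ Gamma(shape=k, scale=R0/k), then infects Poisson(nu) others.
secondary = [poisson(random.gammavariate(k, R0 / k)) for _ in range(N)]

mean = sum(secondary) / N
zero_share = sum(1 for s in secondary if s == 0) / N
top10_share = sum(sorted(secondary, reverse=True)[: N // 10]) / sum(secondary)

print(f"mean secondary cases: {mean:.2f}")                  # close to R0
print(f"cases who infect no one: {zero_share:.0%}")         # a large majority
print(f"infections caused by top 10%: {top10_share:.0%}")   # most of them
```

With a small k, most simulated cases infect nobody while the top decile accounts for the bulk of transmission, which is exactly the sense in which "a few people infect many others at once."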
Crucially, super-spreading isn’t equally distributed; poorly ventilated indoor spaces can facilitate the spread of the virus over longer distances, and in shorter periods of time, than the guidelines suggested, and help fuel the pandemic.
Outdoors? It’s the opposite.
There is a solid scientific reason for the fact that there are relatively few documented cases of transmission outdoors, even after a year of epidemiological work: the open air dilutes the virus very quickly, and the sun helps deactivate it, providing further protection. And super-spreading, the biggest driver of the pandemic, appears to be an exclusively indoor phenomenon. I've been tracking every report I can find for the past year, and have yet to find a confirmed super-spreading event that occurred solely outdoors. Such events might well have taken place, but if the risk were great enough to justify altering our lives, I would expect at least a few to have been documented by now.
And yet our guidelines do not reflect these differences, and our messaging has not helped people understand these facts so that they can make better choices. I published my first article pleading for parks to be kept open on April 7, 2020—but outdoor activities are still banned by some authorities today, a full year after this dreaded virus began to spread globally.
We’d have been much better off if we gave people a realistic intuition about this virus’s transmission mechanisms. Our public guidelines should have been more like Japan’s, which emphasize avoiding the three C’s—closed spaces, crowded places, and close contact—that are driving the pandemic.
Scolding and Shaming
Throughout the past year, traditional and social media have been caught up in a cycle of shaming—made worse by being so unscientific and misguided. How dare you go to the beach? newspapers have scolded us for months, despite lacking evidence that this posed any significant threat to public health. It wasn’t just talk: Many cities closed parks and outdoor recreational spaces, even as they kept open indoor dining and gyms. Just this month, UC Berkeley and the University of Massachusetts at Amherst both banned students from taking even solitary walks outdoors.
Even when authorities relax the rules a bit, they do not always follow through in a sensible manner. In the United Kingdom, after some locales finally started allowing children to play on playgrounds—something that was already way overdue—they quickly ruled that parents must not socialize while their kids have a normal moment. Why not? Who knows?
On social media, meanwhile, pictures of people outdoors without masks draw reprimands, insults, and confident predictions of super-spreading—and yet few note when super-spreading fails to follow.
While visible but low-risk activities attract the scolds, other, actual risks (in workplaces and crowded households, exacerbated by the lack of testing or paid sick leave) are not as easily accessible to photographers. Stefan Baral, an associate epidemiology professor at the Johns Hopkins Bloomberg School of Public Health, says that it's almost as if we've "designed a public-health response most suitable for higher-income" groups and the "Twitter generation" (stay home; have your groceries delivered; focus on the behaviors you can photograph and shame online) rather than provide the support and conditions necessary for more people to keep themselves safe.
And the viral videos shaming people for failing to take sensible precautions, such as wearing masks indoors, do not necessarily help. For one thing, fretting over the occasional person throwing a tantrum while going unmasked in a supermarket distorts the reality: Most of the public has been complying with mask wearing. Worse, shaming is often an ineffective way of getting people to change their behavior, and it entrenches polarization and discourages disclosure, making it harder to fight the virus. Instead, we should be emphasizing safer behavior and stressing how many people are doing their part, while encouraging others to do the same.
Amidst all the mistrust and the scolding, a crucial public-health concept fell by the wayside. Harm reduction is the recognition that if there is an unmet and yet crucial human need, we cannot simply wish it away; we need to advise people on how to do what they seek to do more safely. Risk can never be completely eliminated; life requires more than futile attempts to bring risk down to zero. Pretending we can will away complexities and trade-offs with absolutism is counterproductive. Consider abstinence-only education: Not letting teenagers know about ways to have safer sex results in more of them having sex with no protections.
As Julia Marcus, an epidemiologist and associate professor at Harvard Medical School, told me, "When officials assume that risks can be easily eliminated, they might neglect the other things that matter to people: staying fed and housed, being close to loved ones, or just enjoying their lives. Public health works best when it helps people find safer ways to get what they need and want."
Another problem with absolutism is the “abstinence violation” effect, Joshua Barocas, an assistant professor at the Boston University School of Medicine and Infectious Diseases, told me. When we set perfection as the only option, it can cause people who fall short of that standard in one small, particular way to decide that they’ve already failed, and might as well give up entirely. Most people who have attempted a diet or a new exercise regimen are familiar with this psychological state. The better approach is encouraging risk reduction and layered mitigation—emphasizing that every little bit helps—while also recognizing that a risk-free life is neither possible nor desirable.
Socializing is not a luxury: kids need to play with one another, and adults need to interact. "Your kids can play together outdoors, and outdoor time is the best chance to catch up with your neighbors" is not just a sensible message; it's a way to decrease transmission risks. Some kids will play and some adults will socialize no matter what the scolds say or public-health officials decree, and they'll do it indoors, out of sight of the scolding.
And if they don’t? Then kids will be deprived of an essential activity, and adults will be deprived of human companionship. Socializing is perhaps the most important predictor of health and longevity, after not smoking and perhaps exercise and a healthy diet. We need to help people socialize more safely, not encourage them to stop socializing entirely.
The Balance Between Knowledge and Action
Last but not least, the pandemic response has been distorted by a poor balance between knowledge, risk, certainty, and action.
Sometimes, public-health authorities insisted that we did not know enough to act, when the preponderance of evidence already justified precautionary action. Wearing masks, for example, posed few downsides, and held the prospect of mitigating the exponential threat we faced. The wait for certainty hampered our response to airborne transmission, even though there was almost no evidence for—and increasing evidence against—the importance of fomites, or objects that can carry infection. And yet, we emphasized the risk of surface transmission while refusing to properly address the risk of airborne transmission, despite increasing evidence. The difference lay not in the level of evidence and scientific support for either theory—which, if anything, quickly tilted in favor of airborne transmission, and not fomites, being crucial—but in the fact that fomite transmission had been a key part of the medical canon, and airborne transmission had not.
Sometimes, experts and the public discussion failed to emphasize that we were balancing risks, as in the recurring cycles of debate over lockdowns or school openings. We should have done more to acknowledge that there were no good options, only trade-offs between different downsides. As a result, instead of recognizing the difficulty of the situation, too many people accused those on the other side of being callous and uncaring.
And sometimes, the way that academics communicate clashed with how the public constructs knowledge. In academia, publishing is the coin of the realm, and it is often done through rejecting the null hypothesis—meaning that many papers do not seek to prove something conclusively, but instead, to reject the possibility that a variable has no relationship with the effect they are measuring (beyond chance). If that sounds convoluted, it is—there are historical reasons for this methodology and big arguments within academia about its merits, but for the moment, this remains standard practice.
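The rejection-of-the-null logic can be made concrete with a small sketch (nothing here comes from the article; the data are made up purely for illustration). A permutation test asks how often randomly relabeling the data would produce an effect at least as large as the one observed; if that almost never happens, we reject the hypothesis that the variable has no relationship with the effect:

```python
import random

def permutation_test(treatment, control, n_perm=10_000, seed=0):
    """One-sided permutation test: how often does a random relabeling
    of the pooled data produce a mean difference at least as large as
    the observed one? A small p-value lets us reject the null of "no
    relationship beyond chance" -- it does not prove the treatment works."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
        if diff >= observed:
            hits += 1
    # Add-one smoothing so the estimated p-value is never exactly zero.
    return (hits + 1) / (n_perm + 1)

# Hypothetical data with a large, obvious group difference:
treated = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
placebo = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3]
p = permutation_test(treated, placebo)
print(f"p = {p:.4f}")  # a small p-value: reject the null of no relationship
```

Note what the small p-value does and does not say: it makes “pure chance” an implausible explanation, but it proves nothing conclusively—which is exactly the distinction that tends to get lost when such results are translated for the public.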
At crucial points during the pandemic, though, this resulted in mistranslations and fueled misunderstandings, which were further muddled by differing stances toward prior scientific knowledge and theory. Yes, we faced a novel coronavirus, but we should have started by assuming that we could make some reasonable projections from prior knowledge, while looking out for anything that might prove different. That prior experience should have made us mindful of seasonality, the key role of overdispersion, and aerosol transmission. A keen eye for what was different from the past would have alerted us earlier to the importance of presymptomatic transmission.
Thus, on January 14, 2020, the WHO stated that there was “no clear evidence of human-to-human transmission.” It should have said, “There is increasing likelihood that human-to-human transmission is taking place, but we haven’t yet proven this, because we have no access to Wuhan, China.” (Cases were already popping up around the world at that point.) Acting as if there was human-to-human transmission during the early weeks of the pandemic would have been wise and preventive.
Later that spring, WHO officials stated that there was “currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection,” producing many articles laden with panic and despair. Instead, it should have said: “We expect the immune system to function against this virus, and to provide some immunity for some period of time, but it is still hard to know specifics because it is so early.”
Similarly, since the vaccines were announced, too many statements have emphasized that we don’t yet know if vaccines prevent transmission. Instead, public-health authorities should have said that we have many reasons to expect, and increasing amounts of data to suggest, that vaccines will blunt infectiousness, but that we’re waiting for additional data to be more precise about it. That’s been unfortunate, because while many, many things have gone wrong during this pandemic, the vaccines are one thing that has gone very, very right.
As late as April 2020, Anthony Fauci was slammed for being too optimistic for suggesting we might plausibly have vaccines in a year to 18 months. We had vaccines much, much sooner than that: The first two vaccine trials concluded a mere eight months after the WHO declared a pandemic in March 2020.
Moreover, they have delivered spectacular results. In June 2020, the FDA said a vaccine that was merely 50 percent efficacious in preventing symptomatic COVID-19 would receive emergency approval—that such a benefit would be sufficient to justify shipping it out immediately. Just a few months after that, the trials of the Moderna and Pfizer vaccines concluded by reporting not just a stunning 95 percent efficacy, but also a complete elimination of hospitalization or death among the vaccinated. Even severe disease was practically gone: The lone case classified as “severe” among 30,000 vaccinated individuals in the trials was so mild that the patient needed no medical care, and her case would not have been considered severe if her oxygen saturation had been a single percent higher.
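The efficacy figures quoted above follow from simple arithmetic: efficacy is one minus the ratio of attack rates in the vaccinated and placebo arms. A minimal sketch, using hypothetical round numbers rather than the actual trial case counts:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate among vaccinated / attack rate among placebo)."""
    attack_rate_vax = cases_vax / n_vax
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vax / attack_rate_placebo

# Illustrative counts (not the real trial data): 8 cases among 20,000
# vaccinated vs. 160 among 20,000 placebo recipients.
print(f"{vaccine_efficacy(8, 20_000, 160, 20_000):.2f}")  # prints 0.95
```

With these made-up counts, the attack-rate ratio is 0.05, i.e. 95 percent efficacy—which is why the FDA’s 50 percent bar was cleared so dramatically.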
These are exhilarating developments, because global, widespread, and rapid vaccination is our way out of this pandemic. Vaccines that drastically reduce hospitalizations and deaths, and that diminish even severe disease to a rare event, are the closest things we have had in this pandemic to a miracle—though of course they are the product of scientific research, creativity, and hard work. They are going to be the panacea and the endgame.
And yet, two months into an accelerating vaccination campaign in the United States, it would be hard to blame people if they missed the news that things are getting better.
Yes, there are new variants of the virus, which may eventually require booster shots, but at least so far, the existing vaccines are standing up to them well—very, very well. Manufacturers are already working on new vaccines or variant-focused booster versions, in case they prove necessary, and the authorizing agencies are ready for a quick turnaround if and when updates are needed. Reports from places that have vaccinated large numbers of individuals, and even trials in places where variants are widespread, are exceedingly encouraging, with dramatic reductions in cases and, crucially, hospitalizations and deaths among the vaccinated. Global equity and access to vaccines remain crucial concerns, but the supply is increasing.
Here in the United States, despite the rocky rollout and the need to smooth access and ensure equity, it’s become clear that toward the end of spring 2021, supply will be more than sufficient. It may sound hard to believe today, as many who are desperate for vaccinations await their turn, but in the near future, we may have to discuss what to do with excess doses.
So why isn’t this story more widely appreciated?
Part of the problem with the vaccines was the timing—the trials concluded immediately after the U.S. election, and their results got overshadowed in the weeks of political turmoil. The first, modest headline announcing the Pfizer-BioNTech results in The New York Times was a single column, “Vaccine Is Over 90% Effective, Pfizer’s Early Data Says,” below a banner headline spanning the page: “BIDEN CALLS FOR UNITED FRONT AS VIRUS RAGES.” That was both understandable—the nation was weary—and a loss for the public.
Just a few days later, Moderna reported a similar 94.5 percent efficacy. If anything, that provided even more cause for celebration, because it confirmed that the stunning numbers coming out of Pfizer weren’t a fluke. But, still amid the political turmoil, the Moderna report got a mere two columns on The New York Times’ front page with an equally modest headline: “Another Vaccine Appears to Work Against the Virus.”
So we didn’t get our initial vaccine jubilation.
But as soon as we began vaccinating people, articles started warning the newly vaccinated about all they could not do. “COVID-19 Vaccine Doesn’t Mean You Can Party Like It’s 1999,” one headline admonished. And the buzzkill has continued right up to the present. “You’re fully vaccinated against the coronavirus—now what? Don’t expect to shed your mask and get back to normal activities right away,” began a recent Associated Press story.
People might well want to party after being vaccinated. Those shots will expand what we can do, first in our private lives and among other vaccinated people, and then, gradually, in our public lives as well. But once again, the authorities and the media seem more worried about potentially reckless behavior among the vaccinated, and about telling them what not to do, than with providing nuanced guidance reflecting trade-offs, uncertainty, and a recognition that vaccination can change behavior. No guideline can cover every situation, but careful, accurate, and updated information can empower everyone.
Take the messaging and public conversation around transmission risks from vaccinated people. It is, of course, important to be alert to such considerations: Many vaccines are “leaky” in that they prevent disease or severe disease, but not infection and transmission. In fact, completely blocking all infection—what’s often called “sterilizing immunity”—is a difficult goal, and something even many highly effective vaccines don’t attain, but that doesn’t stop them from being extremely useful.
As Paul Sax, an infectious-disease doctor at Boston’s Brigham & Women’s Hospital, put it in early December, it would be enormously surprising “if these highly effective vaccines didn’t also make people less likely to transmit.” From multiple studies, we already knew that asymptomatic individuals—those who never developed COVID-19 despite being infected—were much less likely to transmit the virus. The vaccine trials were reporting 95 percent reductions in any form of symptomatic disease. In December, we learned that Moderna had swabbed some portion of trial participants to detect asymptomatic, silent infections, and found an almost two-thirds reduction even in such cases. The good news kept pouring in. Multiple studies found that, even in those few cases where breakthrough disease occurred in vaccinated people, their viral loads were lower—which correlates with lower rates of transmission. Data from vaccinated populations further confirmed what many experts expected all along: Of course these vaccines reduce transmission.
What went wrong? The same thing that’s going wrong right now with the reporting on whether vaccines will protect recipients against the new viral variants. Some outlets emphasize the worst or misinterpret the research. Some public-health officials are wary of encouraging the relaxation of any precautions. Some prominent experts on social media—even those with seemingly solid credentials—tend to respond to everything with alarm and sirens. So the message that got heard was that vaccines will not prevent transmission, or that they won’t work against new variants, or that we don’t know if they will. What the public needs to hear, though, is that based on existing data, we expect them to work fairly well—but we’ll learn more about precisely how effective they’ll be over time, and that tweaks may make them even better.
A year into the pandemic, we’re still repeating the same mistakes.
The top-down messaging is not the only problem. The scolding, the strictness, the inability to discuss trade-offs, and the accusations of not caring about people dying not only have an enthusiastic audience, but portions of the public engage in these behaviors themselves. Maybe that’s partly because proclaiming the importance of individual actions makes us feel as if we are in the driver’s seat, despite all the uncertainty.
Psychologists talk about the “locus of control”—the strength of belief in control over your own destiny. They distinguish between people with more of an internal-control orientation—who believe that they are the primary actors—and those with an external one, who believe that society, fate, and other factors beyond their control greatly influence what happens to us. This focus on individual control goes along with something called the “fundamental attribution error”—when bad things happen to other people, we’re more likely to believe that they are personally at fault, but when they happen to us, we are more likely to blame the situation and circumstances beyond our control.
An individualistic locus of control is forged in the U.S. mythos—that we are a nation of strivers and people who pull ourselves up by our bootstraps. An internal-control orientation isn’t necessarily negative; it can facilitate resilience, rather than fatalism, by shifting the focus to what we can do as individuals even as things fall apart around us. This orientation seems to be common among children who not only survive but sometimes thrive in terrible situations—they take charge and have a go at it, and with some luck, pull through. It is probably even more attractive to educated, well-off people who feel that they have succeeded through their own actions.
You can see the attraction of an individualized, internal locus of control in a pandemic, as a pathogen without a cure spreads globally, interrupts our lives, makes us sick, and could prove fatal.
There have been very few things we could do at an individual level to reduce our risk beyond wearing masks, distancing, and disinfecting. The desire to exercise personal control against an invisible, pervasive enemy is likely why we’ve continued to emphasize scrubbing and cleaning surfaces, in what’s appropriately called “hygiene theater,” long after it became clear that fomites were not a key driver of the pandemic. Obsessive cleaning gave us something to do, and we weren’t about to give it up, even if it turned out to be useless. No wonder there was so much focus on telling others to stay home—even though it’s not a choice available to those who cannot work remotely—and so much scolding of those who dared to socialize or enjoy a moment outdoors.
And perhaps it was too much to expect a nation unwilling to release its tight grip on the bottle of bleach to greet the arrival of vaccines—however spectacular—by imagining the day we might start to let go of our masks.
The focus on individual actions has had its upsides, but it has also led to a sizable portion of pandemic victims being erased from public conversation. If our own actions drive everything, then some other individuals must be to blame when things go wrong for them. And throughout this pandemic, the mantra many of us kept repeating—“Wear a mask, stay home; wear a mask, stay home”—hid many of the real victims.
Study after study, in country after country, confirms that this disease has disproportionately hit the poor and minority groups, along with the elderly, who are particularly vulnerable to severe disease. Even among the elderly, though, those who are wealthier and enjoy greater access to health care have fared better.
The poor and minority groups are dying in disproportionately large numbers for the same reasons that they suffer from many other diseases: a lifetime of disadvantages, lack of access to health care, inferior working conditions, unsafe housing, and limited financial resources.
Many lacked the option of staying home precisely because they were working hard to enable others to do what they could not, by packing boxes, delivering groceries, producing food. And even those who could stay home faced other problems born of inequality: Crowded housing is associated with higher rates of COVID-19 infection and worse outcomes, likely because many of the essential workers who live in such housing bring the virus home to elderly relatives.
Individual responsibility certainly had a large role to play in fighting the pandemic, but many victims had little choice in what happened to them. By disproportionately focusing on individual choices, not only did we hide the real problem, but we failed to do more to provide safe working and living conditions for everyone.
For example, there has been a lot of consternation about indoor dining, an activity I certainly wouldn’t recommend. But even takeout and delivery can impose a terrible cost: One study of California found that line cooks are the highest-risk occupation for dying of COVID-19. Unless we provide restaurants with funds so they can stay closed, or provide restaurant workers with high-filtration masks, better ventilation, paid sick leave, frequent rapid testing, and other protections so that they can safely work, getting food to go can simply shift the risk to the most vulnerable. Unsafe workplaces may be low on our agenda, but they do pose a real danger. Bill Hanage, associate professor of epidemiology at Harvard, pointed me to a paper he co-authored: Workplace-safety complaints to OSHA—which oversees occupational-safety regulations—during the pandemic were predictive of increases in deaths 16 days later.
New data highlight the terrible toll of inequality: Life expectancy has decreased dramatically over the past year, with Black people losing the most from this disease, followed by members of the Hispanic community. Minorities are also more likely to die of COVID-19 at a younger age. But when the new CDC director, Rochelle Walensky, noted this terrible statistic, she immediately followed up by urging people to “continue to use proven prevention steps to slow the spread—wear a well-fitting mask, stay 6 ft away from those you do not live with, avoid crowds and poorly ventilated places, and wash hands often.”
Those recommendations aren’t wrong, but they are incomplete. None of these individual acts do enough to protect those to whom such choices aren’t available—and the CDC has yet to issue sufficient guidelines for workplace ventilation or to make higher-filtration masks mandatory, or even available, for essential workers. Nor are these proscriptions paired frequently enough with prescriptions: Socialize outdoors, keep parks open, and let children play with one another outdoors.
Vaccines are the tool that will end the pandemic. The story of their rollout combines some of our strengths and our weaknesses, revealing the limitations of the way we think and evaluate evidence, provide guidelines, and absorb and react to an uncertain and difficult situation.
But also, after a weary year, maybe it’s hard for everyone—including scientists, journalists, and public-health officials—to imagine the end, to have hope. We adjust to new conditions fairly quickly, even terrible new conditions. During this pandemic, we’ve adjusted to things many of us never thought were possible. Billions of people have led dramatically smaller, circumscribed lives, and dealt with closed schools, the inability to see loved ones, the loss of jobs, the absence of communal activities, and the threat and reality of illness and death.
Hope nourishes us during the worst times, but it is also dangerous. It upsets the delicate balance of survival—where we stop hoping and focus on getting by—and opens us up to crushing disappointment if things don’t pan out. After a terrible year, many things are understandably making it harder for us to dare to hope. But, especially in the United States, everything looks better by the day. Tragically, at least 28 million Americans have been confirmed to have been infected, but the real number is certainly much higher. By one estimate, as many as 80 million have already been infected with COVID-19, and many of those people now have some level of immunity. Another 46 million people have already received at least one dose of a vaccine, and we’re vaccinating millions more each day as the supply constraints ease. The vaccines are poised to reduce or nearly eliminate the things we worry most about—severe disease, hospitalization, and death.
Not all our problems are solved. We need to get through the next few months, as we race to vaccinate against more transmissible variants. We need to do more to address equity in the United States—because it is the right thing to do, and because failing to vaccinate the highest-risk people will slow the population impact. We need to make sure that vaccines don’t remain inaccessible to poorer countries. We need to keep up our epidemiological surveillance so that if we do notice something that looks like it may threaten our progress, we can respond swiftly.
And the public behavior of the vaccinated cannot change overnight—even if they are at much lower risk, it’s not reasonable to expect a grocery store to try to verify who’s vaccinated, or to have two classes of people with different rules. For now, it’s courteous and prudent for everyone to obey the same guidelines in many public places. Still, vaccinated people can feel more confident in doing things they may have avoided, just in case—getting a haircut, taking a trip to see a loved one, browsing for nonessential purchases in a store.
But it is time to imagine a better future, not just because it’s drawing nearer but because that’s how we get through what remains and keep our guard up as necessary. It’s also realistic—reflecting the genuine increased safety for the vaccinated.
Public-health agencies should immediately start providing expanded information to vaccinated people so they can make informed decisions about private behavior. This is justified by the encouraging data, and a great way to get the word out on how wonderful these vaccines really are. The delay itself has great human costs, especially for those among the elderly who have been isolated for so long.
Public-health authorities should also be louder and more explicit about the next steps, giving us guidelines for when we can expect easing in rules for public behavior as well. We need the exit strategy spelled out—but with graduated, targeted measures rather than a one-size-fits-all message. We need to let people know that getting a vaccine will almost immediately change their lives for the better, and why, and also when and how increased vaccination will change more than their individual risks and opportunities, and see us out of this pandemic.
We should encourage people to dream about the end of this pandemic by talking about it more, and more concretely: the numbers, hows, and whys. Offering clear guidance on how this will end can help strengthen people’s resolve to endure whatever is necessary for the moment—even if they are still unvaccinated—by building warranted and realistic anticipation of the pandemic’s end.
Hope will get us through this. And one day soon, you’ll be able to hop off the subway on your way to a concert, pick up a newspaper, and find the triumphant headline: “COVID Routed!”
Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.
Once viewed with suspicion by the scientific community, methods of artificial intervention in the environment aimed at slowing the devastating effects of global warming are now being considered as last-resort tools (since initiatives to reduce greenhouse-gas emissions depend directly on collective action and take decades to yield any benefit). According to some researchers in the field—who have been attracting investment and a great deal of attention—we may not have that much time.
Most of these methods belong to a field also known as solar geoengineering and rely on the controlled release of particles into the atmosphere, which block part of the energy our planet receives and reflect it back into space, producing a cooling effect similar to that generated by volcanic eruptions.
Although such measures do nothing about pollution itself, scientists argue that, in the face of increasingly violent storms, fire tornadoes, floods, and other natural disasters, they could be worthwhile while more effective solutions are developed.
Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia Law School and editor of a book on the technology and its legal implications, summed up the situation in an interview with The New York Times: “We are facing an existential threat. That is why we need to examine all the options.”
“I like to compare geoengineering to chemotherapy for the planet: if everything else is failing, all that is left is to try it,” he argued.
Natural disasters caused by global warming make such interventions urgent, researchers say. Source: Unsplash
Double standards
Among the most prominent efforts is one led by a nongovernmental organization called SilverLining, which awarded US$3 million to several universities and other institutions to pursue answers to practical questions. One example is finding the ideal altitude at which to apply aerosols, and how to inject the most appropriate amount, while tracking the effects on the world’s food-supply chain.
Chris Sacca, co-founder of Lowercarbon Capital, an investment group that is one of SilverLining’s funders, put it in alarming terms: “Decarbonization is necessary, but it will take 20 years or more to happen. If we don’t explore climate interventions such as solar reflection now, we will condemn countless lives, species, and ecosystems to heat.”
Another recipient of substantial funding was the National Oceanic and Atmospheric Administration, which received US$4 million from the US Congress precisely to develop technologies of this kind, as well as to monitor the covert use of such techniques by other countries.
Douglas MacMartin, a researcher in mechanical and aerospace engineering at Cornell University, said that “humanity’s power to cool things down is certain; what is not clear is what comes next.”
The planet can be cooled artificially; what comes afterward, no one knows. Source: Unsplash
There is a way
To clarify the possible consequences of interventions of this magnitude, MacMartin will develop models of the specific climate effects of injecting aerosols into the atmosphere above different parts of the globe and at different altitudes. “Depending on where you put [the substance], you will have different effects on the monsoons in Asia and on Arctic sea ice,” he noted.
The National Center for Atmospheric Research in Boulder, Colorado, also funded by SilverLining, believes it has the ideal system for this work—considered the most sophisticated in the world. With it, hundreds of simulations will be run so that specialists can search for what they call the sweet spot: the amount of artificial cooling that can reduce extreme weather events without causing broader changes in regional precipitation patterns or similar impacts.
“Is there a way, at least in our model world, to see whether we can achieve one without triggering too much of the other?” asked Jean-François Lamarque, director of the institution’s Climate and Global Dynamics Laboratory. There is no answer to that question yet, but sustainable alternatives are being studied by Australian researchers, who would spray seawater to make clouds more reflective—an approach whose tests have shown promising results.
If so, perhaps the losses of reef corals we have been witnessing will one day come to an end. As for the rest, well, only time will tell.
Science has taken center stage during the COVID-19 pandemic. Early on, as SARS-CoV-2 started spreading around the globe, many researchers pivoted to focus on studying the virus. At the same time, some scientists and science advisors—experts responsible for providing scientific information to policymakers—gained celebrity status as they calmly and cautiously updated the public on the rapidly evolving situation and lent their expertise to help governments make critical decisions, such as those relating to lockdowns and other transmission-slowing measures.
“Academia, in the case of COVID, has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible,” says Chris Tyler, the director of research and policy in University College London’s Department of Science, Technology, Engineering and Public Policy (STEaPP).
But the pace at which COVID-related science has been conducted and disseminated during the pandemic has also revealed the challenges associated with translating fast-accumulating evidence for an audience not well versed in the process of science. As research findings are speedily posted to preprint servers, preliminary results have made headlines in major news outlets, sometimes without the appropriate dose of scrutiny.
Some politicians, such as Brazil’s President Jair Bolsonaro, have been quick to jump on premature findings, publicly touting the benefits of treatments such as hydroxychloroquine with minimal or no supporting evidence. Others have pointed to the flip-flopping of the current state of knowledge as a sign of scientists’ untrustworthiness or incompetence—as was seen, for example, in the backlash against Anthony Fauci, one of the US government’s top science advisors.
Some comments from world leaders have been even more concerning. “For me, the most shocking thing I saw,” Tyler says, “was Donald Trump suggesting the injection of disinfectant as a way of treating COVID—that was an eye-popping, mind-boggling moment.”
Still, Tyler notes that there are many countries in which the relationship between the scientific community and policymakers during the course of the pandemic has been “pretty impressive.” As an example, he points to Germany, where the government has both enlisted and heeded the advice of scientists across a range of disciplines, including epidemiology, virology, economics, public health, and the humanities.
Researchers will likely be assessing the response to the pandemic for years to come. In the meantime, for scientists interested in getting involved in policymaking, there are lessons to be learned, as well as some preliminary insights from the pandemic that may help to improve interactions between scientists and policymakers and thereby pave the way to better evidence-based policy.
Cultural divisions between scientists and policymakers
Even in the absence of a public-health emergency, there are several obstacles to the smooth implementation of scientific advice into policy. One is simply that scientists and policymakers are generally beholden to different incentive systems. “Classically, a scientist wants to understand something for the sake of understanding, because they have a passion toward that topic—so discovery is driven by the value of discovery,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Whereas the policymaker has a much more utilitarian approach. . . . They have to come up with interventions that produce the best outcomes for the most people.”
Scientists and policymakers are operating on considerably different timescales, too. “Normally, research programs take months and years, whereas policy decisions take weeks and months, sometimes days,” Tyler says. “This discrepancy makes it much more difficult to get scientifically generated knowledge into the policymaking process.” Tyler adds that the two groups deal with uncertainty in very different ways: academics are comfortable with it, as measuring uncertainty is part of the scientific process, whereas policymakers tend to view it as something that can cloud what a “right” answer might be.
This cultural mismatch has been particularly pronounced during the COVID-19 pandemic. Even as scientists work at breakneck speeds, many crucial questions about COVID-19—such as how long immunity to the virus lasts, and how much of a role children play in the spread of infection—remain unresolved, and policy decisions have had to be addressed with limited evidence, with advice changing as new research emerges.
“We have seen the messy side of science, [that] not all studies are equally well-done and that they build over time to contribute to the weight of knowledge,” says Karen Akerlof, a professor of environmental science and policy at George Mason University. “The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions.”
Academia has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible. —Chris Tyler, University College London
Widespread mask use, for example, was initially discouraged by many politicians and public health officials due to concerns about a shortage of supplies for healthcare workers and limited data on whether mask use by the general public would help reduce the spread of the virus. At the time, there were few mask-wearing laws outside of East Asia, where such practices were commonplace long before the COVID-19 pandemic began.
Gradually, however, as studies began to provide evidence to support the use of face coverings as a means of stemming transmission, scientists and public health officials started to recommend their use. This shift led local, state, and federal officials around the world to implement mandatory mask-wearing rules in certain public spaces. Some politicians, however, used this about-face in advice as a reason to criticize health experts.
“We’re dealing with evidence that is changing very rapidly,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “I think there’s a risk of people perceiving that rapid evolution as science [being] a bad process, which is worrisome.” On the other hand, the spotlight the pandemic has put on scientists provides opportunities to educate the general public and policymakers about the scientific process, Azad adds. It’s important to help them understand that “it’s good that things are changing, because it means we’re paying attention to the new evidence as it comes out.”
Bringing science and policy closer together
Despite these challenges, science and policy experts say that there are both short- and long-term ways to improve the relationship between the two communities and to help policymakers arrive at decisions that are more evidence-based.
Better tools, for one, could help close the gap. Earlier this year, Ruggeri brought together a group of people from a range of disciplines, including medicine, engineering, economics, and policy, to develop the Theoretical, Empirical, Applicable, Replicable, Impact (THEARI) rating system, a five-tiered framework for evaluating the robustness of scientific evidence in the context of policy decisions. The ratings range from “theoretical” (the lowest level, where a scientifically viable idea has been proposed but not tested) to “impact” (the highest level, in which a concept has been successfully tested, replicated, applied, and validated in the real world).
The team developed THEARI partly to establish a “common language” across scientific disciplines, which Ruggeri says would be particularly useful to policymakers evaluating evidence from a field they may know little about. Ruggeri hopes to see the THEARI framework—or something like it—adopted by policymakers and policy advisors, and even by journals and preprint servers. “I don’t necessarily think [THEARI] will be used right away,” he says. “It’d be great if it was, but we . . . [developed] it as kind of a starting point.”
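The tiered structure described above lends itself to a simple ordered type. The sketch below is purely illustrative: the five tier names follow the THEARI acronym as reported in the article, but the glosses for the middle tiers and the `meets_policy_bar` helper are assumptions, not part of the published framework.

```python
from enum import IntEnum

class THEARI(IntEnum):
    """Illustrative sketch of the five THEARI evidence tiers,
    ordered from weakest to strongest."""
    THEORETICAL = 1  # a scientifically viable idea, proposed but not yet tested
    EMPIRICAL = 2    # assumed gloss: supported by empirical data
    APPLICABLE = 3   # assumed gloss: shown to work in an applied setting
    REPLICABLE = 4   # assumed gloss: independently replicated
    IMPACT = 5       # tested, replicated, applied, and validated in the real world

def meets_policy_bar(evidence: THEARI,
                     minimum: THEARI = THEARI.REPLICABLE) -> bool:
    """Hypothetical helper: does a piece of evidence clear a chosen tier?"""
    return evidence >= minimum

print(meets_policy_bar(THEARI.IMPACT))     # True
print(meets_policy_bar(THEARI.EMPIRICAL))  # False
```

Because `IntEnum` members compare as integers, the ordering of tiers comes for free; a policymaker-facing tool could set the `minimum` bar per decision context.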
Other approaches to improve the communication between scientists and policymakers may require more resources and time. According to Akerlof, one method could include providing better incentives for both parties to engage with each other—by offering increased funding for academics who take part in this kind of activity, for instance—and boosting opportunities for such interactions to happen.
Akerlof points to the American Association for the Advancement of Science’s Science & Technology Policy Fellowships, which place scientists and engineers in various branches of the US government for a year, as an example of a way in which important ties between the two communities could be forged. “Many of those scientists either stay in government or continue to work in science policy in other organizations,” Akerlof says. “By understanding the language and culture of both the scientific and policy communities, they are able to bridge between them.”
In Canada, such a program was established in 2018, when the Canadian Science Policy Center and Mona Nemer, Canada’s Chief Science Advisor, held the country’s first “Science Meets Parliament” event. The 28 scientists in attendance, including Azad, spent two days learning about effective communication and the policymaking process, and interacting with senators and members of parliament. “It was eye opening for me because I didn’t know how parliamentarians really live and work,” Azad says. “We hope it’ll grow and involve more scientists and continue on an annual basis . . . and also happen at the provincial level.”
The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions. —Karen Akerlof, George Mason University
There may also be insights from scientist-policymaker exchanges in other domains that experts can apply to the current pandemic. Maria Carmen Lemos, a social scientist focused on climate policy at the University of Michigan, says that one way to make those interactions more productive is by closing something she calls the “usability gap.”
“The usability gap highlights the fact that one of the reasons that research fails to connect is because [scientists] only pay attention to the [science],” Lemos explains. “We are putting everything out there in papers, in policy briefs, in reports, but rarely do we actually systematically and intentionally try to understand who is on the other side” receiving this information, and what they will do with it.
The way to deal with this usability gap, according to Lemos, is for more scientists to consult the people who actually make, influence, and implement policy changes early on in the scientific process. Lemos and her team, for example, have engaged in this way with city officials, farmers, forest managers, tribal leaders, and others whose decision making would directly benefit from their work. “We help with organization and funding, and we also work with them very closely to produce climate information that is tailored for them, for the problems that they are trying to solve,” she adds.
Azad applied this kind of approach in a study that involves assessing the effects of the pandemic on a cohort of children that her team has been following from infancy, starting in 2010. When she and her colleagues were putting together the proposal for the COVID-19 project this year, they reached out to public health decision makers across the Canadian provinces to find out what information would be most useful. “We have made sure to embed those decision makers in the project from the very beginning to ensure we’re asking the right questions, getting the most useful information, and getting it back to them in a very quick turnaround manner,” Azad says.
There will also likely be lessons to take away from the pandemic in the years to come, notes Noam Obermeister, a PhD student studying science policy at the University of Cambridge. These include insights from scientific advisors about how providing guidance to policymakers during COVID-19 compared to pre-pandemic times, and how scientists’ prominent role during the pandemic has affected how they are viewed by the public; efforts to collect this sort of information are already underway.
“I don’t think scientists anticipated that much power and visibility, or that [they] would be in [public] saying science is complicated and uncertain,” Obermeister says. “I think what that does to the authority of science in the public eye is still to be determined.”
Talking Science to Policymakers
For academics who have never engaged with policymakers, the thought of making contact may be daunting. Researchers with experience of these interactions share their tips for success.
1. Do your homework. Policymakers usually have many different people vying for their time and attention. When you get a meeting, make sure you make the most of it. “Find out which issues related to your research are a priority for the policymaker and which decisions are on the horizon,” says Karen Akerlof, a professor of environmental science and policy at George Mason University.
2. Get to the point, but don’t oversimplify. “I find policymakers tend to know a lot about the topics they work on, and when they don’t, they know what to ask about,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Finding a good balance in the communication goes a long way.”
3. Keep in mind that policymakers’ expertise differs from that of scientists. “Park your ego at the door and treat policymakers and their staff with respect,” Akerlof says. “Recognize that the skills, knowledge, and culture that translate to success in policy may seem very different than those in academia.”
4. Be persistent. “Don’t be discouraged if you don’t get a response immediately, or if promising communications don’t pan out,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “Policymakers are busy and their attention shifts rapidly. Meetings get cancelled. It’s not personal. Keep trying.”
5. Remember that not all policymakers are politicians, and vice versa. Politicians are usually elected and are affiliated with a political party, and they may not always be directly involved in creating new policies. This is not the case for the vast majority of policymakers—most are career civil servants whose decisions impact the daily living of constituents, Ruggeri explains.
A grant to a New York nonprofit aimed at detecting and preventing future outbreaks of coronaviruses from bats has been canceled by the National Institutes of Health, Politico reports, apparently at the direction of President Donald Trump because the research involved the Wuhan Institute of Virology in China. The virology institute has become a focal point for the idea that SARS-CoV-2 escaped from the laboratory and caused the current COVID-19 pandemic, a scenario experts say is not supported by evidence. Instead, virologists The Scientist has spoken to say the virus most likely jumped from infected animals to humans.
The grant, first awarded in fiscal year 2014 and most recently renewed last year, went to EcoHealth Alliance, which describes itself as “a global environmental health nonprofit organization dedicated to protecting wildlife and public health from the emergence of disease.” The aims of the funded project included characterizing coronaviruses present in bat populations in southern China and conducting surveillance to detect spillover events of such viruses to people. The project has resulted in 20 publications, most recently a March report on zoonotic risk factors in rural southern China.
EcoHealth Alliance’s partners on the project include researchers at the Wuhan Institute of Virology, a BSL-4 facility that has for months been a focus of conspiracy theories that SARS-CoV-2 escaped or was released from a lab. On April 14, The Washington Post published a column highlighting State Department cables about concerns regarding safety at the institute. (Experts tell NPR that, even in light of the cables, accidental escape of the virus from a lab remains a far less likely scenario than a jump from animals.)
Then, in an April 17 White House coronavirus briefing, a reporter, whom Politico identifies as being from Newsmax, falsely stated in a question that “US intelligence is saying this week that the coronavirus likely came from a level 4 lab in Wuhan,” and that the NIH had awarded a $3.7 million grant to the Wuhan lab. “Why would the US give a grant like that to China?” she asked. “We will end that grant very quickly,” Trump said in his answer.
An NIH official then wrote to EcoHealth Alliance to inquire about money sent to “China-based participants in this work,” Politico reports, and the organization’s head, Peter Daszak, responded that a complete response would take time, but that “I can categorically state that no fund from [the grant] have been sent to the Wuhan Institute of Virology, nor has any contract been signed.” Days later, NIH notified EcoHealth Alliance that future funding for the project was canceled, and that it must immediately “stop spending the $369,819 remaining from its 2020 grant”—an unusual move generally reserved for cases of scientific misconduct or financial improprieties, according to Politico.
In a statement about the cancellation, EcoHealth Alliance says the terminated research “aimed to analyze the risk of coronavirus emergence and help in designing vaccines and drugs to protect us from COVID-19 and other coronavirus threats,” and that it addresses “all four strategic research priorities of the NIH/NIAID Strategic Plan for COVID-19 Research, released just this week.” The organization will, it says, “continue our fight against this and other emerging diseases.”
Congressional Committee tweets don’t usually get much attention. But when the House Committee on Science, Space, and Technology sent out a link to a Breitbart story claiming a “plunge” in global temperatures, people took notice. The takedowns flew in, from Slate and Bernie Sanders, from plenty of scientists, and most notably from the Weather Channel, which deemed Breitbart’s use of their meteorologist’s face worthy of a point-by-point debunking video.
There is nothing particularly noteworthy about Breitbart screwing up climate science, but the House Science Committee is among the most important scientific oversight bodies in the country. Since Texas Republican Lamar Smith took over its leadership in 2012, the Committee has spiraled down an increasingly anti-science rabbit hole: absurd hearings aimed at debunking consensus on global warming, outright witch hunts using the Committee’s subpoena power to intimidate scientists, and a Republican membership that includes some of the most anti-science lawmakers in the land.
The GOP’s shenanigans get the headlines, but what about the other side of the aisle? What is it like to be a member of Congress and sit on a science committee that doesn’t seem to understand science? What is it like to be an adult in a room full of toddlers? I asked some of the adults.
“I think it’s completely embarrassing,” said Marc Veasey, who represents Texas’s 33rd district, including parts of Dallas and Fort Worth. “You’re talking about something that 99.9 percent—if not 100 percent—of people in the legitimate science community says is a threat….To quote Breitbart over some of the most brilliant people in the world—and those are American scientists—and how they see climate change, I just think it’s a total embarrassment.”
Paul Tonko, who represents a chunk of upstate New York that includes Albany, has also called it embarrassing. “It is frustrating when you have the majority party of a committee pushing junk science and disproven myths to serve a political agenda,” he said. “It’s not just beneath the dignity of the Science Committee or Congress as a whole, it’s inherently dangerous. Science and research seek the truth—they don’t always fit so neatly with agendas.”
“I think it’s completely embarrassing.”
Suzanne Bonamici, of Oregon’s 1st District, also called it frustrating “to say the least” that the Committee “is spending time questioning climate researchers and ignoring the broad scientific consensus.” California Rep. Eric Swalwell called it the “Science” Committee in an email, and made sure I noted the air quotes. He said that in Obama’s first term, the Committee helped push forward on climate change and a green economy. “For the last four years, however, being on the Committee has meant defending the progress we’ve made.”
Frustration, embarrassment, a sense of Sisyphean hopelessness—this sounds like a grim gig. And Veasey also said that he doesn’t have much hope for a change in the Science Committee’s direction, because that change would have to come from the chairman. Smith has received hundreds of thousands of dollars in campaign support from the oil and gas industry over the years, and somehow finds himself in even greater climate change denial than ExxonMobil.
And of course, it isn’t just the leadership. The League of Conservation Voters maintains a scorecard of every legislator in Congress: for 2015, the most recent year available, the average of all the Democratic members on the science committee is 92.75 percent (with 100 being a perfect environment-friendly score). On the GOP side of the aisle, the average is just over three percent.
(I reached out to a smattering of GOP members of the Committee to get their take on its recent direction. None of them responded.)
Bill Foster, who represents a district including some suburbs of Chicago, is the only science PhD in all of Congress (“I very often feel lonely,” he said, before encouraging other scientists to run for office). “Since I made the transition from science into politics not so long ago, I’ve become very cognizant of the difference between scientific facts, and political facts,” he said. “Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.” Witness the 52 percent of Republicans who currently believe Trump won the popular vote, and you get the idea.
I’m not sure “climate change isn’t happening” has reached that “political fact” level, though Smith and his ilk have done their damnedest. Recent polls suggest most Americans do understand the issue, and more and more they believe the government should act aggressively to tackle it.
“Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.”
That those in charge of our government disagree so publicly and strongly now has scientists terrified. “This has a high profile,” Foster said, “because if there is any committee in Congress that should operate on the basis of scientific truth, it ought to be the Science, Space, and Technology committee—so when it goes off the rails, then people notice.”
The odds of the train jumping back on the rails over the next four years appear slim. Policies that came from the Obama White House, like the Clean Power Plan, are obviously on thin ice with a Trump administration, and without any sort of check on Smith and company it is hard to say just how pro-fossil fuel, anti-climate the committee could really get.
In the face of all that, what is a sane member of Congress to do? Elizabeth Esty, who represents Connecticut’s 5th district, was among several Committee members to note that in spite of the disagreements on climate, she has managed to work with GOP leadership on other scientific issues. Rep. Swalwell said he will try and focus on bits of common ground, like the jobs that come with an expanding green economy. Rep. Veasey said his best hope is that some strong conservative voices from outside of Congress might start to make themselves heard by the Party’s upper echelons on climate and related issues.
An ugly and dire scenario, then, but the Democrats all seem to carry at least a glimmer of hope. “It’s certainly frustrating and concerning but I’m an optimist,” Esty said. “I wouldn’t run for this job if I weren’t.”
Dave Levitan is a science journalist, and author of the book Not A Scientist: How politicians mistake, misrepresent, and utterly mangle science. Find him on Twitter and at his website.
After all, that’s what we learned from the bankruptcy filings of two other major U.S. coal companies, Arch Coal and Alpha Natural Resources. The companies’ lists of creditors accompanying their chapter 11 bankruptcy filings both cited known climate science deniers. So far, the bankruptcy cases have not revealed the details of these financial relationships. But there is now no doubt the coal companies contracted with these groups and individuals to either make a donation or pay for services.
Recent bankruptcy filings have revealed that Chris Horner, who regularly derides climate science on Fox News Channel, has financial ties to the coal industry.
This new evidence is important at a time when coal and oil and gas companies are under increased scrutiny about their ongoing climate science disinformation campaigns. ExxonMobil, for example, currently faces state and possibly federal investigations into whether the discrepancies between what the company knew about climate science and what it told their shareholders and the public amounted to fraud.
Of course, there’s no shortage of historical evidence of the coal industry’s track record of deceiving the public about global warming. In 1991, for example, coal trade associations formed a short-lived front group called the Information Council on the Environment that ran a national public relations campaign downplaying the known risks of climate change. All through the 1990s, coal trade groups also were members of the Global Climate Coalition, an alliance of companies and business groups that disputed the findings of the U.N. Intergovernmental Panel on Climate Change (IPCC) and, later on, helped scuttle the Kyoto Protocol climate treaty. And, more recently, the American Coalition for Clean Coal Electricity paid a lobbying firm to send forged letters to members of Congress from actual nonprofit groups, including the NAACP and the American Association of University Women, espousing fabricated opposition to a 2009 climate change bill.
But such coal company connections have been harder to pin down in the current era of so-called dark money. That’s what makes the latest disclosures so noteworthy: They indicate that coal industry disinformation campaigns have continued even as the scientific evidence that burning fossil fuels is driving climate change has only become stronger.
Revealing Creditor Lists
The creditor list for Alpha Natural Resources—which filed for bankruptcy last August—indicates that the company has been especially active in supporting the denier network. As first reported by The Intercept, Alpha—the fourth largest U.S. coal company—has financial ties with a half dozen denier organizations, some of which have direct links to billionaire brothers Charles and David Koch, owners of the coal, oil and gas conglomerate Koch Industries. The Koch-affiliated groups include Americans for Prosperity, the Institute for Energy Research and Freedom Partners Chamber of Commerce, a de facto Koch bank that disburses donations from anonymous, wealthy conservatives to groups that advocate rolling back public health, environmental and workplace protections.
Other Alpha creditors include the U.S. Chamber of Commerce, which questions the legitimacy of climate models; the Heartland Institute, which is probably best known for its billboard likening climate scientists to the serial killer Ted Kaczynski; and the American Legislative Exchange Council (ALEC), which convenes conferences for its state legislator members featuring speakers who distort climate science and disparage renewable energy. One of the speakers at a summer 2014 ALEC conference, for example, was Heartland Institute President Joe Bast, whose slide presentation falsely claimed: “There is no scientific consensus on the human role in climate change” and “The Intergovernmental Panel on Climate Change … is not a credible source of science or economics.”
The Alpha creditor list also includes at least two individuals with links to denier groups. Particularly noteworthy is Chris Horner, an attorney who is closely associated with a number of nonprofit denier groups, including ALEC, the Competitive Enterprise Institute (CEI), the Heartland Institute, the Energy & Environmental Legal Institute (E&E Legal), formerly the American Tradition Institute, and the Free Market Environmental Law Clinic, another Alpha creditor.
Arch Coal, the second largest U.S. coal company, listed ALEC and E&E Legal in its list of creditors when it filed for chapter 11 protection in January. Just last month, the Wall Street Journal reported that the company donated $10,000 to E&E Legal in 2014. E&E Legal’s executive director, Craig Richardson, told the Journal the contribution was for “general support.”
Chris Horner’s Coal Ties Disclosed
The exposure of Horner’s financial ties to coal companies is significant because he is a regular guest on Fox News Channel, which identifies him by his affiliation with CEI or E&E Legal but not by his connection to the coal industry.
Despite his lack of scientific expertise, Horner routinely critiques scientific findings, has called for spurious investigations of climate scientists affiliated with the IPCC and the National Aeronautics and Space Administration and has harassed scientists by filing intrusive open records requests with the universities where they work. As legal counsel for the Energy & Environmental Legal Institute and the Free Market Environmental Law Clinic—which work in tandem—Horner has targeted a number of leading climate scientists, including James Hansen and Katharine Hayhoe. Perhaps his most notorious lawsuit was against the University of Virginia to obtain emails, draft research papers, handwritten notes and other documents related to the work of Michael Mann, lead author of the famous “hockey stick” study demonstrating the link between increased fossil fuel use and rising global temperatures. The Virginia Supreme Court ultimately ruled in favor of the university and Mann, affirming the school’s right to protect the privacy of its researchers from overly broad open records requests.
According to the Wall Street Journal, Alpha paid Horner $18,600 before it declared bankruptcy. Meanwhile, the Free Market Environmental Law Clinic—an Alpha creditor—paid him $110,000 in 2014, $115,865 in 2013 and $60,449 in 2012, according to the clinic’s tax filings.
Besides Alpha and Arch Coal, Horner has ties to other coal companies. Last summer, he was a featured speaker at a private $7,500-a-person golf and fly-fishing retreat sponsored by Alpha, Arch Coal and four other coal companies: Alliance Resource Partners, Consol Energy, Drummond and United Coal. After the event—the 2015 annual Coal & Investment Leadership Forum—attendees received an email from the coal company CEOs praising Horner, according to the Center for Media and Democracy, a nonpartisan political watchdog group that first reported the connection between Arch Coal and E&E Legal. “As the ‘war on coal’ continues,” the email stated, “I trust that the commitment we have made to support Chris Horner’s work will eventually create a greater awareness of the illegal tactics being employed to pass laws that are intended to destroy our industry.”
Given the recent spate of bankruptcies, the companies’ commitment to Horner likely will create a greater awareness of something quite different: that the coal industry—along with the likes of ExxonMobil and Koch Industries—is still funding denier groups to spread disinformation about climate science and delay government action. It is time we held these companies accountable.
In the past week, Funceme updated its forecast for the rainy season, which runs through May in the region that includes Ceará. On a day of intense rain in the capital, it reaffirmed a probability of around 70% that rainfall will be below average.
That means severe drought. It is cause to demand action from public authorities and to commit to social mobilization in the face of an unfavorable scenario.
For the first time, the volume of the Castanhão reservoir, the main supplier of water to the Fortaleza Metropolitan Region, has fallen below 10%.
Yet the reaction, by and large, has been limited to skepticism toward Funceme's forecasts. There is no shortage of derisive comments, jokes, and irony, a kind of ingrained culture whenever the subject is the institution, which, beyond meteorology, also works on the environment and water resources.
I suppose this attitude can be attributed to forecasting imprecisions, which do occur, to the political use of information, as has happened in the past, or simply to ignorance. But it bothers me. Meteorology deals with complex global parameters: air and ocean temperatures, wind speed and direction, humidity, atmospheric pressure, phenomena such as El Niño. It has advanced considerably in the reliability of its forecasts, drawing on data from satellites, weather balloons, and a good deal more technological apparatus, all feeding complicated mathematical models that produce probabilities, not certainties.
Forecasts miss, here as everywhere else in the world. But they generate information of profound social, economic, scientific, and cultural impact, essential for public and private decision making. That is something no manager or community can do without, especially in a region like ours, vulnerable to climate variability and dependent on rain. We need a change of mentality regarding Funceme's work. I mean genuine respect for what is dear and fundamentally necessary to us.
For that matter, however unlikely, I hope nature defies the forecast and enough rain falls to guarantee a minimum of water security, productivity, and dignity to a Ceará that depends so heavily on the climate information Funceme produces.
Biosafety Law turns 10 while keeping pace with the latest scientific discoveries
Walter Colli – Instituto de Química, Universidade de São Paulo
Throughout 2015, a quiet biotechnology revolution took place in Brazil. That year, the National Technical Commission on Biosafety (CTNBio) reviewed and approved a record number of technologies applicable to agriculture, medicine, and energy production. The careful work of CTNBio's members judged 19 new transgenic products safe for human and animal health and for the environment, among them 13 plants, three vaccines, and three microorganisms or derivatives.
Prioritizing rigor in biosafety analyses and attentive to the need to produce food more sustainably, CTNBio last year approved varieties of soybean, corn, and cotton tolerant to herbicides with different modes of action. This will allow the seeds to develop their full potential and give Brazilian producers one more option for rotating technologies in weed management. Without this technological tool, farmers would remain hostage to the limitations imposed by invasive plants. Insect-resistance technologies provide similar benefits.
In health, the revolution concerns methods of fighting diseases endemic to tropical regions. Once again showing itself a partner to society, CTNBio assessed the biosafety of two recombinant dengue vaccines on an urgent basis and issued favorable opinions on them. Added to these efforts is the approval of the transgenic Aedes aegypti. The genetically modified mosquito, approved in 2014, has proven an ally in combating the insect that, besides being the dengue vector, is also associated with transmission of the Zika and Chikungunya viruses and of yellow fever.
Over the past 10 years, the advent of the new CTNBio under Law 11,105 of 2005 (the Biosafety Law) has led to the commercial approval of 82 Genetically Modified Organisms (GMOs): 52 plant events; 20 veterinary vaccines; 7 microorganisms; 1 Aedes aegypti mosquito; and 2 human vaccines against dengue. These commercial releases are the strongest proof that Brazil turns to innovation to find solutions to contemporary challenges.
It must be stressed, however, that matters unrelated to science also stood in the way of biotechnology's development in 2015, as in previous years. Anti-science protesters invaded laboratories and destroyed seven years of research on transgenic eucalyptus plants, and anti-GMO groups went so far as to disrupt CTNBio meetings, knocking down doors in violent actions. Numerous falsehoods were published in an attempt to cast doubt on the safety of transgenics and the contributions they have been making to society. The actions of these groups are worrying because, if their ideology prevails, both Brazilian scientific progress and GDP will be irreversibly harmed.
Today, our Biosafety Law is regarded internationally as a model of balance between rigorous technical analysis and the institutional predictability needed for investment. Global recognition, dialogue with society, and the legitimacy of its technical criteria show that these 10 years are only the beginning of a long history of development and innovation in Brazil.
Pandora’s box: how GM mosquitos could have caused Brazil’s microcephaly disaster (The Ecologist)
1st February 2016
Aedes Aegypti mosquito feeding on human blood. This is the species that transmits Zika, and that was genetically engineered by Oxitec using the piggyBac transposon. Photo: James Gathany via jentavery on Flickr (CC BY).
In Brazil’s microcephaly epidemic, one vital question remains unanswered: how did the Zika virus suddenly learn how to disrupt the development of human embryos? The answer may lie in a sequence of ‘jumping DNA’ used to engineer the virus’s mosquito vector – and released into the wild four years ago in the precise area of Brazil where the microcephaly crisis is most acute.
Since August 2015, a large number of babies in Northeast Brazil have been born with very small heads, a condition known as microcephaly, and with other serious malformations. 4,180 suspected cases have been reported.
Epidemiologists have found a convincing correlation between the incidence of the natal deformities and maternal infections with the Zika virus, first discovered in Uganda’s Zika Valley in 1947, which normally produces non-serious illness.
The correlation is evidenced by the geographical distribution of Zika infections and the wave of deformities. Zika virus has also been detected in the amniotic fluid and other tissues of the affected babies and their mothers.
This latter finding was recently reported by AS Oliveira Melo et al in a scientific paper published in the journal Ultrasound in Obstetrics & Gynecology, which noted evidence of intra-uterine infection. They also warn:
“As with other intrauterine infections, it is possible that the reported cases of microcephaly represent only the more severely affected children and that newborns with less severe disease, affecting not only the brain but also other organs, have not yet been diagnosed.”
The Brazilian Health Minister, Marcelo Castro, says he has “100% certainty” that there is a link between Zika and microcephaly. His view is supported by the medical community worldwide, including by the US Centers for Disease Control.
Oliveira Melo et al draw attention to a mystery that lies at the heart of the affair: “It is difficult to explain why there have been no fetal cases of Zika virus infection reported until now but this may be due to the underreporting of cases, possible early acquisition of immunity in endemic areas or due to the rarity of the disease until now.
“As genomic changes in the virus have been reported, the possibility of a new, more virulent, strain needs to be considered. Until more cases are diagnosed and histopathological proof is obtained, the possibility of other etiologies cannot be ruled out.”
And this is the key question: how – if indeed Zika really is the problem, as appears likely – did this relatively innocuous virus acquire the ability to produce these terrible malformations in unborn human babies?
Oxitec’s GM mosquitoes
An excellent article by Claire Bernish published last week on AntiMedia draws attention to an interesting aspect of the matter which has escaped mainstream media attention: the correlation between the incidence of Zika and the area of release of genetically modified Aedes aegypti mosquitos engineered for male sterility (see maps, above right).
The purpose of the release was to see if it controlled the population of the mosquitos, which are the vector of dengue fever, a potentially lethal disease. The same species also transmits the Zika virus.
The releases took place in 2011 and 2012 in the Itaberaba suburb of the city of Juazeiro, Bahia, Northeast Brazil, about 500 km west of the coastal city of Recife. The experiment was written up in July 2015 in the journal PLOS Neglected Tropical Diseases in a paper titled ‘Suppression of a Field Population of Aedes aegypti in Brazil by Sustained Release of Transgenic Male Mosquitoes’ by Danilo O. Carvalho et al.
An initial ‘rangefinder’ release of 30,000 GM mosquitos per week took place between 19th May and 29th June 2011, followed by a much larger release of 540,000 per week in early 2012, ending on 11th February.
At the end of it the scientists claimed “effective control of a wild population of Ae. aegypti by sustained releases of OX513A male Ae. aegypti. We diminished Ae. aegypti population by 95% (95% CI: 92.2%-97.5%) based on adult trap data and 78% (95% CI: 70.5%-84.8%) based on ovitrap indices compared to the adjacent no-release control area.”
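The headline figures are percent reductions in trap catches relative to the adjacent no-release control area. As a rough illustration only (the trap counts below are invented, not the study's raw data, and the paper's own interval methods may differ), the point estimate and a simple bootstrap confidence interval can be sketched like this:

```python
import random

def percent_suppression(release_counts, control_counts):
    """Percent reduction in mean adult-trap catch in the release area
    relative to the adjacent no-release control area."""
    mean_r = sum(release_counts) / len(release_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return 100.0 * (1.0 - mean_r / mean_c)

def bootstrap_ci(release_counts, control_counts, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the estimate."""
    stats = []
    for _ in range(n_boot):
        # Resample each area's trap counts with replacement
        r = random.choices(release_counts, k=len(release_counts))
        c = random.choices(control_counts, k=len(control_counts))
        stats.append(percent_suppression(r, c))
    stats.sort()
    return (stats[int(alpha / 2 * n_boot)],
            stats[int((1 - alpha / 2) * n_boot) - 1])

# Hypothetical weekly trap catches -- NOT the paper's raw data:
release = [3, 1, 2, 0, 1, 2]
control = [28, 35, 30, 41, 33, 38]
print(round(percent_suppression(release, control), 1))  # → 95.6
```

The point is only that “95% (95% CI: 92.2%–97.5%)” is a comparison of two trap series, with the interval expressing sampling uncertainty, not a census of every mosquito.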
So what’s to worry about?
The idea of the Oxitec mosquitoes is simple enough: the males produce non-viable offspring which all die. So the GM mosquitoes are ‘self-extinguishing’ and the altered genes cannot survive in the wild population. All very clever, and nothing to worry about!
The genetic engineering method employed by Oxitec allows the popular antibiotic tetracycline to be used to repress the lethality during breeding. But as a side-effect, the lethality is also reduced by the presence of tetracycline in the environment; and as Bernish points out, Brazil is among the world's biggest users of anti-microbials, including tetracycline, in its commercial farming sector:
“As a study by the American Society of Agronomy, et. al., explained, ‘It is estimated that approximately 75% of antibiotics are not absorbed by animals and are excreted in waste.’ One of the antibiotics (or antimicrobials) specifically named in that report for its environmental persistence is tetracycline.
In fact, as a confidential internal Oxitec document divulged in 2012, that survival rate could be as high as 15% – even with low levels of tetracycline present. ‘Even small amounts of tetracycline can repress’ the engineered lethality. Indeed, that 15% survival rate was described by Oxitec.”
She then quotes the leaked Oxitec paper: “After a lot of testing and comparing experimental design, it was found that [researchers] had used a cat food to feed the [OX513A] larvae and this cat food contained chicken. It is known that tetracycline is routinely used to prevent infections in chickens, especially in the cheap, mass produced, chicken used for animal food. The chicken is heat-treated before being used, but this does not remove all the tetracycline. This meant that a small amount of tetracycline was being added from the food to the larvae and repressing the [designed] lethal system.”
So in other words, there is every possibility for Oxitec’s modified genes to persist in wild populations of Aedes aegypti mosquitos, especially in the environmental presence of tetracycline which is widely present in sewage, septic tanks, contaminated water sources and farm runoff.
‘Promiscuous’ jumping genes
On the face of it, there is no obvious way in which the spread of Oxitec's GM mosquitos into the wild could have anything to do with Brazil's wave of microcephaly. Is there?
Actually, yes. The problem may arise from the use of the ‘transposon’ (a ‘jumping’ sequence of DNA used in the genetic engineering process to introduce the new genes into the target organism). There are several such DNA sequences in use, and one of the most popular is known as piggyBac.
As a 2001 review article by Dr Mae Wan Ho shows, piggyBac is notoriously active, inserting itself into genes way beyond its intended target: “These ‘promiscuous’ transposons have found special favour with genetic engineers, whose goal is to create ‘universal’ systems for transferring genes into any and every species on earth. Almost none of the geneticists has considered the hazards involved …
“It would seem obvious that integrated transposon vectors may easily jump out again, to another site in the same genome, or to the genome of unrelated species. There are already signs of that in the transposon, piggyBac, used in the GM bollworms to be released by the USDA this summer.
The piggyBac transposon was discovered in cell cultures of the moth Trichopulsia, the cabbage looper, where it caused high rates of mutations in the baculovirus infecting the cells by jumping into its genes … This transposon was later found to be active in a wide range of species, including the fruitfly Drosophila, the mosquito transmitting yellow fever, Aedes aegypti, the medfly, Ceratitis capitata, and the original host, the cabbage looper.
“The piggyBac vector gave high frequencies of transpositions, 37 times higher than mariner and nearly four times higher than Hirmar.”
In a later 2014 report Dr Mae Wan Ho returned to the theme with additional detail and fresh scientific evidence (please refer to her original article for references): “The piggyBac transposon was discovered in cell cultures of the moth Trichopulsia, the cabbage looper, where it caused high rates of mutations in the baculovirus infecting the cells by jumping into its genes …
“There is also evidence that the disabled piggyBac vector carrying the transgene, even when stripped down to the bare minimum of the border repeats, was nevertheless able to replicate and spread, because the transposase enzyme enabling the piggyBac inserts to move can be provided by transposons present in all genomes.
“The main reason initially for using transposons as vectors in insect control was precisely because they can spread the transgenes rapidly by ‘non-Mendelian’ means within a population, i.e., by replicating copies and jumping into genomes, thereby ‘driving’ the trait through the insect population. However, the scientists involved neglected the fact that the transposons could also jump into the genomes of the mammalian hosts including human beings …
“In spite of instability and resulting genotoxicity, the piggyBac transposon has been used extensively also in human gene therapy. Several human cell lines have been transformed, even primary human T cells using piggyBac. These findings leave us little doubt that the transposon-borne transgenes in the transgenic mosquito can transfer horizontally to human cells. The piggyBac transposon was found to induce genome-wide insertion mutations disrupting many gene functions.”
Has the GM nightmare finally come true?
So down to the key question: was Oxitec's GM Aedes aegypti male-sterile mosquito released in Juazeiro engineered with the piggyBac transposon? Yes, it was. And that creates a highly significant possibility: that Oxitec's release of its GM mosquitos led directly to the development of Brazil's microcephaly epidemic through the following mechanism:
1. Many of the millions of Oxitec GM mosquitos released in Juazeiro in 2011/2012 survive, assisted by, but not dependent on, the presence of tetracycline in the environment.
2. These mosquitos interbreed with the wild population and their novel genes become widespread.
3. The promiscuous piggyBac transposon now present in the local Aedes aegypti population takes the opportunity to jump into the Zika virus, probably on numerous occasions.
4. In the process certain mutated strains of Zika acquire a selective advantage, making them more virulent and giving them an enhanced ability to enter and disrupt human DNA.
5. One way in which this manifests is by disrupting a key stage in the development of human embryos in the womb, causing microcephaly and the other reported deformations. Note that as Oliveira Melo et al warn, there are almost certainly other manifestations that have not yet been detected.
6. It may be that the piggyBac transposon has itself entered the DNA of babies exposed in utero to the modified Zika virus. Indeed, this may form part of the mechanism by which embryonic development is disrupted.
In the latter case, one implication is that the gene's activity could be blocked by giving pregnant women tetracycline. The chances of success are probably low, but it has to be worth trying.
No further releases of GM insects!
While I am certainly not claiming that this is what actually took place, it is at least a credible hypothesis, and moreover a highly testable one. Nothing would be easier for genetic engineers than to test amniotic fluids, babies’ blood, wild Aedes mosquitos and the Zika virus itself for the presence of the piggyBac transposon, using well established and highly sensitive PCR (polymerase chain reaction) techniques.
If this proves to be the case, those urging caution on the release of GMOs generally, and transgenic insects bearing promiscuous transposons in particular, will have been proved right on all counts.
But most important, such experiments, and any deployment of similar GM insects, must be immediately halted until the possibilities outlined above can be safely ruled out. There are plans, for example, to release similarly modified Anopheles mosquitos as an anti-malarial measure.
There are also calls for even more of the Oxitec Aedes aegypti mosquitos to be released in order to halt the transmission of the Zika virus. If that were to take place, it could give rise to numerous new mutations of the virus, with the potential to cause damage to the human genome that we can, at this stage, only guess at.
The Zika virus is a flavivirus closely related to notorious pathogens including dengue, yellow fever, Japanese encephalitis, and West Nile virus. The virus is transmitted by mosquitoes in the genus Aedes, especially A. aegypti, which is a known vector for many of Zika’s relatives. Symptoms of the infection appear three to twelve days post bite. Most people are asymptomatic, which means they show no signs of infection. The vast majority of those who do show signs of infection report fever, rash, joint pain, and conjunctivitis (red eyes), according to the U.S. Centers for Disease Control. After a week or less, the symptoms tend to go away on their own. Serious complications have occurred, but they have been extremely rare.
The Zika virus isn’t new. It was first isolated in 1947 from a Rhesus monkey in the Zika Forest in Uganda, hence the pathogen’s name. The first human cases were confirmed in Uganda and Tanzania in 1952, and by 1968, the virus had spread to Nigeria. But since then, the virus has found its way out of Africa. The first major outbreak occurred on the island of Yap in Micronesia for 13 weeks in 2007, during which 185 Zika cases were suspected (49 of those were confirmed, with another 59 considered probable). Then, in October 2013, an outbreak began in French Polynesia; around 10,000 cases were reported, less than 100 of which presented with severe neurological or autoimmune complications. One confirmed case of autochthonous transmission occurred in Chile in 2014, which means a person was infected while they were in Chile rather than somewhere else. Cases were also reported that year from several Pacific Islands. The virus was detected in Chile until June 2014, but then it seemed to disappear.
Fast forward to May 2015, when the Pan American Health Organization (PAHO) issued an alert regarding the first confirmed Zika virus infection in Brazil. Since then, several thousand suspected cases of the disease and a previously unknown complication—a kind of birth defect known as microcephaly where the baby’s brain is abnormally small—have been reported from Brazil. (It’s important to note that while the connection between the virus and microcephaly is strongly suspected, the link has yet to be conclusively demonstrated.)
The recent spread of the virus has been described as “explosive”; Zika has now been detected in 25 countries and territories. The rising concern over both the number of cases and reports of serious complications has led the most affected areas in Brazil to declare a state of emergency, and on Monday, The World Health Organization’s Director-General will convene an International Health Regulations Emergency Committee on Zika virus and the observed increase in neurological disorders and neonatal malformations. At this emergency meeting, the committee will discuss mitigation strategies and decide whether the organization will officially declare the virus a “Public Health Emergency of International Concern.”
GM to the Rescue
The mosquito to blame for the outbreak—Aedes aegypti—doesn’t belong in the Americas. It’s native to Africa, and was only introduced in the new world when Europeans began to explore the globe. In the 20th century, mosquito control programs nearly eradicated the unwelcome menace from the Americas (largely thanks to the use of the controversial pesticide DDT); as late as the mid 1970s, Brazil and 15 other nations were Aedes aegypti-free. But despite the successes, eradication efforts were halted, allowing the mosquito to regain its lost territory.
Effective control measures are expensive and difficult to maintain, so at the tail end of the 20th century and into the 21st, scientists began to explore creative means of controlling mosquito populations, including the use of genetic modification. Oxitec’s mosquitoes are one of the most exciting technologies to have emerged from this period. Here’s how they work, as I described in a post almost exactly a year ago:
While these mosquitoes are genetically modified, they aren’t “cross-bred with the herpes simplex virus and E. coli bacteria” (that would be an interkingdom ménage à trois!)—and no, they cannot be “used to bite people and essentially make them immune to dengue fever and chikungunya” (they aren’t carrying a vaccine!). The mosquitoes that Oxitec have designed are what scientists call “autocidal” or possess a “dominant lethal genetic system,” which is mostly fancy wording for “they die all by themselves”. The males carry inserted DNA which causes the mosquitoes to depend upon a dietary supplement that is easy to provide in the lab, but not available in nature. When the so-called mutants breed with normal females, all of the offspring require the missing dietary supplement because the suicide genes passed on from the males are genetically dominant. Thus, the offspring die before they can become adults. The idea is, if you release enough such males in an area, then the females won’t have a choice but to mate with them. That will mean there will be few to no successful offspring in the next generation, and the population is effectively controlled.
Male mosquitoes don’t bite people, so they cannot serve as transmission vectors for Zika or any other disease. As for fears that GM females will take over: less than 5% of all offspring survive in the laboratory, and as Glen Slade, director of Oxitec’s Brazilian branch notes, those are the best possible conditions for survival. “It is considered unlikely that the survival rate is anywhere near that high in the harsher field conditions since offspring reaching adulthood will have been weakened by the self-limiting gene,” he told me. And contrary to what the conspiracy theorists claim, scientists have shown that tetracycline in the environment doesn’t increase that survival rate.
Brazil, a hotspot for dengue and other such diseases, is one of the countries where Oxitec is testing their mozzies—so far, everywhere that Oxitec’s mosquitoes have been released, the local populations have been suppressed by about 90%.
Wrong Place, Wrong Time
Now that we’ve covered the background on the situation, let’s dig into the conspiracy theory. We’ll start with the main argument laid out as evidence: that the Zika outbreak began in the same location at the same time as the first Oxitec release:
Though it’s often said, it’s worth repeating: correlation doesn’t equal causation. If it did, then Nicolas Cage is to blame for people drowning (Why, Nick? WHY?). But even beyond that, there are bigger problems with this supposed correlation: even by those maps, the site of release is on the fringe of the Zika hotspot, not the center of it. Just look at the two overlaid:
The epicenter of the outbreak and the release clearly don’t line up—the epicenter is on the coast rather than inland where the map points. Furthermore, the first confirmed cases weren’t reported in that area, but in the town of Camaçari, Bahia, which is—unsurprisingly—on the coast and several hundred kilometers from the release site indicated.
But perhaps more importantly, the location on the map isn’t where the mosquitoes were released. That map points to Juazeiro do Norte, Ceará, which is a solid 300 km away from Juazeiro, Bahia—the actual site of the mosquito trial. That location is even more on the edge of the Zika-affected area:
The mistake was made initially by the Redditor who proposed the conspiracy theory and has been propagated through lazy journalistic practices by every proponent since. Here’s a quick tip: if you’re basing your conspiracy theory on location coincidence, it’s probably a good idea to actually get the location right.
By July 2015, shortly after the GM mosquitoes were first released into the wild in Juazeiro, Brazil, Oxitec proudly announced they had “successfully controlled the Aedes aegypti mosquito that spreads dengue fever, chikungunya and zika virus, by reducing the target population by more than 90%.”
A new control effort employing Oxitec mosquitoes did begin in April 2015, but not in Juazeiro, or in any of the northeastern states of Brazil where the disease outbreak is occurring. As another press release from Oxitec states, the 2015 releases of their GM mosquitoes were in Piracicaba, São Paulo, Brazil:
Following approval by Brazil’s National Biosafety Committee (CTNBio) for releases throughout the country, Piracicaba’s CECAP/Eldorado district became the world’s first municipality to partner directly with Oxitec and in April 2015 started releasing its self-limiting mosquitoes whose offspring do not survive. By the end of the calendar year, results had already indicated a reduction in wild mosquito larvae by 82%. Oxitec’s efficacy trials across Brazil, Panama and the Cayman Islands all resulted in a greater than 90% suppression of the wild Ae. aegypti mosquito population–an unprecedented level of control.
Based on the positive results achieved to date, the ‘Friendly Aedes aegypti Project’ in CECAP/Eldorado district covering 5,000 people has been extended for another year. Additionally, Oxitec and Piracicaba have signed a letter of intent to expand the project to an area of 35,000-60,000 residents. This geographic region includes the city’s center and was chosen due to the large flow of people commuting between it and surrounding neighborhoods which may contribute to the spread of infestations and infections.
Piracicaba, for the record, is more than 1300 miles away from the Zika epicenter:
So not only did the conspiracy theorists get the location of the first Brazil release wrong, they either got the date wrong, too, or got the location of the 2015 releases really, really off. Either way, the central argument that the release of GM mosquitoes by Oxitec coincides with the first cases of Zika virus simply doesn’t hold up.
Scientists Speak Out
As this ludicrous conspiracy theory has spread, so, too, has the scientific opposition to it. “Frankly, I’m a little sick of this kind of anti-science platform,” said vector ecologist Tanjim Hossain from the University of Miami, when I asked him what he thought. “This kind of fear mongering is not only irresponsible, but may very well be downright harmful to vulnerable populations from a global health perspective.”
Despite the specious allusions made by proponents of the conspiracy, this is still not Jurassic Park, says Hossain.
“We have a problem where ZIKV is spreading rapidly and is widely suspected of causing serious health issues,” he continued. “How do we solve this problem? An Integrated Vector Management (IVM) approach is key. We need to use all available tools, old and new, to combat the problem. GM mosquitoes are a fairly new tool in our arsenal. The way I see it, they have the potential to quickly reduce a local population of vector mosquitoes to near zero, and thereby can also reduce the risk of disease transmission. This kind of strategy could be particularly useful in a disease outbreak ‘hotspot’ because you could hypothetically stop the disease in its tracks so to speak.”
Other scientists have shared similar sentiments. Alex Perkins, a biological science professor at Notre Dame, told Business Insider that rather than causing the outbreak, GM mosquitoes might be our best chance to fight it. “It could very well be the case that genetically modified mosquitos could end up being one of the most important tools that we have to combat Zika,” Perkins said. “If anything, we should potentially be looking into using these more.”
Brazilian authorities couldn’t be happier with the results so far, and are eager to continue to fight these deadly mosquitoes by any means they can. “The initial project in CECAP/Eldorado district clearly showed that the ‘friendly Aedes aegypti solution’ made a big difference for the inhabitants of the area, helping to protect them from the mosquito that transmits dengue, Zika and chikungunya,” said Pedro Mello, secretary of health in Piracicaba. He notes that during the 2014/2015 dengue season, before the trial there began, there were 133 cases of dengue. “In 2015/2016, after the beginning of the Friendly Aedes aegypti Project, we had only one case.”
It’s long past time to stop villainizing Oxitec’s mosquitoes for crimes they didn’t commit. Claire Bernish, The Daily MFail, Mirror and everyone else who has spread these baseless accusations: I’m talking to you. The original post was in the Conspiracy subreddit—what more of a red flag for “this is wildly inaccurate bullsh*t” do you need? (After all, if this is a legit source, where are your reports on the new hidden messages in the $100 bill? or why the Illuminati wants people to believe in aliens?). It’s well known that large-scale conspiracy theories are mathematically challenged. Don’t just post whatever crap is spewed on the internet because you know it’ll get you a few clicks. It’s dishonest, dangerous, and, frankly, deplorable to treat nonsense as possible truth just to prey upon your audience’s very real fears of an emerging disease. You, with your complete lack of integrity, are maggots feeding on the decay of modern journalism, and I mean that with no disrespect to maggots.
When your reasons are worse than useless, sometimes the most rational choice is a random stab in the dark
by Michael Schulson
Illustration by Tim McDonagh
Michael Schulson is an American freelance writer. His work has appeared in Religion Dispatches, The Daily Beast, and Religion and Politics, among others. He lives in Durham, North Carolina.
We could start with birds, or we could start with Greeks. Each option has advantages.
Let’s flip a coin. Heads and it’s the Greeks, tails and it’s the birds.
In the 1970s, a young American anthropologist named Michael Dove set out for Indonesia, intending to solve an ethnographic mystery. Then a graduate student at Stanford, Dove had been reading about the Kantu’, a group of subsistence farmers who live in the tropical forests of Borneo. The Kantu’ practise the kind of shifting agriculture known to anthropologists as swidden farming, and to everyone else as slash-and-burn. Swidden farmers usually grow crops in nutrient-poor soil. They use fire to clear their fields, which they abandon at the end of each growing season.
Like other swidden farmers, the Kantu’ would establish new farming sites every year in which to grow rice and other crops. Unlike most other swidden farmers, the Kantu’ chose where to place these fields through a ritualised form of birdwatching. They believe that certain species of bird – the Scarlet-rumped Trogon, the Rufous Piculet, and five others – are the sons-in-law of God. The appearances of these birds guide the affairs of human beings. So, in order to select a site for cultivation, a Kantu’ farmer would walk through the forest until he spotted the right combination of omen birds. And there he would clear a field and plant his crops.
Dove figured that the birds must be serving as some kind of ecological indicator. Perhaps they gravitated toward good soil, or smaller trees, or some other useful characteristic of a swidden site. After all, the Kantu’ had been using bird augury for generations, and they hadn’t starved yet. The birds, Dove assumed, had to be telling the Kantu’ something about the land. But neither he, nor any other anthropologist, had any notion of what that something was.
He followed Kantu’ augurers. He watched omen birds. He measured the size of each household’s harvest. And he became more and more confused. Kantu’ augury is so intricate, so dependent on slight alterations and is-the-bird-to-my-left-or-my-right contingencies that Dove soon found there was no discernible correlation at all between Piculets and Trogons and the success of a Kantu’ crop. The augurers he was shadowing, Dove told me, ‘looked more and more like people who were rolling dice’.
Stumped, he switched dissertation topics. But the augury nagged him. He kept thinking about it for ‘a decade or two’. And then one day he realised that he had been looking at the question the wrong way all the time. Dove had been asking whether Kantu’ augury imparted useful ecological information, as opposed to being random. But what if augury was useful precisely because it was random?
For the Kantu’, the best option was one familiar to any investor when faced with an unpredictable market: they needed to diversify
Tropical swidden agriculture is a fundamentally unpredictable enterprise. The success of a Kantu’ swidden depends on rainfall, pest outbreaks and river levels, among other factors. A patch of forest that might yield a good harvest in a rainy year could be unproductive in a drier year, or in a year when a certain pest spreads. And things such as pest outbreaks or the weather are pretty much impossible to predict weeks or months in the future, both for humans and for birds.
In the face of such uncertainty, though, the human tendency is to seek some kind of order – to come up with a systematic method for choosing a field site, and, in particular, to make decisions based on the conditions of the previous year.
Neither option is useful. Last year’s conditions have pretty much no bearing on events in the years ahead (a rainy July 2013 does not have any bearing on the wetness of July 2014). And systematic methods can be prey to all sorts of biases. If, for example, a Kantu’ farmer predicted that the water levels would be favourable one year, and so put all his fields next to the river, a single flood could wipe out his entire crop. For the Kantu’, the best option was one familiar to any investor when faced with an unpredictable market: they needed to diversify. And bird augury was an especially effective way to bring about that kind of diversification.
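The diversification argument can be made concrete with a toy simulation. The yields and flood probability below are invented, purely for illustration: a farmer who concentrates every field by the river earns more on average, but swings between feast and famine; one who scatters fields at random (a stand-in for the augury's randomising effect) accepts a lower average for a far steadier return.

```python
import random
from statistics import mean, pstdev

def yearly_totals(strategy, years=1000, fields=8, seed=42):
    """Simulate total harvest per year. Each year floods with
    probability 0.3. A riverside field yields 2 in a dry year, 0 in a
    flood year; an upland field always yields 1. 'concentrate' puts
    every field by the river; 'diversify' places each field at random."""
    rng = random.Random(seed)
    totals = []
    for _ in range(years):
        flood = rng.random() < 0.3            # one shared flood event per year
        total = 0
        for _ in range(fields):
            by_river = strategy == "concentrate" or rng.random() < 0.5
            total += (0 if flood else 2) if by_river else 1
        totals.append(total)
    return totals

conc = yearly_totals("concentrate")
div = yearly_totals("diversify")
# Concentration: higher mean, but every flood year yields exactly zero.
# Diversification: lower mean, much lower variance, no total wipeouts
# in most years.
print(mean(conc), pstdev(conc), min(conc))
print(mean(div), pstdev(div), min(div))
```

This is the investor's logic the essay invokes: when the environment is unpredictable and losses are correlated within a year, a randomising rule that spreads fields across site types trades some expected yield for protection against ruin.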
It makes sense that it should have taken Dove some 15 years to realise that randomness could be an asset. As moderns, we take it for granted that the best decisions stem from a process of empirical analysis and informed choice, with a clear goal in mind. That kind of decision-making, at least in theory, undergirds the ways that we choose political leaders, play the stock market, and select candidates for schools and jobs. It also shapes the way in which we critique the rituals and superstitions of others. But, as the Kantu’ illustrate, there are plenty of situations when random chance really is your best option. And those situations might be far more prevalent in our modern lives than we generally admit.
Over the millennia, cultures have expended a great deal of time, energy and ingenuity in order to introduce some element of chance into decision-making. Naskapi hunters in the Canadian province of Labrador would roast the scapula of a caribou in order to determine the direction of their next hunt, reading the cracks that formed on the surface of the bone like a map. In China, people have long sought guidance in the passages of the I Ching, using the intricate manipulation of 49 yarrow stalks to determine which section of the book they ought to consult. The Azande of central Africa, when faced with a difficult choice, would force a powdery poison down a chicken’s throat, finding the answer to their question in whether or not the chicken survived – a hard-to-predict, if not quite random, outcome. (‘I found this as satisfactory a way of running my home and affairs as any other I know of,’ wrote the British anthropologist E E Evans-Pritchard, who adopted some local customs during his time with the Azande in the 1920s).
The list goes on. It could – it does – fill books. As any blackjack dealer or tarot reader might tell you, we have a love for the flip of the card. Why shouldn’t we? Chance has some special properties. It is a swift, consistent, and (unless your chickens all die) relatively cheap decider. Devoid of any guiding mind, it is subject to neither blame nor regret. Inhuman, it can act as a blank surface on which to descry the churning of fate or the work of divine hands. Chance distributes resources and judges disputes with perfect equanimity.
Above all, chance makes its selection without any recourse to reasons. This quality is perhaps its greatest advantage, though of course it comes at a price. Peter Stone, a political theorist at Trinity College, Dublin, and the author of The Luck of the Draw: The Role of Lotteries in Decision Making (2011), has made a career of studying the conditions under which such reasonlessness can be, well, reasonable.
‘What lotteries are very good for is for keeping bad reasons out of decisions,’ Stone told me. ‘Lotteries guarantee that when you are choosing at random, there will be no reasons at all for one option rather than another being selected.’ He calls this the sanitising effect of lotteries – they eliminate all reasons from a decision, scrubbing away any kind of unwanted influence. As Stone acknowledges, randomness eliminates good reasons from the running as well as bad ones. He doesn’t advocate using chance indiscriminately. ‘But, sometimes,’ he argues, ‘the danger of bad reasons is bigger than the loss of the possibility of good reasons.’
For an example, let’s return to the Kantu’. Beyond certain basic characteristics, when it comes to selecting a swidden site in the forest, there are no good reasons by which to choose one site over another. You just don’t know what the weather and pests will look like. As a result, any reasons that a Kantu’ farmer uses will either be neutral, or actively harmful. The sanitising effect of augury cleans out those bad reasons. The Kantu’ also establish fields in swampland, where the characteristics of a good site are much more predictable – where, in other words, good reasons are abundant. In the swamps, as it happens, the Kantu’ don’t use augury to make their pick.
Thinking about choice and chance in this way has applications outside rural Borneo, too. In particular, it can call into question some of the basic mechanisms of our rationalist-meritocratic-democratic system – which is why, as you might imagine, a political theorist such as Stone is so interested in randomness in the first place.
Around the same time that Michael Dove was pondering his riddle in a Kantu’ longhouse, activists and political scientists were beginning to revive the idea of filling certain political positions by lottery, a process known as sortition.
The practice has a long history. Most public officials in democratic Athens were chosen by lottery, including the nine archons who were chosen by sortition from a significant segment of the population. The nobles of Renaissance Venice used to select their head of state, the doge, through a complicated, partially randomised process. Jean-Jacques Rousseau, in The Social Contract (1762), argued that lotteries would be the norm in an ideal democracy, giving every citizen an equal chance of participating in every part of the government (Rousseau added that such ideal democracies did not exist). Sortition survives today in the process of jury selection, and it crops up from time to time in unexpected places. Ontario and British Columbia, for example, have used randomly selected panels of Canadian citizens to propose election regulations.
Advocates of sortition suggest applying that principle more broadly, to congresses and parliaments, in order to create a legislature that closely reflects the actual composition of a state’s citizenship. They are not (just to be clear) advocating that legislators randomly choose policies. Few, moreover, would suggest that non-representative positions such as the US presidency be appointed by a lottery of all citizens. The idea is not to banish reason from politics altogether. But plenty of bad reasons can influence the election process – through bribery, intimidation, and fraud; through vote-purchasing; through discrimination and prejudices of all kinds. The question is whether these bad reasons outweigh the benefits of a system in which voters pick their favourite candidates.
By way of illustration: a handful of powerful families and influential cliques dominated Renaissance Venice. The use of sortition in selection of the doge, writes the historian Robert Finlay in Politics in Renaissance Venice (1980), was a means of ‘limiting the ability of any group to impose its will without an overwhelming majority or substantial good luck’. Americans who worry about unbridled campaign-spending by a wealthy few might relate to this idea.
Or consider this. In theory, liberal democracies want legislatures that accurately reflect their citizenship. And, presumably, the qualities of a good legislator (intelligence, integrity, experience) aren’t limited to wealthy, straight, white men. The relatively homogeneous composition of our legislatures suggests that less-than-ideal reasons are playing a substantial role in the electoral process. Typically, we just look at this process and wonder how to eliminate that bias. Advocates of sortition see conditions ripe for randomness.
It’s not only politics where the threat of bad reasons, or a lack of any good reasons, makes the luck of the draw seem attractive. Take college admissions. When Columbia University accepts just 2,291 of its roughly 33,000 applicants, as it did this year, it’s hard to imagine that the process was based strictly on good reasons. ‘College admissions are already random; let’s just admit it and begin developing a more effective system,’ wrote the education policy analyst Chad Aldeman on the US daily news site Inside Higher Ed back in 2009. He went on to describe the notion of collegiate meritocracy as ‘a pretension’ and remarked: ‘A lottery might be the answer.’
The Swarthmore College professor Barry Schwartz, writing in The Atlantic in 2012, came to a similar conclusion. He proposed that, once schools have narrowed down their applicant pools to a well-qualified subset, they could just draw names. Some schools in the Netherlands already use a similar system. ‘A lottery like this won’t correct the injustice that is inherent in a pyramidal system in which not everyone can rise to the top,’ wrote Schwartz. ‘But it will reveal the injustice by highlighting the role of contingency and luck.’ Once certain standards are met, no really good reasons remain to discriminate between applicant No 2,291 (who gets into Columbia) and applicant No 2,292 (who does not). And once all good reasons are eliminated, the most efficient, most fair and most honest option might be chance.
But perhaps not the most popular one. When randomness is added to a supposedly meritocratic system, it can inspire quite a backlash. In 2004, the International Skating Union (ISU) introduced a new judging system for figure-skating competitions. Under this system – which has since been tweaked – 12 judges evaluated each skater, but only nine of those votes, selected at random, actually counted towards the final tally (the ancient Athenians judged drama competitions in a similar way). Figure skating is a notoriously corrupt sport, with judges sometimes forming blocs that support each other’s favoured skaters. In theory, a randomised process makes it harder to form such alliances. A tit-for-tat arrangement, after all, doesn’t work as well if it’s unclear whether your partners will be able to reciprocate.
But the new ISU rules did more than simply remove a temptation to collude. As statisticians pointed out, random selection will change the outcome of some events. Backing their claims with competition data, they showed how other sets of randomly selected votes would have yielded different results, actually changing the line-up of the medal podium in at least one major competition. Even once all the skaters had performed, ultimate victory depended on the luck of the draw.
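The statisticians’ point can be reproduced with a toy enumeration. The scores below are invented for illustration, not actual competition data: for two closely matched skaters scored by 12 judges, exhaustively checking every possible nine-judge panel shows that different random draws crown different winners.

```python
from itertools import combinations

# Hypothetical scores from 12 judges for two closely matched skaters,
# in tenths of a point (integers avoid floating-point tie ambiguity).
scores_a = [58, 59, 57, 59, 58, 56, 59, 58, 57, 59, 55, 58]
scores_b = [59, 57, 58, 58, 59, 58, 57, 59, 58, 56, 59, 57]

a_wins = b_wins = ties = 0
for panel in combinations(range(12), 9):  # every possible 9-judge draw
    total_a = sum(scores_a[j] for j in panel)
    total_b = sum(scores_b[j] for j in panel)
    if total_a > total_b:
        a_wins += 1
    elif total_b > total_a:
        b_wins += 1
    else:
        ties += 1

print(f"of {a_wins + b_wins + ties} possible panels: "
      f"skater A wins {a_wins}, skater B wins {b_wins}, ties {ties}")
```

With all 12 votes counted, skater B edges out skater A in this example; yet among the 220 possible nine-judge panels, some hand the victory to A instead. Which draw actually happens decides the podium – exactly the dependence on luck that the competition data revealed.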
There are two ways to look at this kind of situation. The first way – the path of outrage – condemns a system that seems fundamentally unfair. A second approach would be to recognise that the judging process is already subjective and always will be. Had a different panel of 12 judges been chosen for the competition, the result would have varied, too. The ISU system simply makes that subjectivity more apparent, even as it reduces the likelihood that certain obviously bad influences, such as corruption, will affect the final result.
Still, most commentators opted for righteous outrage. That isn’t surprising. The ISU system conflicts with two common modern assumptions: that it is always desirable (and usually possible) to eliminate uncertainty and chance from a situation; and that achievement is perfectly reflective of effort and talent. Sortition, college admission lotteries, and randomised judging run against the grain of both of these premises. They embrace uncertainty as a useful part of their processes, and they fail to guarantee that the better citizen or student or skater, no matter how much she drives herself to success, will be declared the winner.
Let me suggest that, in the fraught and unpredictable world in which we live, both of those ideals – total certainty and perfect reward – are delusional. That’s not to say that we shouldn’t try to increase knowledge and reward success. It’s just that, until we reach that utopia, we might want to come to terms with the reality of our situation, which is that our lives are dominated by uncertainty, biases, subjective judgments and the vagaries of chance.
In the novel The Man in the High Castle (1962), the American sci-fi maestro Philip K Dick imagines an alternative history in which Germany and Japan win the Second World War. Most of the novel’s action takes place in Japanese-occupied San Francisco, where characters, both Japanese and American, regularly use the I Ching to guide difficult decisions in their business lives and personal affairs.
As an American with no family history of divination, I’ll admit to being enchanted by Dick’s vision of a sci-fi world where people yield some of their decision-making power to the movements of dried yarrow stems. There’s something liberating, maybe, in being able to acknowledge that the reasons we have are often inadequate, or downright poor. Without needing to impose any supernatural system, it’s not hard to picture a society in which chance plays a more explicit, more accepted role in the ways in which we distribute goods, determine admissions to colleges, give out jobs to equally matched applicants, pick our elected leaders, and make personal decisions in our own lives.
Such a society is not a rationalist’s nightmare. Instead, in an uncertain world where bad reasons do determine so much of what we decide, it’s a way to become more aware of what factors shape the choices we make. As Peter Stone told me, paraphrasing Immanuel Kant, ‘the first task of reason is to recognise its own limitations’. Nor is such a society more riddled with chanciness than our own. Something, somewhere, is always playing dice. The roles of coloniser and colonised, wealthy and poor, powerful and weak, victor and vanquished, are rarely as predestined as we imagine them to be.
Dick seems to have understood this. Certainly, he embraced chance in a way that few other novelists ever have. Years after he wrote The Man in the High Castle, Dick explained to an interviewer that, aside from planning and the novelist’s foresight, he had settled key details of the book’s plot by flipping coins and consulting the I Ching.
Peter Doyle claims there was a “fundamental illegitimacy” in Christine Lagarde’s appointment
A top economist at the International Monetary Fund has poured scorn on its “tainted” leadership and said he is “ashamed” to have worked there.
Peter Doyle said in a letter to the IMF executive board that he wanted to explain his resignation after 20 years.
He writes of “incompetence”, “failings” and “disastrous” appointments for the IMF’s managing director, stretching back 10 years.
No one from the Washington-based IMF was immediately available for comment.
Mr Doyle, former adviser to the IMF’s European Department, which is running the bailout programmes for Greece, Portugal and Ireland, said the Fund’s delay in warning about the urgency of the global financial crisis was a failure of the “first order”.
In the letter, dated 18 June and obtained by the US broadcaster CNN, Mr Doyle said the failings of IMF surveillance of the financial crisis “are, if anything, becoming more deeply entrenched”.
He writes: “This fact is most clear in regard to appointments for managing director which, over the past decade, have all-too-evidently been disastrous.
“Even the current incumbent [Christine Lagarde] is tainted, as neither her gender, integrity, or elan can make up for the fundamental illegitimacy of the selection process.”
Mr Doyle is thought to be echoing here widespread criticism that the head of the IMF is always a European, while the World Bank chief is always a US appointee.
Mr Doyle concludes his letter: “There are good salty people here. But this one is moving on. You might want to take care not to lose the others.”
The IMF could not be reached immediately by the BBC. However, CNN reported that a Fund spokesman told it that there was nothing to substantiate Mr Doyle’s claims and that the IMF had held its own investigations into surveillance of the financial crisis.
Andrew Walker, BBC World Service economics correspondent
Peter Doyle’s letter is short but the criticism excoriating. Perhaps the bigger of the two main charges is that the IMF failed to warn enough about the problems that led to the global financial crisis.
The IMF has had investigations which have, up to a point, made similar criticisms, but not in such inflammatory terms. The IMF did issue some warnings, but the allegation that they were not sustained or timely enough and were actively suppressed raises some very big questions about the IMF’s role.
Then there is the description of the managing director as tainted. It’s not personal. It’s a familiar attack on a process which always selects a European. It’s still striking, though, to hear it from someone so recently on the inside.