Once viewed with suspicion by the scientific community, methods of artificial intervention in the environment aimed at curbing the devastating effects of global warming are now being considered as last-resort options (since initiatives to cut gas emissions depend directly on collective action and take decades to produce any benefit). According to some researchers in the field, who have been attracting investment and a great deal of attention, we may not have that much time.
Part of a field also known as solar geoengineering, most of these methods rely on the controlled release of particles into the atmosphere, which block some of the energy our planet receives and redirect it back into space, producing a cooling effect similar to that generated by volcanic eruptions.
Although such measures would do nothing about pollution, for example, scientists believe that, in the face of increasingly aggressive storms, fire tornadoes, floods, and other natural disasters, they would be worthwhile while more effective solutions are developed.
Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia Law School and editor of a book on the technology and its legal implications, summed up the situation in an interview with The New York Times: "We are facing an existential threat. That is why we need to examine all the options."
"I like to compare geoengineering to chemotherapy for the planet: if everything else is failing, all that is left is to try it," he argued.
Natural disasters brought on by global warming make such interventions urgent, according to researchers. Source: Unsplash
Double standards
Among the most prominent efforts is one undertaken by a nongovernmental organization called SilverLining, which has awarded US$3 million to several universities and other institutions to pursue answers to practical questions, such as finding the ideal altitude at which to apply aerosols and how to inject the right amount, while checking the effects on the world's food production chain.
Chris Sacca, cofounder of Lowercarbon Capital, an investment group that is one of SilverLining's funders, declared in an alarmist tone: "Decarbonization is necessary, but it will take 20 years or more to happen. If we don't explore climate interventions like solar reflection right now, we will condemn countless lives, species, and ecosystems to heat."
Another recipient of substantial funding is the National Oceanic and Atmospheric Administration, which received US$4 million from the US Congress precisely to develop technologies of this kind, as well as to monitor the covert use of such solutions by other countries.
Douglas MacMartin, a researcher in mechanical and aerospace engineering at Cornell University, said that "humanity's power to cool things down is certain, but what is not clear is what comes next."
If, on the one hand, the planet can be cooled artificially, on the other, no one knows what will follow. Source: Unsplash
Is there a way?
To clarify the possible consequences of interventions of this magnitude, MacMartin will develop models of the specific climate effects of injecting aerosols into the atmosphere above different parts of the globe and at different altitudes. "Depending on where you put [the substance], you will have different effects on the monsoons in Asia and on Arctic sea ice," he noted.
The National Center for Atmospheric Research in Boulder, Colorado, also funded by SilverLining, believes it has the ideal system for the job, one considered the most sophisticated in the world. With it, hundreds of simulations will be run as specialists search for what they call the sweet spot: the amount of artificial cooling that can reduce extreme climate events without causing broader changes in regional precipitation patterns or similar impacts.
"Is there a way, at least in our model world, to see whether we can achieve one without triggering too much of the other?" asked Jean-François Lamarque, director of the institution's Climate and Global Dynamics Laboratory. That question has no answer yet, but sustainable approaches are also being studied by Australian researchers, who would spray salt water to make clouds more reflective, and early tests have shown promising results.
That way, perhaps the coral reef losses we have been witnessing will finally have an end date. As for the rest, well, only time will tell.
Science has taken center stage during the COVID-19 pandemic. Early on, as SARS-CoV-2 started spreading around the globe, many researchers pivoted to focus on studying the virus. At the same time, some scientists and science advisors—experts responsible for providing scientific information to policymakers—gained celebrity status as they calmly and cautiously updated the public on the rapidly evolving situation and lent their expertise to help governments make critical decisions, such as those relating to lockdowns and other transmission-slowing measures.
“Academia, in the case of COVID, has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible,” says Chris Tyler, the director of research and policy in University College London’s Department of Science, Technology, Engineering and Public Policy (STEaPP).
But the pace at which COVID-related science has been conducted and disseminated during the pandemic has also revealed the challenges associated with translating fast-accumulating evidence for an audience not well versed in the process of science. As research findings are speedily posted to preprint servers, preliminary results have made headlines in major news outlets, sometimes without the appropriate dose of scrutiny.
Some politicians, such as Brazil’s President Jair Bolsonaro, have been quick to jump on premature findings, publicly touting the benefits of treatments such as hydroxychloroquine with minimal or no supporting evidence. Others have pointed to the flip-flopping of the current state of knowledge as a sign of scientists’ untrustworthiness or incompetence—as was seen, for example, in the backlash against Anthony Fauci, one of the US government’s top science advisors.
Some comments from world leaders have been even more concerning. “For me, the most shocking thing I saw,” Tyler says, “was Donald Trump suggesting the injection of disinfectant as a way of treating COVID—that was an eye-popping, mind-boggling moment.”
Still, Tyler notes that there are many countries in which the relationship between the scientific community and policymakers during the course of the pandemic has been “pretty impressive.” As an example, he points to Germany, where the government has both enlisted and heeded the advice of scientists across a range of disciplines, including epidemiology, virology, economics, public health, and the humanities.
Researchers will likely be assessing the response to the pandemic for years to come. In the meantime, for scientists interested in getting involved in policymaking, there are lessons to be learned, as well as some preliminary insights from the pandemic that may help to improve interactions between scientists and policymakers and thereby pave the way to better evidence-based policy.
Cultural divisions between scientists and policymakers
Even in the absence of a public-health emergency, there are several obstacles to the smooth implementation of scientific advice into policy. One is simply that scientists and policymakers are generally beholden to different incentive systems. “Classically, a scientist wants to understand something for the sake of understanding, because they have a passion toward that topic—so discovery is driven by the value of discovery,” says Kai Ruggeri, a professor of health policy and management at Columbia University. “Whereas the policymaker has a much more utilitarian approach. . . . They have to come up with interventions that produce the best outcomes for the most people.”
Scientists and policymakers are operating on considerably different timescales, too. “Normally, research programs take months and years, whereas policy decisions take weeks and months, sometimes days,” Tyler says. “This discrepancy makes it much more difficult to get scientifically generated knowledge into the policymaking process.” Tyler adds that the two groups deal with uncertainty in very different ways: academics are comfortable with it, as measuring uncertainty is part of the scientific process, whereas policymakers tend to view it as something that can cloud what a “right” answer might be.
This cultural mismatch has been particularly pronounced during the COVID-19 pandemic. Even as scientists work at breakneck speeds, many crucial questions about COVID-19—such as how long immunity to the virus lasts, and how much of a role children play in the spread of infection—remain unresolved, and policy decisions have had to be addressed with limited evidence, with advice changing as new research emerges.
“We have seen the messy side of science, [that] not all studies are equally well-done and that they build over time to contribute to the weight of knowledge,” says Karen Akerlof, a professor of environmental science and policy at George Mason University. “The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions.”
Academia has done an amazing job of trying to get as much information relevant to COVID gathered and distributed into the policymaking process as possible. —Chris Tyler, University College London
Widespread mask use, for example, was initially discouraged by many politicians and public health officials due to concerns about a shortage of supplies for healthcare workers and limited data on whether mask use by the general public would help reduce the spread of the virus. At the time, there were few mask-wearing laws outside of East Asia, where such practices were commonplace long before the COVID-19 pandemic began.
Gradually, however, as studies began to provide evidence to support the use of face coverings as a means of stemming transmission, scientists and public health officials started to recommend their use. This shift led local, state, and federal officials around the world to implement mandatory mask-wearing rules in certain public spaces. Some politicians, however, used this about-face in advice as a reason to criticize health experts.
“We’re dealing with evidence that is changing very rapidly,” says Meghan Azad, a professor of pediatrics at the University of Manitoba. “I think there’s a risk of people perceiving that rapid evolution as science [being] a bad process, which is worrisome.” On the other hand, the spotlight the pandemic has put on scientists provides opportunities to educate the general public and policymakers about the scientific process, Azad adds. It’s important to help them understand that “it’s good that things are changing, because it means we’re paying attention to the new evidence as it comes out.”
Bringing science and policy closer together
Despite these challenges, science and policy experts say that there are both short- and long-term ways to improve the relationship between the two communities and to help policymakers arrive at decisions that are more evidence-based.
Better tools, for one, could help close the gap. Earlier this year, Ruggeri brought together a group of people from a range of disciplines, including medicine, engineering, economics, and policy, to develop the Theoretical, Empirical, Applicable, Replicable, Impact (THEARI) rating system, a five-tiered framework for evaluating the robustness of scientific evidence in the context of policy decisions. The ratings range from “theoretical” (the lowest level, where a scientifically viable idea has been proposed but not tested) to “impact” (the highest level, in which a concept has been successfully tested, replicated, applied, and validated in the real world).
The team developed THEARI partly to establish a “common language” across scientific disciplines, which Ruggeri says would be particularly useful to policymakers evaluating evidence from a field they may know little about. Ruggeri hopes to see the THEARI framework—or something like it—adopted by policymakers and policy advisors, and even by journals and preprint servers. “I don’t necessarily think [THEARI] will be used right away,” he says. “It’d be great if it was, but we . . . [developed] it as kind of a starting point.”
Other approaches to improve the communication between scientists and policymakers may require more resources and time. According to Akerlof, one method could include providing better incentives for both parties to engage with each other—by offering increased funding for academics who take part in this kind of activity, for instance—and boosting opportunities for such interactions to happen.
Akerlof points to the American Association for the Advancement of Science’s Science & Technology Policy Fellowships, which place scientists and engineers in various branches of the US government for a year, as an example of a way in which important ties between the two communities could be forged. “Many of those scientists either stay in government or continue to work in science policy in other organizations,” Akerlof says. “By understanding the language and culture of both the scientific and policy communities, they are able to bridge between them.”
In Canada, such a program was established in 2018, when the Canadian Science Policy Center and Mona Nemer, Canada’s Chief Science Advisor, held the country’s first “Science Meets Parliament” event. The 28 scientists in attendance, including Azad, spent two days learning about effective communication and the policymaking process, and interacting with senators and members of parliament. “It was eye opening for me because I didn’t know how parliamentarians really live and work,” Azad says. “We hope it’ll grow and involve more scientists and continue on an annual basis . . . and also happen at the provincial level.”
The short timeframes needed for COVID-19 decisions have run straight into the much longer timeframes needed for robust scientific conclusions. —Karen Akerlof, George Mason University
There may also be insights from scientist-policymaker exchanges in other domains that experts can apply to the current pandemic. Maria Carmen Lemos, a social scientist focused on climate policy at the University of Michigan, says that one way to make those interactions more productive is by closing something she calls the “usability gap.”
“The usability gap highlights the fact that one of the reasons that research fails to connect is because [scientists] only pay attention to the [science],” Lemos explains. “We are putting everything out there in papers, in policy briefs, in reports, but rarely do we actually systematically and intentionally try to understand who is on the other side” receiving this information, and what they will do with it.
The way to deal with this usability gap, according to Lemos, is for more scientists to consult the people who actually make, influence, and implement policy changes early on in the scientific process. Lemos and her team, for example, have engaged in this way with city officials, farmers, forest managers, tribal leaders, and others whose decision making would directly benefit from their work. “We help with organization and funding, and we also work with them very closely to produce climate information that is tailored for them, for the problems that they are trying to solve,” she adds.
Azad applied this kind of approach in a study that involves assessing the effects of the pandemic on a cohort of children that her team has been following from infancy, starting in 2010. When she and her colleagues were putting together the proposal for the COVID-19 project this year, they reached out to public health decision makers across the Canadian provinces to find out what information would be most useful. “We have made sure to embed those decision makers in the project from the very beginning to ensure we’re asking the right questions, getting the most useful information, and getting it back to them in a very quick turnaround manner,” Azad says.
There will also likely be lessons to take away from the pandemic in the years to come, notes Noam Obermeister, a PhD student studying science policy at the University of Cambridge. These include insights from scientific advisors about how providing guidance to policymakers during COVID-19 compared to pre-pandemic times, and how scientists’ prominent role during the pandemic has affected how they are viewed by the public; efforts to collect this sort of information are already underway.
“I don’t think scientists anticipated that much power and visibility, or that [they] would be in [public] saying science is complicated and uncertain,” Obermeister says. “I think what that does to the authority of science in the public eye is still to be determined.”
Talking Science to Policymakers
For academics who have never engaged with policymakers, the thought of making contact may be daunting. Researchers with experience of these interactions share their tips for success.
1. Do your homework. Policymakers usually have many different people vying for their time and attention. When you get a meeting, make sure you make the most of it. "Find out which issues related to your research are a priority for the policymaker and which decisions are on the horizon," says Karen Akerlof, a professor of environmental science and policy at George Mason University.
2. Get to the point, but don't oversimplify. "I find policymakers tend to know a lot about the topics they work on, and when they don't, they know what to ask about," says Kai Ruggeri, a professor of health policy and management at Columbia University. "Finding a good balance in the communication goes a long way."
3. Keep in mind that policymakers' expertise differs from that of scientists. "Park your ego at the door and treat policymakers and their staff with respect," Akerlof says. "Recognize that the skills, knowledge, and culture that translate to success in policy may seem very different than those in academia."
4. Be persistent. "Don't be discouraged if you don't get a response immediately, or if promising communications don't pan out," says Meghan Azad, a professor of pediatrics at the University of Manitoba. "Policymakers are busy and their attention shifts rapidly. Meetings get cancelled. It's not personal. Keep trying."
5. Remember that not all policymakers are politicians, and vice versa. Politicians are usually elected and are affiliated with a political party, and they may not always be directly involved in creating new policies. This is not the case for the vast majority of policymakers—most are career civil servants whose decisions impact the daily living of constituents, Ruggeri explains.
A grant to a New York nonprofit aimed at detecting and preventing future outbreaks of coronaviruses from bats has been canceled by the National Institutes of Health, Politico reports, apparently at the direction of President Donald Trump because the research involved the Wuhan Institute of Virology in China. The virology institute has become a focal point for the idea that SARS-CoV-2 escaped from the laboratory and caused the current COVID-19 pandemic, a scenario experts say is not supported by evidence. Instead, virologists The Scientist has spoken to say the virus most likely jumped from infected animals to humans.
The grant, first awarded in fiscal year 2014 and most recently renewed last year, went to EcoHealth Alliance, which describes itself as “a global environmental health nonprofit organization dedicated to protecting wildlife and public health from the emergence of disease.” The aims of the funded project included characterizing coronaviruses present in bat populations in southern China and conducting surveillance to detect spillover events of such viruses to people. The project has resulted in 20 publications, most recently a March report on zoonotic risk factors in rural southern China.
EcoHealth Alliance’s partners on the project include researchers at the Wuhan Institute of Virology, a BSL-4 facility that has for months been a focus of conspiracy theories that SARS-CoV-2 escaped or was released from a lab. On April 14, The Washington Post published a column highlighting State Department cables about concerns regarding safety at the institute. (Experts tell NPR that, even in light of the cables, accidental escape of the virus from a lab remains a far less likely scenario than a jump from animals.)
Then, in an April 17 White House coronavirus briefing, a reporter, whom Politico identifies as being from Newsmax, falsely stated in a question that “US intelligence is saying this week that the coronavirus likely came from a level 4 lab in Wuhan,” and that the NIH had awarded a $3.7 million grant to the Wuhan lab. “Why would the US give a grant like that to China?” she asked. “We will end that grant very quickly,” Trump said in his answer.
An NIH official then wrote to EcoHealth Alliance to inquire about money sent to “China-based participants in this work,” Politico reports, and the organization’s head, Peter Daszak, responded that a complete response would take time, but that “I can categorically state that no fund from [the grant] have been sent to the Wuhan Institute of Virology, nor has any contract been signed.” Days later, NIH notified EcoHealth Alliance that future funding for the project was canceled, and that it must immediately “stop spending the $369,819 remaining from its 2020 grant”—an unusual move generally reserved for cases of scientific misconduct or financial improprieties, according to Politico.
In a statement about the cancellation, EcoHealth Alliance says the terminated research “aimed to analyze the risk of coronavirus emergence and help in designing vaccines and drugs to protect us from COVID-19 and other coronavirus threats,” and that it addresses “all four strategic research priorities of the NIH/NIAID Strategic Plan for COVID-19 Research, released just this week.” The organization will, it says, “continue our fight against this and other emerging diseases.”
Congressional Committee tweets don’t usually get much attention. But when the House Committee on Science, Space, and Technology sent out a link to a Breitbart story claiming a “plunge” in global temperatures, people took notice. The takedowns flew in, from Slate and Bernie Sanders, from plenty of scientists, and most notably from the Weather Channel, which deemed Breitbart’s use of their meteorologist’s face worthy of a point-by-point debunking video.
There is nothing particularly noteworthy about Breitbart screwing up climate science, but the House Science Committee is among the most important scientific oversight bodies in the country. Since Texas Republican Lamar Smith took over its leadership in 2012, the Committee has spiraled down an increasingly anti-science rabbit hole: absurd hearings aimed at debunking consensus on global warming, outright witch hunts using the Committee’s subpoena power to intimidate scientists, and a Republican membership that includes some of the most anti-science lawmakers in the land.
The GOP’s shenanigans get the headlines, but what about the other side of the aisle? What is it like to be a member of Congress and sit on a science committee that doesn’t seem to understand science? What is it like to be an adult in a room full of toddlers? I asked some of the adults.
“I think it’s completely embarrassing,” said Marc Veasey, who represents Texas’s 33rd district, including parts of Dallas and Fort Worth. “You’re talking about something that 99.9 percent—if not 100 percent—of people in the legitimate science community says is a threat….To quote Breitbart over some of the most brilliant people in the world—and those are American scientists—and how they see climate change, I just think it’s a total embarrassment.”
Paul Tonko, who represents a chunk of upstate New York that includes Albany, has also called it embarrassing. “It is frustrating when you have the majority party of a committee pushing junk science and disproven myths to serve a political agenda,” he said. “It’s not just beneath the dignity of the Science Committee or Congress as a whole, it’s inherently dangerous. Science and research seek the truth—they don’t always fit so neatly with agendas.”
“I think it’s completely embarrassing.”
Suzanne Bonamici, of Oregon’s 1st District, also called it frustrating “to say the least” that the Committee “is spending time questioning climate researchers and ignoring the broad scientific consensus.” California Rep. Eric Swalwell called it the “Science” Committee in an email, and made sure I noted the air quotes. He said that in Obama’s first term, the Committee helped push forward on climate change and a green economy. “For the last four years, however, being on the Committee has meant defending the progress we’ve made.”
Frustration, embarrassment, a sense of Sisyphean hopelessness—this sounds like a grim gig. And Veasey also said that he doesn’t have much hope for a change in the Science Committee’s direction, because that change would have to come from the chairman. Smith has received hundreds of thousands of dollars in campaign support from the oil and gas industry over the years, and somehow finds himself in even greater climate change denial than ExxonMobil.
And of course, it isn’t just the leadership. The League of Conservation Voters maintains a scorecard of every legislator in Congress: for 2015, the most recent year available, the average of all the Democratic members on the science committee is 92.75 percent (with 100 being a perfect environment-friendly score). On the GOP side of the aisle, the average is just over three percent.
(I reached out to a smattering of GOP members of the Committee to get their take on its recent direction. None of them responded.)
Bill Foster, who represents a district including some suburbs of Chicago, is the only science PhD in all of Congress (“I very often feel lonely,” he said, before encouraging other scientists to run for office). “Since I made the transition from science into politics not so long ago, I’ve become very cognizant of the difference between scientific facts, and political facts,” he said. “Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.” Witness the 52 percent of Republicans who currently believe Trump won the popular vote, and you get the idea.
I’m not sure “climate change isn’t happening” has reached that “political fact” level, though Smith and his ilk have done their damnedest. Recent polls suggest most Americans do understand the issue, and more and more they believe the government should act aggressively to tackle it.
“Political facts can be established by repeating over and over something that is demonstrably false, then if it comes to be accepted by enough people it becomes a political fact.”
That those in charge of our government disagree so publicly and strongly now has scientists terrified. “This has a high profile,” Foster said, “because if there is any committee in Congress that should operate on the basis of scientific truth, it ought to be the Science, Space, and Technology committee—so when it goes off the rails, then people notice.”
The odds of the train jumping back on the rails over the next four years appear slim. Policies that came from the Obama White House, like the Clean Power Plan, are obviously on thin ice with a Trump administration, and without any sort of check on Smith and company it is hard to say just how pro-fossil fuel, anti-climate the committee could really get.
In the face of all that, what is a sane member of Congress to do? Elizabeth Esty, who represents Connecticut’s 5th district, was among several Committee members to note that in spite of the disagreements on climate, she has managed to work with GOP leadership on other scientific issues. Rep. Swalwell said he will try and focus on bits of common ground, like the jobs that come with an expanding green economy. Rep. Veasey said his best hope is that some strong conservative voices from outside of Congress might start to make themselves heard by the Party’s upper echelons on climate and related issues.
An ugly and dire scenario, then, but the Democrats all seem to carry at least a glimmer of hope. “It’s certainly frustrating and concerning but I’m an optimist,” Esty said. “I wouldn’t run for this job if I weren’t.”
Dave Levitan is a science journalist, and author of the book Not A Scientist: How politicians mistake, misrepresent, and utterly mangle science. Find him on Twitter and at his website.
After all, that’s what we learned from the bankruptcy filings of two other major U.S. coal companies, Arch Coal and Alpha Natural Resources. The companies’ lists of creditors accompanying their chapter 11 bankruptcy filings both cited known climate science deniers. So far, the bankruptcy cases have not revealed the details of these financial relationships. But there is now no doubt the coal companies contracted with these groups and individuals to either make a donation or pay for services.
Recent bankruptcy filings have revealed that Chris Horner, who regularly derides climate science on Fox News Channel, has financial ties to the coal industry.
This new evidence is important at a time when coal and oil and gas companies are under increased scrutiny about their ongoing climate science disinformation campaigns. ExxonMobil, for example, currently faces state and possibly federal investigations into whether the discrepancies between what the company knew about climate science and what it told their shareholders and the public amounted to fraud.
Of course, there’s no shortage of historical evidence of the coal industry’s track record of deceiving the public about global warming. In 1991, for example, coal trade associations formed a short-lived front group called the Information Council on the Environment that ran a national public relations campaign downplaying the known risks of climate change. All through the 1990s, coal trade groups also were members of the Global Climate Coalition, an alliance of companies and business groups that disputed the findings of the U.N. Intergovernmental Panel on Climate Change (IPCC) and, later on, helped scuttle the Kyoto Protocol climate treaty. And, more recently, the American Coalition for Clean Coal Electricity paid a lobbying firm to send forged letters to members of Congress from actual nonprofit groups, including the NAACP and the American Association of University Women, espousing fabricated opposition to a 2009 climate change bill.
But such coal company connections have been harder to pin down in the current era of so-called dark money. That’s what makes the latest disclosures so noteworthy: They indicate that coal industry disinformation campaigns have continued even as the scientific evidence that burning fossil fuels is driving climate change has only become stronger.
Revealing Creditor Lists
The creditor list for Alpha Natural Resources—which filed for bankruptcy last August—indicates that the company has been especially active in supporting the denier network. As first reported by The Intercept, Alpha—the fourth largest U.S. coal company—has financial ties with a half dozen denier organizations, some which have direct links to billionaire brothers Charles and David Koch, owners of the coal, oil and gas conglomerate Koch Industries. The Koch-affiliated groups include Americans for Prosperity, the Institute for Energy Research and Freedom Partners Chamber of Commerce, a de facto Koch bank that disburses donations from anonymous, wealthy conservatives to groups that advocate rolling back public health, environmental and workplace protections.
Other Alpha creditors include the U.S. Chamber of Commerce, which questions the legitimacy of climate models; the Heartland Institute, which is probably best known for its billboard likening climate scientists to the serial killer Ted Kaczynski; and the American Legislative Exchange Council (ALEC), which convenes conferences for its state legislator members featuring speakers who distort climate science and disparage renewable energy. One of the speakers at a summer 2014 ALEC conference, for example, was Heartland Institute President Joe Bast, whose slide presentation falsely claimed: “There is no scientific consensus on the human role in climate change” and “The Intergovernmental Panel on Climate Change … is not a credible source of science or economics.”
The Alpha creditor list also includes at least two individuals with links to denier groups. Particularly noteworthy is Chris Horner, an attorney who is closely associated with a number of nonprofit denier groups, including ALEC, the Competitive Enterprise Institute (CEI), the Heartland Institute, the Energy & Environmental Legal Institute (E&E Legal), formerly the American Tradition Institute, and the Free Market Environmental Law Clinic, another Alpha creditor.
Arch Coal, the second largest U.S. coal company, listed ALEC and E&E Legal in its list of creditors when it filed for chapter 11 protection in January. Just last month, the Wall Street Journal reported that the company donated $10,000 to E&E Legal in 2014. E&E Legal’s executive director, Craig Richardson, told the Journal the contribution was for “general support.”
Chris Horner’s Coal Ties Disclosed
The exposure of Horner’s financial ties to coal companies is significant because he is a regular guest on Fox News Channel, which identifies him by his affiliation with CEI or E&E Legal but not by his connection to the coal industry.
Despite his lack of scientific expertise, Horner routinely critiques scientific findings, has called for spurious investigations of climate scientists affiliated with the IPCC and the National Aeronautics and Space Administration and has harassed scientists by filing intrusive open records requests with the universities where they work. As legal counsel for the Energy & Environmental Legal Institute and the Free Market Environmental Law Clinic—which work in tandem—Horner has targeted a number of leading climate scientists, including James Hansen and Katharine Hayhoe. Perhaps his most notorious lawsuit was against the University of Virginia to obtain emails, draft research papers, handwritten notes and other documents related to the work of Michael Mann, lead author of the famous “hockey stick” study demonstrating the link between increased fossil fuel use and rising global temperatures. The Virginia Supreme Court ultimately ruled in favor of the university and Mann, affirming the school’s right to protect the privacy of its researchers from overly broad open records requests.
According to the Wall Street Journal, Alpha paid Horner $18,600 before it declared bankruptcy. Meanwhile, the Free Market Environmental Law Clinic—an Alpha creditor—paid him $110,000 in 2014, $115,865 in 2013 and $60,449 in 2012, according to the clinic’s tax filings.
Besides Alpha and Arch Coal, Horner has ties to other coal companies. Last summer, he was a featured speaker at a private $7,500-a-person golf and fly-fishing retreat sponsored by Alpha, Arch Coal and four other coal companies: Alliance Resource Partners, Consol Energy, Drummond and United Coal. After the event—the 2015 annual Coal & Investment Leadership Forum—attendees received an email from the coal company CEOs praising Horner, according to the Center for Media and Democracy, a nonpartisan political watchdog group that first reported the connection between Arch Coal and E&E Legal. “As the ‘war on coal’ continues,” the email stated, “I trust that the commitment we have made to support Chris Horner’s work will eventually create a greater awareness of the illegal tactics being employed to pass laws that are intended to destroy our industry.”
Given the recent spate of bankruptcies, the companies’ commitment to Horner likely will create a greater awareness of something quite different: that the coal industry—along with the likes of ExxonMobil and Koch Industries—is still funding denier groups to spread disinformation about climate science and delay government action. It is time we held these companies accountable.
Last week, Funceme updated its forecast for the rainy season, which runs through May in the region that includes Ceará. On a day of heavy rain in the capital, it reaffirmed a probability of around 70% that rainfall will be below average.
That is severe drought. It is reason to demand action from the public authorities and to commit to social mobilization for an unfavorable scenario.
For the first time, the volume of the Castanhão reservoir, the main supplier of water to the Fortaleza Metropolitan Region, has fallen below 10%.
Yet the reaction, by and large, has been limited to skepticism about Funceme’s forecasts. There is no shortage of pejorative comments, jokes and irony, a kind of culture that takes hold whenever the subject is the institution, which, beyond meteorology, also works on the environment and water resources.
I think this attitude can be attributed to forecasting imprecision, which does indeed occur, to the political use of information, as has happened in the past, or simply to ignorance. But it bothers me. Meteorology deals with complex global parameters, such as air and ocean temperatures, wind speed and direction, humidity, atmospheric pressure, and phenomena like El Niño… The reliability of meteorologists’ forecasts has advanced considerably, with data from satellites, weather balloons and a good deal more technological apparatus feeding complex mathematical models that sketch out probabilities, not certainties.
Mistakes happen, here as in the rest of the world. But the information generated has profound social, economic, scientific and cultural impact and is essential to public and private decision-making. It is something no manager or community can do without, especially in a region like ours, vulnerable to climate variability and dependent on rain. We need a change of mindset about Funceme’s work. I mean genuine respect for what is dear and fundamentally necessary to us.
Incidentally, it is a long shot, but I hope nature defies the forecast and enough rain falls to guarantee a minimum of water security, productivity and dignity to a Ceará that depends so much on the climate information Funceme produces.
Biosafety Law turns 10 in dialogue with the latest scientific discoveries
Walter Colli – Instituto de Química, Universidade de São Paulo
Throughout 2015, a quiet biotechnology revolution took place in Brazil. That year, the National Technical Commission on Biosafety (CTNBio) reviewed and approved a record number of technologies applicable to agriculture, medicine and energy production. Through the careful work of its members, CTNBio assessed 19 new transgenic products as safe for human and animal health and for the environment, among them 13 plants, three vaccines and three microorganisms or derivatives.
Prioritizing rigor in biosafety analysis and mindful of the need to produce food more sustainably, CTNBio approved last year varieties of soybean, corn and cotton tolerant to herbicides with different modes of action. This will allow the seeds to develop their full potential and give Brazilian producers one more option for rotating technologies in weed management. Without this technological tool, farmers would remain hostage to the limitations imposed by invasive plants. Insect-resistance technologies provide similar benefits.
In health, the revolution concerns methods of fighting diseases endemic to tropical regions. Once again acting as a partner to society, CTNBio assessed the biosafety of two recombinant dengue vaccines under an urgent procedure and issued a favorable opinion on them. Added to these efforts is the approval of the transgenic Aedes aegypti. The genetically modified mosquito, approved in 2014, has proven an ally in the fight against the insect that, besides being the dengue vector, is also associated with transmission of the Zika and Chikungunya viruses and yellow fever.
Over the past 10 years, the new CTNBio created by Law 11,105 of 2005 – the Biosafety Law – has enabled the commercial approval of 82 Genetically Modified Organisms (GMOs): 52 plant events; 20 veterinary vaccines; 7 microorganisms; 1 Aedes aegypti mosquito; and 2 human vaccines against dengue. These commercial approvals are the strongest proof that Brazil draws on innovation to find solutions to contemporary challenges.
However, it must be stressed that, as in previous years, matters unrelated to science also stood in the way of biotechnology’s development in 2015. Anti-science protesters invaded laboratories and destroyed seven years of research on transgenic eucalyptus, and anti-GMO groups went so far as to interrupt CTNBio meetings, knocking down doors in violent actions. Various falsehoods were published in an attempt to cast doubt on the safety of transgenics and the contributions they have been making to society. The actions of these groups are worrying because, if their ideology prevails, both Brazilian scientific progress and Brazil’s GDP will be irreversibly harmed.
Today, our Biosafety Law is regarded internationally as a model of balance between rigor in technical analysis and the institutional predictability needed for investment. Global recognition, dialogue with society and the legitimacy of its technical criteria show that these 10 years are only the beginning of a long history of development and innovation in Brazil.
Pandora’s box: how GM mosquitos could have caused Brazil’s microcephaly disaster (The Ecologist)
1st February 2016
Aedes Aegypti mosquito feeding on human blood. This is the species that transmits Zika, and that was genetically engineered by Oxitec using the piggyBac transposon. Photo: James Gathany via jentavery on Flickr (CC BY).
In Brazil’s microcephaly epidemic, one vital question remains unanswered: how did the Zika virus suddenly learn how to disrupt the development of human embryos? The answer may lie in a sequence of ‘jumping DNA’ used to engineer the virus’s mosquito vector – and released into the wild four years ago in the precise area of Brazil where the microcephaly crisis is most acute.
These ‘promiscuous’ transposons have found special favour with genetic engineers, whose goal is to create ‘universal’ systems for transferring genes into any and every species on earth. Almost none of the geneticists has considered the hazards involved.
Since August 2015, a large number of babies in Northeast Brazil have been born with very small heads, a condition known as microcephaly, and with other serious malformations. 4,180 suspected cases have been reported.
Epidemiologists have found a convincing correlation between the incidence of the natal deformities and maternal infections with the Zika virus, first discovered in Uganda’s Zika Valley in 1947, which normally produces non-serious illness.
The correlation has been evidenced through the geographical distribution of Zika infections and the wave of deformities. Zika virus has also been detected in the amniotic fluids and other tissues of the affected babies and their mothers.
This latter finding was recently reported by AS Oliveira Melo et al in a scientific paper published in the journal Ultrasound in Obstetrics & Gynecology, which noted evidence of intra-uterine infection. They also warn:
“As with other intrauterine infections, it is possible that the reported cases of microcephaly represent only the more severely affected children and that newborns with less severe disease, affecting not only the brain but also other organs, have not yet been diagnosed.”
The Brazilian Health Minister, Marcelo Castro, says he has “100% certainty” that there is a link between Zika and microcephaly. His view is supported by the medical community worldwide, including by the US Center for Disease Control.
Oliveira Melo et al draw attention to a mystery that lies at the heart of the affair: “It is difficult to explain why there have been no fetal cases of Zika virus infection reported until now but this may be due to the underreporting of cases, possible early acquisition of immunity in endemic areas or due to the rarity of the disease until now.
“As genomic changes in the virus have been reported, the possibility of a new, more virulent, strain needs to be considered. Until more cases are diagnosed and histopathological proof is obtained, the possibility of other etiologies cannot be ruled out.”
And this is the key question: how – if indeed Zika really is the problem, as appears likely – did this relatively innocuous virus acquire the ability to produce these terrible malformations in unborn human babies?
Oxitec’s GM mosquitoes
An excellent article by Claire Bernish published last week on AntiMedia draws attention to an interesting aspect of the matter which has escaped mainstream media attention: the correlation between the incidence of Zika and the area of release of genetically modified Aedes aegypti mosquitos engineered for male sterility (see maps, above right).
The purpose of the release was to see if it controlled the population of the mosquitos, which are the vector of Dengue fever, a potentially lethal disease. The same species also transmits the Zika virus.
The releases took place in 2011 and 2012 in the Itaberaba suburb of the city of Juazeiro, Bahia, Northeast Brazil, about 500 km west of the coastal city of Recife. The experiment was written up in July 2015 in the journal PLOS Neglected Tropical Diseases in a paper titled ‘Suppression of a Field Population of Aedes aegypti in Brazil by Sustained Release of Transgenic Male Mosquitoes’ by Danilo O. Carvalho et al.
An initial ‘rangefinder’ release of 30,000 GM mosquitos per week took place between 19th May and 29th June 2011, followed by a much larger release of 540,000 per week in early 2012, ending on 11th February.
At the end of it the scientists claimed “effective control of a wild population of Ae. aegypti by sustained releases of OX513A male Ae. aegypti. We diminished Ae. aegypti population by 95% (95% CI: 92.2%-97.5%) based on adult trap data and 78% (95% CI: 70.5%-84.8%) based on ovitrap indices compared to the adjacent no-release control area.”
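The percentages quoted are relative to the adjacent no-release control area, i.e. a ratio of trap indices rather than a simple before/after count. A minimal sketch of that arithmetic (the trap counts below are hypothetical, chosen only to illustrate the calculation, not taken from the paper):

```python
# Illustrative calculation of suppression relative to a control area.
# The counts here are invented; the actual study used adult traps and
# ovitraps monitored across the whole release period.

def suppression(treated_count: float, control_count: float) -> float:
    """Percent reduction in the release area relative to the control area."""
    return 100.0 * (1.0 - treated_count / control_count)

# e.g. 12 adults trapped per week in the release area vs 240 in the control
print(f"{suppression(12, 240):.0f}% suppression")  # prints "95% suppression"
```

The confidence intervals reported in the paper would come from the sampling variability of those trap counts, which this sketch does not model.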
So what’s to worry about?
The idea of the Oxitec mosquitoes is simple enough: the males produce non-viable offspring which all die. So the GM mosquitoes are ‘self-extinguishing’ and the altered genes cannot survive in the wild population. All very clever, and nothing to worry about!
The genetic engineering method employed by Oxitec allows the popular antibiotic tetracycline to be used to repress the lethality during breeding. But as a side-effect, the lethality is also reduced by the presence of tetracycline in the environment; and as Bernish points out, Brazil is among the world’s biggest users of anti-microbials including tetracycline in its commercial farming sector:
“As a study by the American Society of Agronomy, et. al., explained, ‘It is estimated that approximately 75% of antibiotics are not absorbed by animals and are excreted in waste.’ One of the antibiotics (or antimicrobials) specifically named in that report for its environmental persistence is tetracycline.
In fact, as a confidential internal Oxitec document divulged in 2012, that survival rate could be as high as 15% – even with low levels of tetracycline present. ‘Even small amounts of tetracycline can repress’ the engineered lethality. Indeed, that 15% survival rate was described by Oxitec.”
She then quotes the leaked Oxitec paper: “After a lot of testing and comparing experimental design, it was found that [researchers] had used a cat food to feed the [OX513A] larvae and this cat food contained chicken. It is known that tetracycline is routinely used to prevent infections in chickens, especially in the cheap, mass produced, chicken used for animal food. The chicken is heat-treated before being used, but this does not remove all the tetracycline. This meant that a small amount of tetracycline was being added from the food to the larvae and repressing the [designed] lethal system.”
So in other words, there is every possibility for Oxitec’s modified genes to persist in wild populations of Aedes aegypti mosquitos, especially in the environmental presence of tetracycline which is widely present in sewage, septic tanks, contaminated water sources and farm runoff.
‘Promiscuous’ jumping genes
On the face of it, there is no obvious way in which the spread of Oxitec’s GM mosquitos into the wild could have anything to do with Brazil’s wave of microcephaly. Is there?
Actually, yes. The problem may arise from the use of the ‘transposon’ (a ‘jumping’ sequence of DNA used in the genetic engineering process to introduce the new genes into the target organism). There are several such DNA sequences in use, and one of the most popular is known as piggyBac.
As a 2001 review article by Dr Mae Wan Ho shows, piggyBac is notoriously active, inserting itself into genes way beyond its intended target: “These ‘promiscuous’ transposons have found special favour with genetic engineers, whose goal is to create ‘universal’ systems for transferring genes into any and every species on earth. Almost none of the geneticists has considered the hazards involved …
“It would seem obvious that integrated transposon vectors may easily jump out again, to another site in the same genome, or to the genome of unrelated species. There are already signs of that in the transposon, piggyBac, used in the GM bollworms to be released by the USDA this summer.
The piggyBac transposon was discovered in cell cultures of the moth Trichopulsia, the cabbage looper, where it caused high rates of mutations in the baculovirus infecting the cells by jumping into its genes … This transposon was later found to be active in a wide range of species, including the fruitfly Drosophila, the mosquito transmitting yellow fever, Aedes aegypti, the medfly, Ceratitis capitata, and the original host, the cabbage looper.
“The piggyBac vector gave high frequencies of transpositions, 37 times higher than mariner and nearly four times higher than Hirmar.”
In a later 2014 report Dr Mae Wan Ho returned to the theme with additional detail and fresh scientific evidence (please refer to her original article for references): “The piggyBac transposon was discovered in cell cultures of the moth Trichopulsia, the cabbage looper, where it caused high rates of mutations in the baculovirus infecting the cells by jumping into its genes …
“There is also evidence that the disabled piggyBac vector carrying the transgene, even when stripped down to the bare minimum of the border repeats, was nevertheless able to replicate and spread, because the transposase enzyme enabling the piggyBac inserts to move can be provided by transposons present in all genomes.
“The main reason initially for using transposons as vectors in insect control was precisely because they can spread the transgenes rapidly by ‘non-Mendelian’ means within a population, i.e., by replicating copies and jumping into genomes, thereby ‘driving’ the trait through the insect population. However, the scientists involved neglected the fact that the transposons could also jump into the genomes of the mammalian hosts including human beings …
“In spite of instability and resulting genotoxicity, the piggyBac transposon has been used extensively also in human gene therapy. Several human cell lines have been transformed, even primary human T cells using piggyBac. These findings leave us little doubt that the transposon-borne transgenes in the transgenic mosquito can transfer horizontally to human cells. The piggyBac transposon was found to induce genome wide insertion mutations disrupting many gene functions.”
Has the GM nightmare finally come true?
So down to the key question: was Oxitec’s GM Aedes aegypti male-sterile mosquito released in Juazeiro engineered with the piggyBac transposon? Yes, it was. And that creates a highly significant possibility: that Oxitec’s release of its GM mosquitos led directly to the development of Brazil’s microcephaly epidemic through the following mechanism:
1. Many of the millions of Oxitec GM mosquitos released in Juazeiro in 2011/2012 survive, assisted by, but not dependent on, the presence of tetracycline in the environment.
2. These mosquitos interbreed with the wild population and their novel genes become widespread.
3. The promiscuous piggyBac transposon now present in the local Aedes aegypti population takes the opportunity to jump into the Zika virus, probably on numerous occasions.
4. In the process certain mutated strains of Zika acquire a selective advantage, making them more virulent and giving them an enhanced ability to enter and disrupt human DNA.
5. One way in which this manifests is by disrupting a key stage in the development of human embryos in the womb, causing microcephaly and the other reported deformations. Note that as Oliveira Melo et al warn, there are almost certainly other manifestations that have not yet been detected.
6. It may be that the piggyBac transposon has itself entered the DNA of babies exposed in utero to the modified Zika virus. Indeed, this may form part of the mechanism by which embryonic development is disrupted.
In the latter case, one implication is that the action of the gene could be blocked by giving pregnant women tetracycline. The chances of success are probably low, but it has to be worth trying.
No further releases of GM insects!
While I am certainly not claiming that this is what actually took place, it is at least a credible hypothesis, and moreover a highly testable one. Nothing would be easier for genetic engineers than to test amniotic fluids, babies’ blood, wild Aedes mosquitos and the Zika virus itself for the presence of the piggyBac transposon, using well established and highly sensitive PCR (polymerase chain reaction) techniques.
If this proves to be the case, those urging caution on the release of GMOs generally, and transgenic insects bearing promiscuous transposons in particular, will have been proved right on all counts.
But most important, such experiments, and any deployment of similar GM insects, must be immediately halted until the possibilities outlined above can be safely ruled out. There are plans, for example, to release similarly modified Anopheles mosquitos as an anti-malarial measure.
There are also calls for even more of the Oxitec Aedes aegypti mosquitos to be released in order to halt the transmission of the Zika virus. If that were to take place, it could give rise to numerous new mutations of the virus with the potential to cause even more damage to the human genome, which we can, at this stage, only guess at.
The Zika virus is a flavivirus closely related to notorious pathogens including dengue, yellow fever, Japanese encephalitis, and West Nile virus. The virus is transmitted by mosquitoes in the genus Aedes, especially A. aegypti, which is a known vector for many of Zika’s relatives. Symptoms of the infection appear three to twelve days post bite. Most people are asymptomatic, which means they show no signs of infection. The vast majority of those who do show signs of infection report fever, rash, joint pain, and conjunctivitis (red eyes), according to the U.S. Centers for Disease Control. After a week or less, the symptoms tend to go away on their own. Serious complications have occurred, but they have been extremely rare.
The Zika virus isn’t new. It was first isolated in 1947 from a Rhesus monkey in the Zika Forest in Uganda, hence the pathogen’s name. The first human cases were confirmed in Uganda and Tanzania in 1952, and by 1968, the virus had spread to Nigeria. But since then, the virus has found its way out of Africa. The first major outbreak occurred on the island of Yap in Micronesia for 13 weeks in 2007, during which 185 Zika cases were suspected (49 of those were confirmed, with another 59 considered probable). Then, in October 2013, an outbreak began in French Polynesia; around 10,000 cases were reported, less than 100 of which presented with severe neurological or autoimmune complications. One confirmed case of autochthonous transmission occurred in Chile in 2014, which means a person was infected while they were in Chile rather than somewhere else. Cases were also reported that year from several Pacific Islands. The virus was detected in Chile until June 2014, but then it seemed to disappear.
Fast forward to May 2015, when the Pan American Health Organization (PAHO) issued an alert regarding the first confirmed Zika virus infection in Brazil. Since then, several thousand suspected cases of the disease and a previously unknown complication—a kind of birth defect known as microcephaly where the baby’s brain is abnormally small—have been reported from Brazil. (It’s important to note that while the connection between the virus and microcephaly is strongly suspected, the link has yet to be conclusively demonstrated.)
The recent spread of the virus has been described as “explosive”; Zika has now been detected in 25 countries and territories. The rising concern over both the number of cases and reports of serious complications has led the most affected areas in Brazil to declare a state of emergency, and on Monday, the World Health Organization’s Director-General will convene an International Health Regulations Emergency Committee on Zika virus and the observed increase in neurological disorders and neonatal malformations. At this emergency meeting, the committee will discuss mitigation strategies and decide whether the organization will officially declare the virus a “Public Health Emergency of International Concern.”
GM to the Rescue
The mosquito to blame for the outbreak—Aedes aegypti—doesn’t belong in the Americas. It’s native to Africa, and was only introduced in the new world when Europeans began to explore the globe. In the 20th century, mosquito control programs nearly eradicated the unwelcome menace from the Americas (largely thanks to the use of the controversial pesticide DDT); as late as the mid 1970s, Brazil and 15 other nations were Aedes aegypti-free. But despite the successes, eradication efforts were halted, allowing the mosquito to regain its lost territory.
Effective control measures are expensive and difficult to maintain, so at the tail end of the 20th century and into the 21st, scientists began to explore creative means of controlling mosquito populations, including the use of genetic modification. Oxitec’s mosquitoes are one of the most exciting technologies to have emerged from this period. Here’s how they work, as I described in a post almost exactly a year ago:
While these mosquitoes are genetically modified, they aren’t “cross-bred with the herpes simplex virus and E. coli bacteria” (that would be an interkingdom ménage à trois!)—and no, they cannot be “used to bite people and essentially make them immune to dengue fever and chikungunya” (they aren’t carrying a vaccine!). The mosquitoes that Oxitec have designed are what scientists call “autocidal” or possess a “dominant lethal genetic system,” which is mostly fancy wording for “they die all by themselves”. The males carry inserted DNA which causes the mosquitoes to depend upon a dietary supplement that is easy to provide in the lab, but not available in nature. When the so-called mutants breed with normal females, all of the offspring require the missing dietary supplement because the suicide genes passed on from the males are genetically dominant. Thus, the offspring die before they can become adults. The idea is, if you release enough such males in an area, then the females won’t have a choice but to mate with them. That will mean there will be few to no successful offspring in the next generation, and the population is effectively controlled.
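The population logic described above can be sketched as a toy model: each generation, wild females mate at random among wild and released males, and matings with released males leave no surviving offspring. This is a deliberate simplification (discrete generations, a fixed release number, no migration or density effects, a 1:1 sex ratio), not Oxitec’s actual model:

```python
# Toy model of a self-limiting ("dominant lethal") release programme.
# Assumptions: discrete generations, random mating, matings with released
# GM males produce no surviving offspring. Purely illustrative.

def simulate(wild_females: float, wild_males: float,
             release_per_generation: float,
             generations: int = 10) -> list[float]:
    """Track the wild female population over successive generations."""
    history = [wild_females]
    for _ in range(generations):
        # Fraction of matings that involve a wild (fertile) male
        fertile_fraction = wild_males / (wild_males + release_per_generation)
        wild_females = wild_females * fertile_fraction
        wild_males = wild_females  # assume a 1:1 sex ratio next generation
        history.append(wild_females)
    return history

pop = simulate(wild_females=1000, wild_males=1000, release_per_generation=10000)
# With releases 10x the wild male population, only ~9% of first-generation
# matings are fertile, so the modelled population collapses within a few steps.
```

Even this crude sketch shows why the ratio of released to wild males matters: suppression comes from swamping the mating pool, not from any effect on the females themselves.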
Male mosquitoes don’t bite people, so they cannot serve as transmission vectors for Zika or any other disease. As for fears that GM females will take over: less than 5% of all offspring survive in the laboratory, and as Glen Slade, director of Oxitec’s Brazilian branch notes, those are the best possible conditions for survival. “It is considered unlikely that the survival rate is anywhere near that high in the harsher field conditions since offspring reaching adulthood will have been weakened by the self-limiting gene,” he told me. And contrary to what the conspiracy theorists claim, scientists have shown that tetracycline in the environment doesn’t increase that survival rate.
Brazil, a hotspot for dengue and other such diseases, is one of the countries where Oxitec is testing their mozzies—so far, everywhere that Oxitec’s mosquitoes have been released, the local populations have been suppressed by about 90%.
Wrong Place, Wrong Time
Now that we’ve covered the background on the situation, let’s dig into the conspiracy theory. We’ll start with the main argument laid out as evidence: that the Zika outbreak began in the same location at the same time as the first Oxitec release:
Though it’s often said, it’s worth repeating: correlation doesn’t equal causation. If it did, then Nicolas Cage is to blame for people drowning (Why, Nick? WHY?). But even beyond that, there are bigger problems with this supposed correlation: even by those maps, the site of release is on the fringe of the Zika hotspot, not the center of it. Just look at the two overlaid:
The epicenter of the outbreak and the release clearly don’t line up—the epicenter is on the coast rather than inland where the map points. Furthermore, the first confirmed cases weren’t reported in that area, but in the town of Camaçari, Bahia, which is—unsurprisingly—on the coast and several hundred kilometers from the release site indicated.
But perhaps more importantly, the location on the map isn’t where the mosquitoes were released. That map points to Juazeiro do Norte, Ceará, which is a solid 300 km away from Juazeiro, Bahia—the actual site of the mosquito trial. That location is even more on the edge of the Zika-affected area:
The mistake was made initially by the Redditor who proposed the conspiracy theory and has been propagated through lazy journalistic practices by every proponent since. Here’s a quick tip: if you’re basing your conspiracy theory on location coincidence, it’s probably a good idea to actually get the location right.
By July 2015, shortly after the GM mosquitoes were first released into the wild in Juazeiro, Brazil, Oxitec proudly announced they had “successfully controlled the Aedes aegypti mosquito that spreads dengue fever, chikungunya and zika virus, by reducing the target population by more than 90%.”
A new control effort employing Oxitec mosquitoes did begin in April 2015, but not in Juazeiro, or any of the northeastern states of Brazil where the disease outbreak is occurring. As another press release from Oxitec states, the 2015 releases of their GM mosquitoes were in Piracicaba, São Paulo, Brazil:
Following approval by Brazil’s National Biosafety Committee (CTNBio) for releases throughout the country, Piracicaba’s CECAP/Eldorado district became the world’s first municipality to partner directly with Oxitec and in April 2015 started releasing its self-limiting mosquitoes whose offspring do not survive. By the end of the calendar year, results had already indicated a reduction in wild mosquito larvae by 82%. Oxitec’s efficacy trials across Brazil, Panama and the Cayman Islands all resulted in a greater than 90% suppression of the wild Ae. aegypti mosquito population–an unprecedented level of control.
Based on the positive results achieved to date, the ‘Friendly Aedes aegypti Project’ in CECAP/Eldorado district covering 5,000 people has been extended for another year. Additionally, Oxitec and Piracicaba have signed a letter of intent to expand the project to an area of 35,000-60,000 residents. This geographic region includes the city’s center and was chosen due to the large flow of people commuting between it and surrounding neighborhoods which may contribute to the spread of infestations and infections.
Piracicaba, for the record, is more than 1300 miles away from the Zika epicenter:
So not only did the conspiracy theorists get the location of the first Brazil release wrong, they either got the date wrong, too, or got the location of the 2015 releases really, really off. Either way, the central argument that the release of GM mosquitoes by Oxitec coincides with the first cases of Zika virus simply doesn’t hold up.
Scientists Speak Out
As this ludicrous conspiracy theory has spread, so, too, has the scientific opposition to it. “Frankly, I’m a little sick of this kind of anti-science platform,” said vector ecologist Tanjim Hossain from the University of Miami, when I asked him what he thought. “This kind of fear mongering is not only irresponsible, but may very well be downright harmful to vulnerable populations from a global health perspective.”
Despite the specious allusions made by proponents of the conspiracy, this is still not Jurassic Park, says Hossain.
“We have a problem where ZIKV is spreading rapidly and is widely suspected of causing serious health issues,” he continued. “How do we solve this problem? An Integrated Vector Management (IVM) approach is key. We need to use all available tools, old and new, to combat the problem. GM mosquitoes are a fairly new tool in our arsenal. The way I see it, they have the potential to quickly reduce a local population of vector mosquitoes to near zero, and thereby can also reduce the risk of disease transmission. This kind of strategy could be particularly useful in a disease outbreak ‘hotspot’ because you could hypothetically stop the disease in its tracks so to speak.”
Other scientists have shared similar sentiments. Alex Perkins, a biological science professor at Notre Dame, told Business Insider that rather than causing the outbreak, GM mosquitoes might be our best chance to fight it. “It could very well be the case that genetically modified mosquitos could end up being one of the most important tools that we have to combat Zika,” Perkins said. “If anything, we should potentially be looking into using these more.”
Brazilian authorities couldn’t be happier with the results so far, and are eager to continue to fight these deadly mosquitoes by any means they can. “The initial project in CECAP/Eldorado district clearly showed that the ‘friendly Aedes aegypti solution’ made a big difference for the inhabitants of the area, helping to protect them from the mosquito that transmits dengue, Zika and chikungunya,” said Pedro Mello, secretary of health in Piracicaba. He notes that during the 2014/2015 dengue season, before the trial there began, there were 133 cases of dengue. “In 2015/2016, after the beginning of the Friendly Aedes aegypti Project, we had only one case.”
It’s long past time to stop villainizing Oxitec’s mosquitoes for crimes they didn’t commit. Claire Bernish, The Daily MFail, Mirror and everyone else who has spread these baseless accusations: I’m talking to you. The original post was in the Conspiracy subreddit—what more of a red flag for “this is wildly inaccurate bullsh*t” do you need? (After all, if this is a legit source, where are your reports on the new hidden messages in the $100 bill? Or why the Illuminati wants people to believe in aliens?) It’s well known that large-scale conspiracy theories are mathematically challenged. Don’t just post whatever crap is spewed on the internet because you know it’ll get you a few clicks. It’s dishonest, dangerous, and, frankly, deplorable to treat nonsense as possible truth just to prey upon your audience’s very real fears of an emerging disease. You, with your complete lack of integrity, are maggots feeding on the decay of modern journalism, and I mean that with no disrespect to maggots.
When your reasons are worse than useless, sometimes the most rational choice is a random stab in the dark
by Michael Schulson
Illustration by Tim McDonagh
Michael Schulson is an American freelance writer. His work has appeared in Religion Dispatches, The Daily Beast, and Religion and Politics, among others. He lives in Durham, North Carolina.
We could start with birds, or we could start with Greeks. Each option has advantages.
Let’s flip a coin. Heads and it’s the Greeks, tails and it’s the birds.
In the 1970s, a young American anthropologist named Michael Dove set out for Indonesia, intending to solve an ethnographic mystery. Then a graduate student at Stanford, Dove had been reading about the Kantu’, a group of subsistence farmers who live in the tropical forests of Borneo. The Kantu’ practise the kind of shifting agriculture known to anthropologists as swidden farming, and to everyone else as slash-and-burn. Swidden farmers usually grow crops in nutrient-poor soil. They use fire to clear their fields, which they abandon at the end of each growing season.
Like other swidden farmers, the Kantu’ would establish new farming sites every year in which to grow rice and other crops. Unlike most other swidden farmers, the Kantu’ choose where to place these fields through a ritualised form of birdwatching. They believe that certain species of bird – the Scarlet-rumped Trogon, the Rufous Piculet, and five others – are the sons-in-law of God. The appearances of these birds guide the affairs of human beings. So, in order to select a site for cultivation, a Kantu’ farmer would walk through the forest until he spotted the right combination of omen birds. And there he would clear a field and plant his crops.
Dove figured that the birds must be serving as some kind of ecological indicator. Perhaps they gravitated toward good soil, or smaller trees, or some other useful characteristic of a swidden site. After all, the Kantu’ had been using bird augury for generations, and they hadn’t starved yet. The birds, Dove assumed, had to be telling the Kantu’ something about the land. But neither he, nor any other anthropologist, had any notion of what that something was.
He followed Kantu’ augurers. He watched omen birds. He measured the size of each household’s harvest. And he became more and more confused. Kantu’ augury is so intricate, so dependent on slight alterations and is-the-bird-to-my-left-or-my-right contingencies that Dove soon found there was no discernible correlation at all between Piculets and Trogons and the success of a Kantu’ crop. The augurers he was shadowing, Dove told me, ‘looked more and more like people who were rolling dice’.
Stumped, he switched dissertation topics. But the augury nagged him. He kept thinking about it for ‘a decade or two’. And then one day he realised that he had been looking at the question the wrong way all the time. Dove had been asking whether Kantu’ augury imparted useful ecological information, as opposed to being random. But what if augury was useful precisely because it was random?
For the Kantu’, the best option was one familiar to any investor when faced with an unpredictable market: they needed to diversify
Tropical swidden agriculture is a fundamentally unpredictable enterprise. The success of a Kantu’ swidden depends on rainfall, pest outbreaks and river levels, among other factors. A patch of forest that might yield a good harvest in a rainy year could be unproductive in a drier year, or in a year when a certain pest spreads. And things such as pest outbreaks or the weather are pretty much impossible to predict weeks or months in the future, both for humans and for birds.
In the face of such uncertainty, though, the human tendency is to seek some kind of order – to come up with a systematic method for choosing a field site, and, in particular, to make decisions based on the conditions of the previous year.
Neither option is useful. Last year’s conditions have pretty much no bearing on events in the years ahead (a rainy July 2013 does not have any bearing on the wetness of July 2014). And systematic methods can be prey to all sorts of biases. If, for example, a Kantu’ farmer predicted that the water levels would be favourable one year, and so put all his fields next to the river, a single flood could wipe out his entire crop. For the Kantu’, the best option was one familiar to any investor when faced with an unpredictable market: they needed to diversify. And bird augury was an especially effective way to bring about that kind of diversification.
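The diversification logic can be sketched with a toy simulation. Everything here is an illustrative assumption (the flood probability, the number of fields), not Dove’s field data — the point is only the shape of the risk:

```python
import random

random.seed(42)  # deterministic toy run

def total_loss_rates(years=10000, fields=4, flood_prob=0.25):
    """Toy model: how often does a farmer lose *everything*?

    'concentrated' puts all fields by the river, so one flood event
    destroys the whole crop at once. 'diversified' scatters the fields,
    so each site fails independently and only a run of bad luck at
    every site wipes the farmer out.
    """
    losses = {"concentrated": 0, "diversified": 0}
    for _ in range(years):
        if random.random() < flood_prob:            # one shared flood event
            losses["concentrated"] += 1
        if all(random.random() < flood_prob         # every site must fail
               for _ in range(fields)):
            losses["diversified"] += 1
    return {k: v / years for k, v in losses.items()}

rates = total_loss_rates()
```

With these made-up numbers the concentrated farmer loses his entire crop in roughly a quarter of years, the diversified farmer in well under one per cent of them: the same per-site risk, but far less catastrophic variance. Randomised field placement buys exactly this kind of spread.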
It makes sense that it should have taken Dove some 15 years to realise that randomness could be an asset. As moderns, we take it for granted that the best decisions stem from a process of empirical analysis and informed choice, with a clear goal in mind. That kind of decision-making, at least in theory, undergirds the ways that we choose political leaders, play the stock market, and select candidates for schools and jobs. It also shapes the way in which we critique the rituals and superstitions of others. But, as the Kantu’ illustrate, there are plenty of situations when random chance really is your best option. And those situations might be far more prevalent in our modern lives than we generally admit.
Over the millennia, cultures have expended a great deal of time, energy and ingenuity in order to introduce some element of chance into decision-making. Naskapi hunters in the Canadian province of Labrador would roast the scapula of a caribou in order to determine the direction of their next hunt, reading the cracks that formed on the surface of the bone like a map. In China, people have long sought guidance in the passages of the I Ching, using the intricate manipulation of 49 yarrow stalks to determine which section of the book they ought to consult. The Azande of central Africa, when faced with a difficult choice, would force a powdery poison down a chicken’s throat, finding the answer to their question in whether or not the chicken survived – a hard-to-predict, if not quite random, outcome. (‘I found this as satisfactory a way of running my home and affairs as any other I know of,’ wrote the British anthropologist E E Evans-Pritchard, who adopted some local customs during his time with the Azande in the 1920s).
The list goes on. It could – it does – fill books. As any blackjack dealer or tarot reader might tell you, we have a love for the flip of the card. Why shouldn’t we? Chance has some special properties. It is a swift, consistent, and (unless your chickens all die) relatively cheap decider. Devoid of any guiding mind, it is subject to neither blame nor regret. Inhuman, it can act as a blank surface on which to descry the churning of fate or the work of divine hands. Chance distributes resources and judges disputes with perfect equanimity.
The sanitising effect of augury cleans out any bad reasons
Above all, chance makes its selection without any recourse to reasons. This quality is perhaps its greatest advantage, though of course it comes at a price. Peter Stone, a political theorist at Trinity College, Dublin, and the author of The Luck of the Draw: The Role of Lotteries in Decision Making (2011), has made a career of studying the conditions under which such reasonlessness can be, well, reasonable.
‘What lotteries are very good for is for keeping bad reasons out of decisions,’ Stone told me. ‘Lotteries guarantee that when you are choosing at random, there will be no reasons at all for one option rather than another being selected.’ He calls this the sanitising effect of lotteries – they eliminate all reasons from a decision, scrubbing away any kind of unwanted influence. As Stone acknowledges, randomness eliminates good reasons from the running as well as bad ones. He doesn’t advocate using chance indiscriminately. ‘But, sometimes,’ he argues, ‘the danger of bad reasons is bigger than the loss of the possibility of good reasons.’
For an example, let’s return to the Kantu’. Besides certain basic characteristics, when it comes to selecting a swidden site in the forest, there are no good reasons by which to choose a site. You just don’t know what the weather and pests will look like. As a result, any reasons that a Kantu’ farmer uses will either be neutral, or actively harmful. The sanitising effect of augury cleans out those bad reasons. The Kantu’ also establish fields in swampland, where the characteristics of a good site are much more predictable – where, in other words, good reasons are abundant. In the swamps, as it happens, the Kantu’ don’t use augury to make their pick.
Thinking about choice and chance in this way has applications outside rural Borneo, too. In particular, it can call into question some of the basic mechanisms of our rationalist-meritocratic-democratic system – which is why, as you might imagine, a political theorist such as Stone is so interested in randomness in the first place.
Around the same time that Michael Dove was pondering his riddle in a Kantu’ longhouse, activists and political scientists were beginning to revive the idea of filling certain political positions by lottery, a process known as sortition.
The practice has a long history. Most public officials in democratic Athens were chosen by lottery, including the nine archons who were chosen by sortition from a significant segment of the population. The nobles of Renaissance Venice used to select their head of state, the doge, through a complicated, partially randomised process. Jean-Jacques Rousseau, in The Social Contract (1762), argued that lotteries would be the norm in an ideal democracy, giving every citizen an equal chance of participating in every part of the government (Rousseau added that such ideal democracies did not exist). Sortition survives today in the process of jury selection, and it crops up from time to time in unexpected places. Ontario and British Columbia, for example, have used randomly selected panels of Canadian citizens to propose election regulations.
Advocates of sortition suggest applying that principle more broadly, to congresses and parliaments, in order to create a legislature that closely reflects the actual composition of a state’s citizenship. They are not (just to be clear) advocating that legislators randomly choose policies. Few, moreover, would suggest that non-representative positions such as the US presidency be appointed by a lottery of all citizens. The idea is not to banish reason from politics altogether. But plenty of bad reasons can influence the election process – through bribery, intimidation, and fraud; through vote-purchasing; through discrimination and prejudices of all kinds. The question is whether these bad reasons outweigh the benefits of a system in which voters pick their favourite candidates.
By way of illustration: a handful of powerful families and influential cliques dominated Renaissance Venice. The use of sortition in selection of the doge, writes the historian Robert Finlay in Politics in Renaissance Venice (1980), was a means of ‘limiting the ability of any group to impose its will without an overwhelming majority or substantial good luck’. Americans who worry about unbridled campaign-spending by a wealthy few might relate to this idea.
Or consider this. In theory, liberal democracies want legislatures that accurately reflect their citizenship. And, presumably, the qualities of a good legislator (intelligence, integrity, experience) aren’t limited to wealthy, straight, white men. The relatively homogeneous composition of our legislatures suggests that less-than-ideal reasons are playing a substantial role in the electoral process. Typically, we just look at this process and wonder how to eliminate that bias. Advocates of sortition see conditions ripe for randomness.
Once all good reasons are eliminated, the most efficient, most fair and most honest option might be chance
It’s not only politics where the threat of bad reasons, or a lack of any good reasons, makes the luck of the draw seem attractive. Take college admissions. When Columbia University accepts just 2,291 of its roughly 33,000 applicants, as it did this year, it’s hard to imagine that the process was based strictly on good reasons. ‘College admissions are already random; let’s just admit it and begin developing a more effective system,’ wrote the education policy analyst Chad Aldeman on the US daily news site Inside Higher Ed back in 2009. He went on to describe the notion of collegiate meritocracy as ‘a pretension’ and remarked: ‘A lottery might be the answer.’
The Swarthmore College professor Barry Schwartz, writing in The Atlantic in 2012, came to a similar conclusion. He proposed that, once schools have narrowed down their applicant pools to a well-qualified subset, they could just draw names. Some schools in the Netherlands already use a similar system. ‘A lottery like this won’t correct the injustice that is inherent in a pyramidal system in which not everyone can rise to the top,’ wrote Schwartz. ‘But it will reveal the injustice by highlighting the role of contingency and luck.’ Once certain standards are met, no really good reasons remain to discriminate between applicant No 2,291 (who gets into Columbia) and applicant No 2,292 (who does not). And once all good reasons are eliminated, the most efficient, most fair and most honest option might be chance.
But perhaps not the most popular one. When randomness is added to a supposedly meritocratic system, it can inspire quite a backlash. In 2004, the International Skating Union (ISU) introduced a new judging system for figure-skating competitions. Under this system – which has since been tweaked – 12 judges evaluated each skater, but only nine of those votes, selected at random, actually counted towards the final tally (the ancient Athenians judged drama competitions in a similar way). Figure skating is a notoriously corrupt sport, with judges sometimes forming blocs that support each other’s favoured skaters. In theory, a randomised process makes it harder to form such alliances. A tit-for-tat arrangement, after all, doesn’t work as well if it’s unclear whether your partners will be able to reciprocate.
But the new ISU rules did more than simply remove a temptation to collude. As statisticians pointed out, random selection will change the outcome of some events. Backing their claims with competition data, they showed how other sets of randomly selected votes would have yielded different results, actually changing the line-up of the medal podium in at least one major competition. Even once all the skaters had performed, ultimate victory depended on the luck of the draw.
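The statisticians’ point is easy to reproduce: re-drawing which nine of the twelve votes count shifts each skater’s final mark, and with closely matched fields that shift can decide the podium. A minimal sketch, with invented marks rather than real competition data:

```python
import random
import statistics

def counted_score(scores, counted=9, seed=None):
    """Mean of a random subset of judges' marks, mimicking the 2004
    ISU system in which only 9 of the 12 votes were counted."""
    rng = random.Random(seed)
    return statistics.mean(rng.sample(scores, counted))

# Twelve hypothetical judges' marks for two closely matched skaters.
skater_a = [5.8, 5.9, 5.7, 5.8, 5.9, 5.8, 5.7, 5.9, 5.8, 5.6, 5.9, 5.8]
skater_b = [5.9, 5.7, 5.8, 5.8, 5.8, 5.9, 5.6, 5.9, 5.8, 5.7, 5.9, 5.8]

# Score the same performances repeatedly: which 3 votes get dropped
# changes each skater's mark from draw to draw, even though nobody
# has skated again.
draws = [(counted_score(skater_a, seed=s),
          counted_score(skater_b, seed=s + 1000))
         for s in range(200)]
```

Both skaters here have an identical 12-judge average, so any gap between them in a given draw is pure sampling noise — which is exactly the variability the competition data revealed.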
There are two ways to look at this kind of situation. The first way – the path of outrage – condemns a system that seems fundamentally unfair. A second approach would be to recognise that the judging process is already subjective and always will be. Had a different panel of 12 judges been chosen for the competition, the result would have varied, too. The ISU system simply makes that subjectivity more apparent, even as it reduces the likelihood that certain obviously bad influences, such as corruption, will affect the final result.
Still, most commentators opted for righteous outrage. That isn’t surprising. The ISU system conflicts with two common modern assumptions: that it is always desirable (and usually possible) to eliminate uncertainty and chance from a situation; and that achievement is perfectly reflective of effort and talent. Sortition, college admission lotteries, and randomised judging run against the grain of both of these premises. They embrace uncertainty as a useful part of their processes, and they fail to guarantee that the better citizen or student or skater, no matter how much she drives herself to success, will be declared the winner.
Let me suggest that, in the fraught and unpredictable world in which we live, both of those ideals – total certainty and perfect reward – are delusional. That’s not to say that we shouldn’t try to increase knowledge and reward success. It’s just that, until we reach that utopia, we might want to come to terms with the reality of our situation, which is that our lives are dominated by uncertainty, biases, subjective judgments and the vagaries of chance.
In the novel The Man in the High Castle (1962), the American sci-fi maestro Philip K Dick imagines an alternative history in which Germany and Japan win the Second World War. Most of the novel’s action takes place in Japanese-occupied San Francisco, where characters, both Japanese and American, regularly use the I Ching to guide difficult decisions in their business lives and personal affairs.
Something, somewhere, is always playing dice
As an American with no family history of divination, I’ll admit to being enchanted by Dick’s vision of a sci-fi world where people yield some of their decision-making power to the movements of dried yarrow stems. There’s something liberating, maybe, in being able to acknowledge that the reasons we have are often inadequate, or downright poor. Without needing to impose any supernatural system, it’s not hard to picture a society in which chance plays a more explicit, more accepted role in the ways in which we distribute goods, determine admissions to colleges, give out jobs to equally matched applicants, pick our elected leaders, and make personal decisions in our own lives.
Such a society is not a rationalist’s nightmare. Instead, in an uncertain world where bad reasons do determine so much of what we decide, it’s a way to become more aware of what factors shape the choices we make. As Peter Stone told me, paraphrasing Immanuel Kant, ‘the first task of reason is to recognise its own limitations’. Nor is such a society more riddled with chanciness than our own. Something, somewhere, is always playing dice. The roles of coloniser and colonised, wealthy and poor, powerful and weak, victor and vanquished, are rarely as predestined as we imagine them to be.
Dick seems to have understood this. Certainly, he embraced chance in a way that few other novelists ever have. Years after he wrote The Man in the High Castle, Dick explained to an interviewer that, quite apart from planning and the novelist’s foresight, he had settled key details of the book’s plot by flipping coins and consulting the I Ching.
Peter Doyle claims there was a “fundamental illegitimacy” in Christine Lagarde’s appointment
A top economist at the International Monetary Fund has poured scorn on its “tainted” leadership and said he is “ashamed” to have worked there.
Peter Doyle said in a letter to the IMF executive board that he wanted to explain his resignation after 20 years.
He writes of “incompetence”, “failings” and “disastrous” appointments for the IMF’s managing director, stretching back 10 years.
No one from the Washington-based IMF was immediately available for comment.
Mr Doyle, former adviser to the IMF’s European Department, which is running the bailout programs for Greece, Portugal and Ireland, said the Fund’s delay in warning about the urgency of the global financial crisis was a failure of the “first order”.
In the letter, dated 18 June and obtained by the US broadcaster CNN, Mr Doyle said the failings of IMF surveillance of the financial crisis “are, if anything, becoming more deeply entrenched”.
He writes: “This fact is most clear in regard to appointments for managing director which, over the past decade, have all-too-evidently been disastrous.
“Even the current incumbent [Christine Lagarde] is tainted, as neither her gender, integrity, or elan can make up for the fundamental illegitimacy of the selection process.”
Mr Doyle is thought to be echoing here widespread criticism that the head of the IMF is always a European, while the World Bank chief is always a US appointee.
Mr Doyle concludes his letter: “There are good salty people here. But this one is moving on. You might want to take care not to lose the others.”
The IMF could not be reached immediately by the BBC. However, CNN reported that a Fund spokesman told it that there was nothing to substantiate Mr Doyle’s claims and that the IMF had held its own investigations into surveillance of the financial crisis.
Andrew Walker, BBC World Service economics correspondent
Peter Doyle’s letter is short but the criticism excoriating. Perhaps the bigger of the two main charges is that the IMF failed to warn enough about the problems that led to the global financial crisis.
The IMF has had investigations which have, up to a point, made similar criticisms, but not in such inflammatory terms. The IMF did issue some warnings, but the allegation that they were not sustained or timely enough and were actively suppressed raises some very big questions about the IMF’s role.
Then there is the description of the managing director as tainted. It’s not personal. It’s a familiar attack on a process which always selects a European. It’s still striking, though, to hear it from someone so recently on the inside.