Category archive: science communication

>On Climate Models, the Case For Living with Uncertainty

>
Yale Environment 360
05 Oct 2010: Analysis

As climate science advances, predictions about the extent of future warming and its effects are likely to become less — not more — precise. That may make it more difficult to convince the public of the reality of climate change, but it hardly diminishes the urgency of taking action.

by Fred Pearce

I think I can predict right now the headlines that will follow publication of the next report from the Intergovernmental Panel on Climate Change (IPCC), due in 2013. “Climate scientists back off predicting rate of warming: ‘The more we know the less we can be sure of,’ says UN panel.”

That is almost bound to be the drift if two-time IPCC lead author Kevin Trenberth and others are right about what is happening to the new generation of climate models. And with public trust in climate science on the slide after the various scandals of the past year over e-mails and a mistaken forecast of Himalayan ice loss, it hardly seems likely scientists will be treated kindly.

It may not matter much who is in charge at the IPCC by then: Whether or not current chairman Rajendra Pachauri keeps his job, the reception will be rough. And if climate negotiators have still failed to do a deal to replace the Kyoto Protocol, which lapses at the end of 2012, the fallout will not be pretty, either diplomatically or climatically.

Clearly, concerns about how climate scientists handle complex issues of scientific uncertainty are set to escalate. They were highlighted in a report about IPCC procedures published in late August in response to growing criticism about IPCC errors. The report highlighted distortions and exaggerations in IPCC reports, many of which involved not correctly representing uncertainty about specific predictions.

“The latest climate modeling runs are trying to deal with a range of factors not dealt with in the past.”

But efforts to rectify the problems in the next IPCC climate-science assessment (AR5) are likely to further shake public confidence in the reliability of IPCC climate forecasts.

Last January, Trenberth, head of climate analysis at the National Center for Atmospheric Research in Boulder, Colo., published a little-noticed commentary in Nature online. Headlined “More Knowledge, Less Certainty,” it warned that “the uncertainty in AR5’s predictions and projections will be much greater than in previous IPCC reports.” He added that “this could present a major problem for public understanding of climate change.” He can say that again.

This plays out most obviously in the critical estimate of how much warming is likely between 1990, the baseline year for most IPCC work, and 2100. The current AR4 report says it will be between 1.8 and 4.0 degrees Celsius (3 to 7 degrees F). But the betting is now that the range offered next time will be wider, especially at the top end.

The public has a simple view about scientific uncertainty. It can accept that science doesn’t have all the answers, and that scientists try to encapsulate those uncertainties with devices like error bars and estimates of statistical significance. What even the wisest heads will have trouble with, though, is the notion that greater understanding results in wider error bars than before.

Trenberth explained in his Nature commentary why a widening is all but certain. “While our knowledge of certain factors [responsible for climate change] does increase,” he wrote, “so does our understanding of factors we previously did not account for or even recognize.” The trouble is this sounds dangerously like what Donald Rumsfeld, in the midst of the chaos of the Iraq War, famously called “unknown unknowns.” I would guess that the IPCC will have even less luck than he did in explaining what it means by this.
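The statistical point can be made concrete with a minimal Monte Carlo sketch. All distributions and numbers below are invented purely for illustration (they come from no climate model): an ensemble that begins to sample a previously ignored source of variance spreads out, even though each individual factor is now better constrained.

```python
import random

random.seed(42)
N = 100_000

# "Old" ensemble: warming driven by one well-studied factor whose
# strength is uncertain (hypothetical numbers, for illustration only).
old_runs = [random.gauss(3.0, 0.5) for _ in range(N)]

# "New" ensemble: the same factor, now better constrained (smaller
# sigma), plus a newly recognized feedback (say, a cloud or
# carbon-cycle response) that carries its own, larger uncertainty.
new_runs = [random.gauss(3.0, 0.4) + random.gauss(0.0, 0.8) for _ in range(N)]

def spread(runs, lo=0.05, hi=0.95):
    """Return the (5th, 95th) percentile range of an ensemble."""
    s = sorted(runs)
    return s[int(lo * len(s))], s[int(hi * len(s))]

print("old 5-95%% range: %.1f to %.1f C" % spread(old_runs))
print("new 5-95%% range: %.1f to %.1f C" % spread(new_runs))
# The "new" ensemble reflects more knowledge, yet its range is wider:
# variance that was previously unmodeled counted as zero; once it is
# modeled, it counts in full.
```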

The latest climate modeling runs are trying to come to grips with a range of factors ignored or only sketchily dealt with in the past. The most troubling is the role of clouds. Clouds have always been recognized as a ticking timebomb in climate models, because nobody can work out whether warming will change them in a way that amplifies or moderates warming — still less how much. And their influence could be very large. “Clouds remain one of the largest uncertainties in the climate system’s response to temperature changes,” says Bruce Wielicki, a scientist at NASA’s Langley Research Center who is investigating the impact of clouds on the Earth’s energy budget.

An added problem in understanding clouds is the role of aerosols from industrial smogs, which dramatically influence the radiation properties of clouds. “Aerosols are a mess,” says Thomas Charlock, a senior scientist at the Langley Research Center and co-investigator in a NASA project known as Clouds and the Earth’s Radiant Energy System (CERES). “We don’t know how much is out there. We just can’t estimate their influence with calculations alone.”

“Despite much handwringing, the IPCC has never worked out how to make sense of uncertainty.”

Trenberth noted in Nature, “Because different groups are using relatively new techniques for incorporating aerosol effects into the models, the spread of results will probably be much larger than before.”

A second problem for forecasting is the potential for warming to either enhance or destabilize existing natural sinks of carbon dioxide and methane in soils, forests, permafrost, and beneath the ocean. Again these could slow warming through negative feedbacks or — more likely, according to recent assessments — speed up warming, perhaps rather suddenly as the planetary system crosses critical thresholds.

The next models will be working hard to take these factors into better account. Whether they will go as far as some preliminary runs published in 2005, which suggested potential warming of 10 degrees C (18 degrees F) or more, is not clear. Of course, uncertainty is to be expected, given the range of potential feedbacks that have to be taken into account. But it is going to be hard to explain why, when you put more and better information into climate models, they do not home in on a more precise answer.

Yet it will be more honest, says Leonard Smith, a mathematician and statistician at the University of Oxford, England, who warns about the “naive realism” of past climate modeling. In the past, he says, models have been “over-interpreted and misinterpreted. We need to drop the pretense that they are nearly perfect. They are getting better. But as we change our predictions, how do we maintain the credibility of the science?”

The only logical conclusion for a confused and increasingly wary public may be that if the error bars were wrong before, they cannot be trusted now. If they do not in some way encapsulate the “unknowns,” what purpose do they have?

Despite much handwringing, the IPCC has never worked out how to make sense of uncertainty. Take the progress of those error bars in assessing warming between 1990 and 2100.

The panel’s first assessment, published back in 1990, predicted a warming of 3 degrees C by 2100, with no error bars. The second assessment, in 1995, suggested a warming of between 1 and 3.5 degrees C. The third, in 2001, widened the bars to project a warming of 1.4 to 5.8 degrees C. The fourth assessment in 2007 contracted them again, from 1.8 to 4.0 degrees C. I don’t think the public will be so understanding if they are widened again, but that now seems likely.

Trenberth is nobody’s idea of someone anxious to rock the IPCC boat. He is an IPCC insider, having been lead author on key chapters in both 2001 and 2007, and recently appointed as a review editor for AR5. Back in 2005 he made waves by directly linking Hurricane Katrina to global warming. But in the past couple of years he has taken a growing interest in highlighting uncertainties in the climate science.

Late last year, bloggers investigating the “climategate” emails highlighted a message he sent to colleagues in which he said it was a “travesty” that scientists could not explain cool years like 2008. His point, made earlier in the journal Current Opinion in Environmental Sustainability, was that “it is not a sufficient explanation to say that a cool year is due to natural variability.” Such explanations, he said, “do not provide the physical mechanisms involved.” He wanted scientists to do better.

“Trenberth questioned whether the IPCC wouldn’t be better off getting out of the prediction business.”

In his Nature commentary, Trenberth wondered aloud whether the IPCC wouldn’t be better off getting out of the prediction business. “Performing cutting edge science in public could easily lead to misinterpretation,” he wrote. But the lesson of climategate is that efforts to keep such discussion away from the public have a habit of backfiring spectacularly.

All scientific assessments have to grapple with how to present uncertainties. Inevitably they make compromises between the desire to convey complexity and the need to impart clear and understandable messages to a wider public. But the IPCC is caught on a particular dilemma because its founding purpose, in the late 1980s, was to reach consensus on climate science and report back to the world in a form that would allow momentous decisions to be taken. So the IPCC has always been under pressure to try to find consensus even where none exists. And critics argue that that has sometimes compromised its assessments of uncertainty.

The last assessment was replete with terms like “extremely likely” and “high confidence.” Critics charged that they often lacked credibility. And last August’s blue-chip review of the IPCC’s performance, by the InterAcademy Council, seemed to side with the critics.

The council’s chairman, Harold Shapiro of Princeton, said existing IPCC guidelines on presenting uncertainty “have not been consistently followed.” In particular, its analysis of the likely impacts of climate change “contains many statements that were assigned high confidence but for which there is little evidence.” The predictions were not plucked from the air. But the charge against the IPCC is that its authors did not always correctly portray the uncertainty surrounding the predictions or present alternative scenarios.

“We need to get used to greater uncertainty in imagining exactly how climate change will play out.”

The most notorious failure was the claim that the Himalayan glaciers could all have melted by 2035. This was an egregious error resulting from cut-and-pasting a non-peer-reviewed claim from a report by a non-governmental organization. So was a claim that 55 percent of the Netherlands lies below sea level. But other errors were failures to articulate uncertainties. The review highlighted a claim that even a mild loss of rainfall over the Amazon could destroy 40 percent of the rainforest, though only one modeling study has predicted this.

Another headline claim in the report, in a chapter on Africa, was that “projected reductions in [crop] yield in some countries could be as much as 50 percent by 2020.” The only source was an 11-page paper by a Moroccan named Ali Agoumi that covered only three of Africa’s 53 countries (Morocco, Tunisia, and Algeria) and had not gone through peer review. It simply asserted that “studies on the future of vital agriculture in the region have shown… deficient yields from rain-based agriculture of up to 50 percent during the 2000-2020 period.” No studies were named. And even Agoumi did not claim the changes were necessarily caused by climate change. In fact, harvests in North Africa already differ by 50 percent or more from one year to the next, depending on rainfall. In other words, Agoumi’s paper said nothing at all about how climate change might or might not change farm yields across Africa. None of this was conveyed by the report.

In general, the InterAcademy Council’s report noted a tendency to “emphasise the negative impacts of climate change,” many of which were “not supported sufficiently in the literature, not put into perspective, or not expressed clearly.” Efforts to eliminate these failings will necessarily widen the error bars on a range of predictions in the next assessment.

We are all — authors and readers of IPCC reports alike — going to have to get used to greater caution in IPCC reports and greater uncertainty in imagining exactly how climate change will play out. This is probably healthy. It is certainly more honest. But it in no way undermines the case that we are already observing ample evidence that the world is on the threshold of profound and potentially catastrophic warming. And it in no way undermines the urgent need to do something to halt the forces behind the warming.

Some argue that scientific uncertainty should make us refrain from action to slow climate change. The more rational response, given the scale of what we could face, is the precise opposite.



>In Science We Trust: Poll Results on How You Feel about Science (Scientific American)

>
Our Web survey of readers suggests that the scientifically literate public still trusts its experts—with some important caveats

By The Editors, September 22, 2010

Scientists have had a rough year. The leaked “Climategate” e-mails painted researchers as censorious. The mild H1N1 flu outbreak led to charges that health officials exaggerated the danger to help Big Pharma sell more drugs. And Harvard University investigators found shocking holes in a star professor’s data. As policy decisions on climate, energy, health and technology loom large, it’s important to ask: How badly have recent events shaken people’s faith in science? Does the public still trust scientists?

To find out, Scientific American partnered with our sister publication, Nature, the international journal of science, to poll readers online. More than 21,000 people responded via the Web sites of Nature and of Scientific American and its international editions. As expected, it was a supportive and science-literate crowd—19 percent identified themselves as Ph.D.s. But attitudes differed widely depending on particular issues—climate, evolution, technology—and on whether respondents live in the U.S., Europe or Asia.

How Much Do People Trust What Scientists Say?

We asked respondents to rank how much they trusted various groups of people on a scale of 1 (strongly distrust) to 5 (strongly trust). Scientists came out on top by a healthy margin. When we asked how much people trust what scientists say on a topic-by-topic basis, only three topics (including, surprisingly, evolution) garnered a stronger vote of confidence than scientists did as a whole.

When Science Meets Politics: A Tale of Three Nations

Should scientists get involved in politics? Readers differ widely depending on where they are from. Germany, whose top politician has a doctorate in quantum chemistry, seems to approve of scientists playing a big role in politics. Not so in China. Even though most leaders are engineers, Chinese respondents were much less keen than their German or U.S. counterparts to see scientists in political life.

Build Labs, Not Guns

More than 70 percent of respondents agreed that in tough economic times, science funding should be spared. When asked what should be cut instead, defense spending was the overwhelming pick.

Techno Fears

Technology can lead to unintended consequences. We asked readers what technological efforts need to be reined in—or at least closely monitored. Surprisingly, more respondents were concerned about nuclear power than artificial life, stem cells or genetically modified crops.

U.S. vs. Europe

Europeans and Americans differ sharply in their attitudes toward technology. Higher proportions of respondents from Europe worry about nuclear power and genetically modified crops than those from the U.S. (In this grouping, Europe includes Belgium, France, Germany, Italy and Spain, but not Britain, where opinion is more closely aligned with that of the U.S.) In both Europe and the U.S., nanotechnology seems to be a great unknown. Europeans also expressed a mistrust of what scientists have to say about flu pandemics.


Suspicion Over the Flu

On June 11, 2009, the Geneva-based World Health Organization declared the H1N1 flu outbreak a pandemic, confirming what virologists already knew—that the flu virus had spread throughout the world. Governments called up billions of dollars’ worth of vaccines and antiviral drugs, a medical arsenal that stood ready to combat a virus that, thankfully, turned out to be mild.

A year later two European studies charged that the WHO’s decision-making process was tainted by conflicts of interest. In 2004 a WHO committee recommended that governments stockpile antiviral drugs in times of pandemic; the scientists on that committee were later found to have ties to drug companies. The WHO has refused to identify the scientists who sat on last year’s committee that recommended the pandemic declaration, leading to suspicions that they might have ties to industry as well.

The controversy got a lot of press in Europe—the Daily Mail, a British tabloid, declared: “The pandemic that never was: Drug firms ‘encouraged world health body to exaggerate swine flu threat’”; the controversy in the U.S. garnered little mention.

The brouhaha seems to have influenced opinion markedly in Europe. Nearly 70 percent of U.S. respondents in our survey trusted what scientists say about flu pandemics; in Europe, only 31 percent felt the same way. The figures represented the largest split between the U.S. and Europe on any issue in the poll.

Climate Denial on the Decline

Numerous polls show a decline in the percentage of Americans who believe humans affect climate, but our survey suggests the nation is not among the worst deniers. (Those are France, Japan and Australia.) Attitudes, however, may be shifting the other way. Among those respondents who have changed their opinions in the past year, three times more said they are more certain than less certain that humans are changing the climate.

>“In Science We Trust” (Pesquisa Fapesp)

>
A Nature and Scientific American survey conducted in 18 countries, Brazil included, indicates that people trust science and scientists

Online edition – 22/09/2010

An internet poll of more than 21,000 readers of Nature and of the American and international editions of Scientific American, in 18 countries including Brazil, indicates that the credibility of science and scientists is high. Conducted without any scientific sampling methodology, as its own authors stress, the survey released today (22/09) shows that readers trust the word of scientists more than that of any other group.

On a trust scale running from 1 to 5, scientists received an average score of 3.98, practically 4. In second place, tied at 3.09, came friends and family and non-governmental organizations. Next, in descending order of credibility, came citizens’ advocacy groups (2.69), journalists (2.57), companies (1.78), elected politicians (1.76) and religious authorities (1.55). To underscore scientists’ good marks, Scientific American used the headline “In Science We Trust” for its story on the poll.

Readers’ trust in what scientists say varies significantly with the topic. Evolution was the subject on which researchers’ word was rated most trustworthy, scoring 4.3, again on the 1-to-5 scale. Scientists’ opinions also earned high scores on renewable energy (4.08), the origin of the universe (4.0) and stem cells (3.97). The subjects on which readers trust scientists least were flu pandemics (3.19), drugs for depression (3.21) and pesticides (3.33).

The survey also showed that 89% of readers say that investing in basic science may not produce immediate economic effects, but it is a way of laying the foundations for future growth. For 75% of respondents, governments should cut spending on weapons and national defense in order to preserve science budgets.

Readers’ greatest technological fear remains nuclear power plants. Almost half of respondents believe that cleaner forms of energy should replace nuclear power. The second most worrying issue is the possible unknown risks of nanotechnology, cited by 26% of respondents.

Brazil and regional differences – Nature and Scientific American know that the online survey does not reflect the views of the whole population of the countries where it was conducted. “Many of the results match the opinion of a group of people well informed about science,” Nature wrote. After all, 19% of the participants said they hold doctorates. It was an elite, then, that answered the survey. Moreover, each country’s sample of readers is disproportionate to its population. Brazil, for example, contributed 422 participants, about 10% of the number of Americans.

Even so, some regional differences emerged. Europeans are the most fearful of the risks associated with nuclear energy and of possible problems caused by growing genetically modified organisms, while Americans are the least troubled by these questions. The Chinese are the strongest advocates of the idea that scientists should stay out of politics (Brazilians were the second nationality most supportive of that position).

Reflecting the importance the country has recently gained on the international stage, Nature and Scientific American released some data specifically on Brazilians’ answers. In general, Brazilians occupied the middle ground in comparisons among countries: neither the greatest believers in science and scientists nor the greatest skeptics.

Still, the share of Brazilians (23.5%) who harbor doubts about the explanations offered by evolutionary theory based on natural selection drew attention. In China, disbelief reaches almost half of respondents, and in Japan 35%. But in Germany and the United Kingdom, skepticism on this subject does not reach 10%, and in the United States it stands at around 13%. As for acknowledging that human activities contribute to changing the global climate, Brazilians were as assertive as Americans and Britons: about 80% agreed with that statement.

>The Climate Majority (N.Y. Times)

>
Op-Ed Contributor

By JON A. KROSNICK
Published: June 8, 2010

ON Thursday, the Senate will vote on a resolution proposed by Lisa Murkowski, Republican of Alaska, that would scuttle the Environmental Protection Agency’s plans to limit emissions of greenhouse gases by American businesses.

Passing the resolution might seem to be exactly what Americans want. After all, national surveys released during the last eight months have been interpreted as showing that fewer and fewer Americans believe that climate change is real, human-caused and threatening to people.

But a closer look at these polls and a new survey by my Political Psychology Research Group show just the opposite: huge majorities of Americans still believe the earth has been gradually warming as the result of human activity and want the government to institute regulations to stop it.

In our survey, which was financed by a grant to Stanford from the National Science Foundation, 1,000 randomly selected American adults were interviewed by phone between June 1 and Monday. When respondents were asked if they thought that the earth’s temperature probably had been heating up over the last 100 years, 74 percent answered affirmatively. And 75 percent of respondents said that human behavior was substantially responsible for any warming that has occurred.

For many issues, any such consensus about the existence of a problem quickly falls apart when the conversation turns to carrying out specific solutions that will be costly. But not so here.

Fully 86 percent of our respondents said they wanted the federal government to limit the amount of air pollution that businesses emit, and 76 percent favored government limiting business’s emissions of greenhouse gases in particular. Not a majority of 55 or 60 percent — but 76 percent.

Large majorities opposed taxes on electricity (78 percent) and gasoline (72 percent) to reduce consumption. But 84 percent favored the federal government offering tax breaks to encourage utilities to make more electricity from water, wind and solar power.

And huge majorities favored government requiring, or offering tax breaks to encourage, each of the following: manufacturing cars that use less gasoline (81 percent); manufacturing appliances that use less electricity (80 percent); and building homes and office buildings that require less energy to heat and cool (80 percent).

Thus, there is plenty of agreement about what people do and do not want government to do.

Our poll also indicated that some of the principal arguments against remedial efforts have been failing to take hold. Only 18 percent of respondents said they thought that policies to reduce global warming would increase unemployment and only 20 percent said they thought such initiatives would hurt the nation’s economy. Furthermore, just 14 percent said the United States should not take action to combat global warming unless other major industrial countries like China and India do so as well.

Our findings might seem implausible in light of recent polls that purport to show that Americans are increasingly skeptical about the very existence of climate change. But in fact, those polls did not produce conflicting evidence at all.

Consider, for example, the most publicized question from a 2009 Pew Research Center poll: “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?” This question measured perceptions of scientific evidence that the respondent has read or heard about, not the respondents’ personal opinions about whether the earth has been warming. Someone who has had no exposure to scientific evidence or who perceives the evidence to be equivocal may nonetheless be convinced that the earth has been heating up by, say, the early blossoming of plants in his garden.

Or consider a widely publicized Gallup question: “Thinking about what is said in the news, in your view, is the seriousness of global warming generally exaggerated, generally correct or is it generally underestimated?” This question asked about respondents’ perceptions of the news, not the respondents’ perception of warming. A person who believes climate change has been happening might also feel that news media coverage of it has been exaggerated.

Questions in other polls that sought to tap respondents’ personal beliefs about the existence and causes of warming violated two of the cardinal rules of good survey question design: ask about only one thing at a time, and choose language that makes it easy for respondents to understand and answer each question.

Imagine being asked this, from a poll by CNN: “Which of the following statements comes closest to your view of global warming: Global warming is a proven fact and is mostly caused by emissions from cars and industrial facilities like power plants and factories; global warming is a proven fact and is mostly caused by natural changes that have nothing to do with emissions from cars and industrial facilities; or, global warming is a theory that has not yet been proven.”

Notice that the question didn’t even offer the opportunity for respondents to say they believe global warming is definitely not happening — not the sort of question that will provide the most valid measurements.

When surveys other than ours have asked simple and direct questions, they have produced results similar to ours. For example, in November, an ABC News/Washington Post survey found that 72 percent of respondents said the earth has been heating up, and a December poll by Ipsos/McClatchy found this proportion to be 70 percent.

Our surveys did reveal a small recent decline in the proportion of people who believe global warming has been happening, from 84 percent in 2007 to 80 percent in 2008 to 74 percent today. Statistical analysis of our data revealed that this decline is attributable to perceptions of recent weather changes by the minority of Americans who have been skeptical about climate scientists.

In terms of average earth temperature, 2008 was the coldest year since 2000. Scientists say that such year-to-year fluctuations are uninformative, and people who trust scientists therefore ignore this information when forming opinions about global warming’s existence. Citizens who do not trust climate scientists, however, base their conclusions on their personal observations of nature. These low-trust individuals were especially aware of the recent decline in average world temperatures; they were the ones in our survey whose doubts about global warming have increased since 2007.

This explanation is especially significant, because it suggests that the small recent decline in the proportion of people who believe in global warming is likely to be temporary. If the earth’s temperature begins to rise again, these individuals may reverse course and rejoin the large majority who still think warming is real.

Growing public skepticism has, in recent months, been attributed to news reports about e-mail messages hacked from the computer system at the University of East Anglia in Britain (characterized as showing climate scientists colluding to silence unconvinced colleagues) and by the discoveries of alleged flaws in reports by the Intergovernmental Panel on Climate Change.

Our new survey discredited this claim in multiple ways. First, we found no decline in Americans’ trust in environmental scientists: 71 percent of respondents said they trust these scientists a moderate amount, a lot or completely, a figure that was 68 percent in 2008 and 70 percent in 2009. Only 9 percent said they knew about the East Anglia e-mail messages and believed they indicated that climate scientists should not be trusted, and only 13 percent of respondents said so about the I.P.C.C. reports’ alleged flaws.

Interestingly, Americans are not alone in having their views portrayed inaccurately. A February BBC News survey asked Britons, “From what you know and have heard, do you think that the earth’s climate is changing and global warming is taking place?” Seventy-five percent of respondents answered affirmatively, down a somewhat improbable eight percentage points from 83 percent in November. A BBC headline blared, “Climate Skepticism on the Rise,” when it should have proclaimed that a huge majority of Britons still share common ground with one another and with Americans on this issue.

GLOBAL warming has attracted what political scientists dub an “issue public”: millions of Americans who are passionate about this subject and put pressure on government to follow their wishes. For over a decade, this group has been of typical issue-public size, about 15 percent of American adults.

Although issue publics usually divide about equally on opposing sides — think of abortion or immigration — 88 percent of the climate change issue public in our survey believed that global warming has been happening; 88 percent attributed responsibility for it to human action; 92 percent wanted the federal government to limit the amount of greenhouse gases that businesses can emit. Put simply, the people whose votes are most powerfully shaped by this issue are sending a nearly unanimous signal to their elected representatives.

All this makes global warming a singular issue in American politics. Even as we are told that Americans are about equally divided into red and blue, a huge majority shares a common vision of climate change. This creates a unique opportunity for elected representatives to satisfy a lot of voters.

When senators vote on emissions limits on Thursday, there is one other number they might want to keep in mind: 72 percent of Americans think that most business leaders do not want the federal government to take steps to stop global warming. A vote to eliminate greenhouse gas regulation is likely to be perceived by the nation as a vote for industry, and against the will of the people.

Jon A. Krosnick is a professor of communication, political science and psychology at Stanford.

>Uncertain politics, uncertain economy, uncertain climate

>
Is the succession of catastrophes coincidental or causal?

By Mario Soares*
IPS/Envolverde – 21/05/2010 – 10h05

Lisbon, May 2010 – Even Pangloss himself, the famous character of Voltaire’s Candide, for all his imperturbable optimism, would be hard pressed to face the contemporary world. Nature and humanity have given free rein to their respective demons, and no one can stop them. In one place after another the Earth reacts and strikes at us: cyclones, tsunamis, earthquakes, floods and, most recently, the volcanic eruption in the island nation of Iceland, which paralyzed the airports of northern and central Europe. A sad spectacle, never seen before.

These are normal natural phenomena, some will say, the less well informed among them. Yet for those who have lived more than eight decades, as I have, and have never seen or known anything like this string of catastrophes, it is prudent to raise the question: might not the unthinking, unforeseen hand of man, which assaults and mistreats the planet and compromises its natural balance, bear a good share of responsibility for these events?

The recent climate change summit in Copenhagen last December, which was supposed to condemn and confront global warming, ended in failure because of the suspect deal struck at the last minute by China and the United States. By coincidence (or perhaps not), these two great powers are the Earth’s biggest polluters. The truth is that they managed to paralyze the European group, to which they gave not the slightest importance, and several delegations from other continents that expected positive results from the world conference.

Perhaps more worrying is the appearance of certain scientists who take positions openly contrary to the thinking and warnings of the overwhelming majority of ecologists, asserting that global warming is caused neither by human activities nor by the abusive use of hydrocarbon fuels. They state and restate that it is a natural occurrence. This makes me think there are people capable of pursuing profit at any cost and of placing the defense of their immediate interests above every other consideration without it troubling their good consciences… if they have any.

I am convinced that at the next world conference on climate change scientific truth will prevail, and that the great powers will be obliged to respect rules aimed at radically containing global warming.

The risks hanging over the planet are not only the supposedly natural catastrophes that recur with unquestionable and worrying frequency. Global terrorism has continued to wreak havoc since 2001, and the nations that possess nuclear weapons are now numerous (too numerous, in my view). It is indispensable to set a limit to this. In that sense, the agreement that U.S. President Barack Obama managed to reach with Russia and China to reduce their respective atomic arsenals and to block proliferation by nations that do not yet possess them, as in the case of Iran, is a remarkable achievement with extremely positive political and geostrategic implications.

In a world as dangerous as the one we have been given to live in (one need only think of all the unresolved armed conflicts on every continent), the free sale of weapons must be drastically reduced and the Culture of Peace propagated, of which the former director-general of Unesco, Federico Mayor Zaragoza, is a tireless promoter. At the same time, every form of incitement to violence that the media, television in particular, constantly propagate (consciously or not) must be avoided and controlled as far as possible, in what it is no exaggeration to call an unacceptable escalation.

All the world’s governments that consider themselves states under the rule of law, and that must therefore respect and protect human rights, have the corresponding obligation to adopt policies and measures to spread the Culture of Peace and to repudiate, pedagogically and systematically, all the forms of violence that enter our homes every day, for the sake of our descendants and the future of humanity.

Truly, the threats we face in our era come from many sources: from an uncertain politics with no clear course, from an economy without rules waiting for better days (we do not know how many) to overcome the crisis, and from a succession of calamities. It is high time for the world’s citizens to open their eyes, react, and demand solutions. IPS/Envolverde

* Mário Soares is a former president and former prime minister of Portugal.

>The root of the climate email fiasco (The Guardian)

>
Learning forced into silos of humanities and science has created closed worlds of specialists who just don’t understand each other

George Monbiot
The Guardian, Tuesday 6 April 2010

The MPs were kind to Professor Phil Jones. During its hearings, the Commons science and technology committee didn’t even ask the man at the centre of the hacked climate emails crisis about the central charge he faces: that he urged other scientists to delete material subject to a freedom of information request. Last week the committee published its report, and blamed his university for the “culture of non-disclosure” over which Jones presided.

Perhaps the MPs were swayed by the disastrous performance of his boss at the hearings. Edward Acton, the vice-chancellor of the University of East Anglia, came across as flamboyant, slippery and insincere. Jones, on the other hand, seemed both deathly dull and painfully honest. How could this decent, nerdy man have messed up so badly?

None of it made sense: the intolerant dismissal of requests for information, the utter failure to engage when the hacked emails were made public, the refusal by other scientists to accept that anything was wrong. Then I read an article by the computer scientist Steve Easterbrook, and for the first time the light began to dawn.

Easterbrook, seeking to defend Jones and his colleagues, describes a closed culture in which the rest of the world is a tedious and incomprehensible distraction. “Scientists normally only interact with other scientists. We live rather sheltered lives … to a scientist, anyone stupid enough to try to get scientific data through repeated FoI requests quite clearly deserves our utter contempt. Jones was merely expressing (in private) a sentiment that most scientists would share – and extreme frustration with people who clearly don’t get it.”

When I read that, I was struck by the gulf between our worlds. To those of us who clamoured for freedom of information laws in Britain, FoI requests are almost sacred. The passing of these laws was a rare democratic victory; they’re among the few means we possess of ensuring that politicians and public servants are answerable to the public. What scientists might regard as trivial and annoying, journalists and democracy campaigners see as central and irreducible. We speak in different tongues and inhabit different worlds.

I know how it happens. Like most people with a science degree, I left university with a store of recondite knowledge that I could share with almost no one. Ill-equipped to understand any subject but my own, I felt cut off from the rest of the planet. The temptation to retreat into a safe place was almost irresistible. Only the extreme specialisation demanded by a PhD, which would have walled me in like an anchorite, dissuaded me.

I hated this isolation. I had a passionate interest in literature, history, foreign languages and the arts, but at the age of 15 I’d been forced, like all students, to decide whether to study science or humanities. From that point we divided into two cultures, and the process made idiots of us all. Perhaps eventually we’ll split into two species. Reproducing only with each other, scientists will soon become so genetically isolated that they’ll no longer be able to breed with other humans.

We all detest closed worlds: the Vatican and its dismissal of the paedophilia scandals as “idle chatter”; the Palace of Westminster, whose members couldn’t understand the public outrage about their expenses; the police forces that refuse to discipline errant officers. Most of us would endorse George Bernard Shaw’s opinion that all professions are conspiracies against the laity. Much of the public hostility to science arises from the perception that it’s owned by a race to which we don’t belong.

But science happens to be the closed world with one of the most effective forms of self-regulation: the peer review process. It is also intensely competitive, and the competition consists of seeking to knock each other down. The greatest scientific triumph is to falsify a dominant theory. It happens very rarely, as only those theories which have withstood constant battery still stand. If anyone succeeded in overturning the canon of climate science, they would soon become as celebrated as Newton or Einstein. There are no rewards for agreeing with your colleagues, tremendous incentives to prove them wrong. These are the last circumstances in which a genuine conspiracy could be hatched.

But it is no longer sufficient for scientists to speak only to each other. Painful and disorienting as it is, they must engage with that irritating distraction called the rest of the world. Everyone owes something to the laity, and science would die if it were not for the billions we spend on it. Scientists need make no intellectual concessions, but they have a duty to understand the context in which they operate. It is no longer acceptable for climate researchers to wall themselves off and leave the defence of their profession to other people.

There are signs that this is changing. The prominent climate change scientist Simon Lewis has just sent a long submission to the Press Complaints Commission about misrepresentation in the Sunday Times. The paper claimed that the Intergovernmental Panel on Climate Change’s contention that global warming could destroy up to 40% of the Amazon rainforest “was based on an unsubstantiated claim by green campaigners who had little scientific expertise”. It quoted Lewis to suggest he supported the story. The article and its claims were reproduced all over the world.

But the claims were wrong: there is solid scientific research showing damage on this scale is plausible in the Amazon. Lewis claims that the Sunday Times falsely represented his views. He left a comment on the website but it was deleted. He sent a letter to the paper but it wasn’t published. Only after he submitted his complaint to the PCC did the Sunday Times respond to him. The paper left a message on his answerphone, which he has made public: “It’s been recognised that the story was flawed.” After seven weeks of stonewalling him, the Sunday Times offered to run his letter. But it has neither taken down the flawed article nor published a correction.

Good luck to Lewis, but as the PCC’s treatment of the News of the World phone-hacking scandal suggests, he’s likely to find himself shut out of another closed world – journalism – in which self-regulation manifestly doesn’t work. Here’s a profession that looks like a conspiracy against the laity even from the inside.

The incomprehension with which science and humanities students regard each other is a tragedy of lost opportunities. Early specialisation might allow us to compete in the ever more specialised labour market, but it equips us for nothing else. As Professor Don Nutbeam, the vice-chancellor of Southampton University, complains: “Young people learn more and more about less and less.”

We are deprived by our stupid schooling system of most of the wonders of the world, of the skills and knowledge required to navigate it, above all of the ability to understand each other. Our narrow, antiquated education is forcing us apart like the characters in a Francis Bacon painting, each locked in our boxes, unable to communicate.

>Brazilians worry about global warming but change little

>
Latent change

By Ricardo Voltolini*, Revista Ideia Socioambiental
28/04/2010 – 11h04

A Datafolha poll released on April 21 reveals that slightly more than nine in 10 Brazilians believe in the phenomenon of global warming. Three quarters of respondents think human action bears the main responsibility for climate change.

The numbers differ sharply from those recorded in studies of Americans and Britons. In the U.S., half of citizens do not believe humans are responsible for global warming. In England, 25% do not. In those countries, more than here, the recent offensive by climate deniers, who have harshly attacked the research of the United Nations panel of climate scientists, has swelled the ranks of the skeptics.

Especially in the case of the United States, ideas that dispute or play down the human impact on climate change tend to be well received, whether because they offer a safe-conduct for continuing to emit greenhouse gases or because they ease the guilt over a lifestyle that is very convenient but considered wasteful for the planet. The country is, as is well known, the largest emitter of CO2. And last December its president, Barack Obama, helped unravel the climate agreement precisely by refusing to accept more ambitious emission-reduction targets. For the U.S., and also for China, its great competitor in the global market, cutting emissions means giving up growth, something that sends shivers down the spine of the average American and his political representatives in the Senate.

Other numbers in the Datafolha study deserve attention. According to the data, the share of Brazilians who consider themselves well informed about the subject jumped from 20% (in 2009) to 34%. That is good, of course. It may mark a first step. But feeling well informed does not mean being prepared to make the individual changes needed to reduce one’s impact on the planet.

On that point, simply to prompt reflection, I recall a survey conducted by Market Analysis in 2007 in 18 countries. That study, the first of its kind in Brazil, revealed that Brazilians were among the people in the world most worried about climate change. Yet 46% thought an individual can do very little in the face of so serious a problem.

Considering the variables of competence and capacity to change the picture, the study identified four groups. The largest (40%) consisted of people with a good level of information about global warming, aligned with the work of NGOs and critical of companies, but not necessarily doing anything to change their own routines. Only one in six members of this group, however, proved to be aware and mobilized.

The second group comprised the 38% of Brazilians who were well informed about the problem, willing to change their lifestyles, and receptive to the idea that economic growth can be reconciled with respect for the environment. They believed that, individually, they could give a clearer answer than society as a whole. The third group (12%) trusted society more than its own capacity to change the situation. And the fourth (10%) believed neither in the potential of the individual nor in that of society. Both of the latter were marked by an uninformed and uncritical stance.

If those figures still hold, and I honestly think they do, Brazil’s challenges are great. The most important is to mobilize individuals, making them see that small actions to shrink their ecological footprint, added to other conscious-consumption choices in daily life, can make a difference in the fight to cool the planet. As was said right after the failure of Copenhagen, global warming is too important a subject to wait for solutions to come only from heads of state more committed to their domestic politics than to the healthy future of the great house we all inhabit.

*Ricardo Voltolini is publisher of the magazine Idéia Socioambiental and director of the consultancy Idéia Sustentável: Estratégia e Inteligência em Sustentabilidade.

http://www.topblog.com.br/sustentabilidade

(Envolverde/Idéia Socioambiental)

>Climate change: "witch hunt" aimed at scientists in Virginia (U.S.)

>
An unwelcome ‘climate’ for scientists?

By Paul Guinnessy, Physics Today on May 11, 2010 6:34 PM

Virginia Attorney General Ken Cuccinelli, in a blatantly political move to help strengthen his support among the right wing for his bid to become the next governor, is causing uproar in the science community by investigating climate scientist and former University of Virginia professor Michael Mann.

Cuccinelli is accusing Mann of defrauding Virginia taxpayers by receiving research grants to study global temperatures. Mann, who is now based at the Pennsylvania State University, hasn’t worked in Virginia since 2005.

The subpoena, which currently isn’t attached to any lawsuit, requires the University of Virginia to provide Cuccinelli with thousands of documents and e-mails dating from 1999 to 2005 regarding Mann’s research. The accusation is tied to Mann and coworkers’ “hockey stick” graph, which displays annual global average temperatures by merging a wide variety of data sources and was included in a 2001 United Nations Intergovernmental Panel on Climate Change report. The graph was discussed in some of the private e-mails made public when the University of East Anglia’s Climatic Research Unit e-mail server was hacked.

Not answering the question

When Cuccinelli appeared on the Kojo Nnamdi Show on WAMU radio on Friday, he claimed the investigation was not into Mann’s academic work, but instead was “directed at the expenditure of dollars. Whether he does a good job, bad job or I don’t like the outcome—and I think everybody already knows his position on some of this is one that I question. But that is not what that’s about.”

However, the letter demanding materials gives a different impression. It asks, along with Mann’s correspondence with 39 other climate scientists, for “any and all computer algorithms, programs, source code, or the like created or edited by … Mann.”

This was emphasized when Cuccinelli spoke to the Washington Post, stating “in light of the Climategate e-mails, there does seem to at least be an argument to be made that a course was undertaken by some of the individuals involved, including potentially Michael Mann, where they were steering a course to reach a conclusion. Our act, frankly, just requires honesty.”

There hasn’t been an investigation by Virginia’s attorney general’s office into the funding of research grants of this nature before. Moreover, only one of the five grants under suspicion was funded by Virginia taxpayers through the university; the others were federal grants from the National Oceanic and Atmospheric Administration and the National Science Foundation.

No backbone?

The University of Virginia was originally going to succumb to Cuccinelli’s request. In a statement released to the press last Thursday the university said it was “required by law to comply.”

Shortly afterward, the University of Virginia Faculty Senate Executive Council issued its own statement, which ends:

We maintain that peer review by the scientific community is the appropriate means by which to identify error in the generation, presentation and interpretation of scientific data. The Attorney General’s use of his power to issue a CID under the provisions of Virginia’s FATA is an inappropriate way to engage with the process of scientific inquiry. His action and the potential threat of legal prosecution of scientific endeavor that has satisfied peer-review standards send a chilling message to scientists engaged in basic research involving Earth’s climate and indeed to scholars in any discipline. Such actions directly threaten academic freedom and, thus, our ability to generate the knowledge upon which informed public policy relies.

This was shortly followed by a joint letter to the university from the American Civil Liberties Union and the American Association of University Professors asking the University of Virginia to follow procedures to appeal the subpoena.

The letters seem to have had some effect: The Washington Post reported that the university is now “considering” its options before the Friday deadline to appeal is up.

State Senator Donald McEachin issued a statement saying he will submit a bill so that in the future the attorney general cannot issue a subpoena without also filing a lawsuit.

“This is not only ludicrous and frivolous, wasting more taxpayer dollars and trampling on academic freedom, but the Attorney General has deprived Mr. Mann of his constitutional rights,” said McEachin.

Part of a bigger trend

On Friday, Science published a letter, put together before Cuccinelli issued his subpoena, in which 255 members of the National Academy of Sciences decried “political assaults” against climate scientists and “McCarthy-like threats of criminal prosecution” and spelled out again the basic facts of what we know about the changing climate.

The letter was triggered by veiled threats from Senator James Inhofe, a well-known climate-change denier, to criminally investigate scientists over their research, and the political response to the CRU e-mails.

According to Peter Gleick, president of the Pacific Institute, a research center in Oakland, California—who spoke with New York Times reporter Sindya N. Bhanoo—before the NAS members gave the letter to Science, the group had first submitted it to the Wall Street Journal, the New York Times, and the Washington Post, all of whom declined to run it.

>Marcelo Leite: Murky waters (FSP)

>
“Prejudice, stridency, fallacies, inventions and statistics, in fact, turn the whole public debate into an Amazon basin of turbidity. This is not exclusive to the indigenous question. Take the Belo Monte hydroelectric dam. Or the explosive issue of the availability of land for agribusiness”

Marcelo Leite
Folha de S.Paulo, 09/05/2010 – reprinted in Jornal da Ciência (JC e-mail 4006)

By one of those symptomatic coincidences the times produce, two sentences that open the cover story of the current issue of the Mais! section, “In Brazil everyone is Indian, except those who are not” and “Only those who stand their ground are Indian,” are at the center of a row between their author, the anthropologist Eduardo Viveiros de Castro, and the magazine “Veja.”

The opening was written before the quarrel, but that hardly matters. If it, and the whole piece on indigenous education, is received as a statement of position, so much the better.

In any case, it is instructive to read the magazine story that gave rise to it all, as well as the replies and rejoinders that followed. It offers a glimpse of the depth of the anti-indigenous prejudice and the journalistic stridency that muddy this strand of debate in the country.

Prejudice, stridency, fallacies, inventions and statistics, in fact, turn the whole public debate into an Amazon basin of turbidity. This is not exclusive to the indigenous question. Take the Belo Monte hydroelectric dam. Or the explosive issue of the availability of land for agribusiness, the epicenter of the aforementioned “Veja” story.

“Ecological preservation areas, indigenous reserves and supposed former quilombos today cover 77.6% of Brazil’s territory,” its authors state, without citing a source. “If the count also includes agrarian-reform settlements, cities, ports, roads and other infrastructure works, the total reaches 90.6% of the national territory.”

The omitted source is probably the study “Alcance Territorial da Legislação Ambiental e Indigenista” (Territorial Reach of Environmental and Indigenist Legislation), commissioned from Embrapa Monitoramento por Satélite by the presidency of the republic and embraced by the Confederação Nacional da Agricultura e Pecuária do Brasil (CNA, read: Senator Kátia Abreu, DEM-TO). Its coordinator was the then head of the Embrapa unit, Evaristo Eduardo de Miranda. The estimate ended up under fire from several specialists, including researchers at the Instituto Nacional de Pesquisas Espaciais (Inpe).

Nesta semana veio à luz, graças às repórteres Afra Balazina e Andrea Vialli, mais um levantamento que contradiz a projeção alarmante. O novo estudo foi realizado por Gerd Sparovek, da Escola Superior de Agricultura Luiz de Queiroz (Esalq-USP), em colaboração com a Universidade de Chalmers (Suécia).

Para Miranda, se toda a legislação ambiental, fundiária e indigenista fosse cumprida à risca, faltariam 334 mil km2 – 4% do território do Brasil – para satisfazer todas as suas exigências. O valor dá quase um Mato Grosso do Sul de deficit.

Para Sparovek, mesmo que houvesse completa obediência ao Código Florestal ora sob bombardeio de ruralistas, sobraria ainda 1 milhão de km2, além de 600 mil km2 de pastagens poucos produtivas usadas para pecuária extensiva (um boi por hectare). Dá 4,5 Mato Grosso do Sul de superavit.

A disparidade abissal entre as cifras deveria bastar para ensopar as barbas de quem acredita em neutralidade científica, ou a reivindica. Premissas, interpretações da lei e fontes de dados diversas decerto explicam o hiato.

Mas quem as examina a fundo, entrando no mérito e extraindo conclusões úteis para o esclarecimento do público e a tomada de decisão? Faltam pessoas e instituições, no Brasil, com autoridade para decantar espuma e detritos, clarificando as águas para que se possa enxergar o fundo. De blogueiros e bucaneiros já estamos cheios.

>Acelerador de gente [A People Accelerator] (FSP)

>
Interview with Karin Knorr Cetina

José Galisi-Filho
Folha de SP, 2/5/2010 – reproduced in Jornal da Ciência (JC e-mail 4001)

A sociologist who studied the LHC’s researchers says the experiment does away with traditional notions of authorship and prestige

When he visited the LHC (Large Hadron Collider) in April 2008, the Scottish physicist Peter Higgs was able to contrast his own human dimensions with the gigantic scale of the largest machine humanity has ever built.

If Higgs’s hypothesis is correct, the data that began to pour out of the LHC in recent weeks will supply the last piece of the puzzle of the Standard Model, the theory of physics that explains matter. But the LHC saga is the result of the work of generations of researchers, whose names will finally dissolve into the “man-machine symbiosis” of a new paradigm of scientific cooperation, for the first time truly global.

For Karin Knorr Cetina, professor of the sociology of knowledge at the University of Konstanz, Germany, the experiment is above all a “human laboratory” on a scale unprecedented in the history of modern science.

Cetina spent 30 years observing the researchers of CERN (the European Organization for Nuclear Research), the laboratory in Switzerland that houses the LHC, in a kind of “ethnological” study of the tribe of physicists, its habits and customs. In her view, traditional notions in science, such as career, prestige, and authorship, cease to have any meaning in CERN’s model of knowledge production.

Speaking from the University of Chicago, where she is a visiting researcher, Cetina talked to Folha:

– What is new about the way knowledge is produced at CERN, and how does it compare with the humanities?

What is new is the dimension, duration, and global character of the experiment. The structure of the experiments is itself an experiment, anticipating a global age and a knowledge society. We might perhaps compare it to the bold, innovative spirit behind the development of the supersonic Concorde in the 1960s, which signaled an epochal rupture. But there is no one-sentence answer to “how” the experiment is coordinated.

Many particular mechanisms sustain the project and turn it into a kind of “superorganism”: the intimate collaboration of more than 2,000 physicists with the gigantic LHC, which they themselves designed and in which, at last, they work together. One very important mechanism is collective publication in alphabetical order. It is not the “genius,” the author, or researchers prominent in their fields who are privileged. Another mechanism is that the experiment itself, not its authors, is “invited” to international conferences.

Individual actors are merely the representatives of what they have produced together. Yet another mechanism is that participants gather, for example, for an entire week at CERN, and these meetings are organized so that everyone can, and must, be informed about everything going on. A kind of collective consciousness of “shared knowledge” is thereby established.

How might we compare this with the human sciences? Some important diagnoses of the age, by historians and philosophers, for example, still find resonance in public opinion, but unfortunately the structure and segmentation of research in that field no longer have anything interesting to offer. Traditional sociology no longer points the way forward.

– After many years of fieldwork in laboratories as an ethnographer of science, how do scientific cultures differ in the role they assign the individual?

Molecular biology, which I followed for many years, is a “bench” science in which, as a rule, a few researchers work together, and in which work is also produced and published collectively, but not in alphabetical order. The role of the individual researcher remains important. As we know, this leads to conflicts over authorship and over who occupies which position on a publication. High-energy physics, by contrast, seeks to set cooperation free, with the collective at its center. The guiding thread is no longer the career but the scientific result. The accelerator is the dominant element, for only the many can build and evaluate it.

– Is the very nature of the project incompatible with a new individual “insight” that could change everything unpredictably?

In the case of CERN, it is far more likely that team research will produce excellent empirical results. Many researchers in sociology, and in the humanities generally, produce partial, fragmented results that do not add up into a system in any cumulative perspective – not because the nature of the social is fragmented, but because our way of conducting research, our research conventions, do not add up. In many empirical sciences we must, in a cooperative process – since in nature all the parts of a system are interrelated – investigate either the whole system or know which part of the system is truly central and should be isolated and singled out. This experimental reductionism cannot be carried out in social science, for ethical reasons: it deals with people in their integrity, whom we cannot reduce to cell cultures. That would require much more cooperation and research.

>Genetic tests and the popular understanding of probabilistic phenomena

>
Genetic testing reaches the drugstore

Fernando Eichenberg
O Globo, 13/5/2010 – reproduced in Jornal da Ciência (JC e-mail 4009)

U.S. begins selling controversial tests that screen for disease risk

Starting Friday (May 14), alongside aspirin, vitamin C, and toothpaste, Americans will be able to buy personalized genetic tests at the corner drugstore, tests that promise to indicate the risk of contracting 23 diseases, including Alzheimer’s and Parkinson’s, breast cancer, leukemia, and diabetes.

The tests will be offered in some 6,000 of the 7,500 stores of the Walgreens chain, one of the two largest in the United States. It is the first time DNA-analysis tests have been available for mass consumption.

The initiative comes from Pathway Genomics, a San Diego company. The kits, called Insight Saliva Collection and priced at US$20 to US$30 apiece, contain a plastic receptacle and a standard envelope in which the collected saliva is mailed to one of the company’s laboratories for analysis. After sending the sample, the customer pays an additional fee, on the company’s website, for the desired types of DNA test.

For US$79, the tests assess the body’s reactions to substances such as caffeine, cholesterol-lowering drugs, and tamoxifen, used to treat breast cancer. For US$179, prospective parents can learn the probability that they carry any of 23 genetic conditions, such as diabetes or beta thalassemia (a type of anemia), that could be passed on to their children. For the same price, personal risks of heart attack, lung cancer, leukemia, and multiple sclerosis are tested. For those willing to spend still more, US$249 buys every test available.

Specialist says the tests are harmful

The arrival of genetic tests in drugstores has stirred controversy. Pathway Genomics argues that, although not definitive, the results of the analyses could encourage people to change their habits and adopt healthier attitudes. Not everyone agrees. For Hank Greely, director of the Center for Law and the Biosciences at Stanford University, it is a terrible idea. For the vast majority of people, he says, the genetic information will be of little value and may even be misread, with serious consequences. He gives the example of a woman who learns from the tests that she does not carry the gene mutations linked to breast cancer.

– She may conclude she is in the clear. But what does the result mean? That she does not face the roughly 70% risk of breast cancer carried by the mutation, but she still faces the population-average risk of about 12%. She may decide to stop getting screened, which would be an enormous mistake – he explains.

Greely also cites the case of someone who is told they are twice as likely to develop Alzheimer’s disease.

– Perhaps that person won’t commit suicide, but the information could change their life. They will fail to see that having a 20% rather than a 10% chance of Alzheimer’s means that in 80% of cases they will live as someone who will never have the disease – he observes.
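Greely’s two examples boil down to the arithmetic of absolute versus relative risk. The sketch below works through the percentages quoted above; the function name is ours, for illustration only.

```python
# A minimal sketch of the absolute-vs-relative risk arithmetic in Greely's
# two examples. The percentages are the ones quoted above; the function
# name is ours, for illustration only.

def absolute_risk(base_risk: float, relative_risk: float) -> float:
    """Convert a relative risk into the absolute lifetime chance a person faces."""
    return base_risk * relative_risk

# Alzheimer's example: "twice as likely" on a ~10% base rate is a 20% absolute
# risk, which still leaves an 80% chance of never developing the disease.
alz = absolute_risk(0.10, 2.0)
print(f"Alzheimer's: {alz:.0%} risk, {1 - alz:.0%} chance of never having it")

# Breast-cancer example: a negative mutation test removes the ~70% figure,
# but the population-average risk of about 12% remains.
bc = absolute_risk(0.12, 1.0)
print(f"Breast cancer: still a {bc:.0%} average risk after a negative test")
```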

For Greely, one of the greatest dangers of the new initiative is the absence of medical counseling when the results are delivered.

– For US$99 an hour you can call a number listed on the company’s website with questions about your test, but of course most people won’t pay for that – he points out.

For that reason, under its own legislation, the State of New York will not allow the kits to be sold in local drugstores. The FDA, the U.S. agency that regulates drugs and food, has announced that it is reviewing the matter and will issue an opinion soon. But the idea has already inspired CVS, Walgreens’s rival drugstore chain, which promises to sell genetic kits of its own starting in August.

>Weathermen, and other climate change skeptics (The New Yorker)

>
Comment
Up in the Air

by Elizabeth Kolbert – April 12, 2010

Joe Bastardi, who goes by the title “expert senior forecaster” at AccuWeather, has a modest proposal. Virtually every major scientific body in the world has concluded that the planet is warming, and that greenhouse-gas emissions are the main cause. Bastardi, who holds a bachelor’s degree in meteorology, disagrees. His theory, which mixes volcanism, sunspots, and a sea-temperature trend known as the Pacific Decadal Oscillation, is that the earth is actually cooling. Why don’t we just wait twenty or thirty years, he proposes, and see who’s right? This is “the greatest lab experiment ever,” he said recently on Bill O’Reilly’s Fox News show.

Bastardi’s position is ridiculous (which is no doubt why he’s often asked to air it on Fox News). Yet there it was on the front page of the Times last week. Among weathermen, it turns out, views like Bastardi’s are typical. A survey released by researchers at George Mason University found that more than a quarter of television weathercasters agree with the statement “Global warming is a scam,” and nearly two-thirds believe that, if warming is occurring, it is caused “mostly by natural changes.” (The survey also found that more than eighty per cent of weathercasters don’t trust “mainstream news media sources,” though they are presumably included in this category.)

Why, with global warming, is it always one step forward, two, maybe three steps back? A year ago, it looked as if the so-called climate debate might finally be over, and the business of actually addressing the problem about to begin. In April, the Obama Administration designated CO2 a dangerous pollutant, thus taking the first critical step toward regulating carbon emissions. The following month, the Administration announced new fuel-efficiency standards for cars. (These rules were finalized last week.) In June, the House of Representatives passed a bill, named for its co-sponsors, Edward Markey and Henry Waxman, that called for reducing emissions seventeen per cent by 2020. Speaking in September at the United Nations, the President said that a “new era” had dawned. “We understand the gravity of the climate threat,” he declared. “We are determined to act.”

Then, much like the Arctic ice cap, that “new era” started to fall to pieces. The U.N. climate summit in Copenhagen in December broke up without agreement even on a possible outline for a future treaty. A Senate version of the Markey-Waxman bill failed to materialize and, it’s now clear, won’t be materializing anytime this year. (Indeed, the one thing that seems certain not to be in a Senate energy bill is the economy-wide emissions reduction required by the House bill.) Last week, despite the Senate’s inaction, President Obama announced that he was opening huge swaths of the Atlantic and Alaskan coasts to oil drilling. The White House billed the move as part of a “comprehensive energy strategy,” a characterization that, as many commentators pointed out, made no sense, since comprehensiveness is precisely what the President’s strategy lacks. As Josh Nelson put it on the blog EnviroKnow, “Obama is either an exceptionally bad negotiator, or he actually believes in some truly awful policy ideas. Neither of these possibilities bodes well.”

As lawmakers dither, public support for action melts away. In a Gallup poll taken last month, forty-eight per cent of respondents said that they believe the threat of global warming to be “generally exaggerated.” This figure was up from thirty-five per cent just two years ago. According to the same poll, only fifty-two per cent of Americans believe that “most scientists believe that global warming is occurring,” down from sixty-five per cent in 2008.

The most immediate explanation for this disturbing trend is the mess that’s come to be known as Climategate. Here the situation is the reverse of what’s going on in the troposphere: Climategate really is a hyped-up media phenomenon. Late last year, hackers broke into the computer system at the Climatic Research Unit of Britain’s University of East Anglia and posted online hundreds of private e-mails from scientists. In the e-mails, C.R.U. researchers often express irritation with their critics—the death of one detractor is described as “cheering news”—and discuss ways to dodge a slew of what they consider to be nuisance Freedom of Information requests. The e-mails were widely portrayed in the press and in the blogosphere as evidence of a conspiracy to misrepresent the data. But, as a parliamentary committee appointed to investigate the matter concluded last week, this charge is so off base that it is difficult even to respond to: “Insofar as the committee was able to consider accusations of dishonesty against CRU, the committee considers that there is no case to answer.”

The e-mail brouhaha was followed by—and immediately confused with—another overblown controversy, about a mistake in the second volume of the U.N. Intergovernmental Panel on Climate Change’s Fourth Assessment Report, from 2007. On page 493 of the nine-hundred-and-seventy-six-page document, it is asserted, incorrectly, that the Himalayan glaciers could disappear by 2035. (The report cites as a source for this erroneous information a report by the World Wildlife Fund.) The screw-up, which was soon acknowledged by the I.P.C.C. and the W.W.F., was somehow transformed by commentators into a reason to doubt everything in the three-volume assessment, including, by implication, the basic laws of thermodynamics. The “new scandal (already awarded the unimaginative name of ‘Glaciergate’) raises further challenges for a scientific theory that is steadily losing credibility,” James Heiser wrote on the Web site of the right-wing magazine New American.

No one has ever offered a plausible account of why thousands of scientists at hundreds of universities in dozens of countries would bother to engineer a climate hoax. Nor has anyone been able to explain why Mother Nature would keep playing along; despite what it might have felt like in the Northeast these past few months, globally it was one of the warmest winters on record.

The message from scientists at this point couldn’t be clearer: the world’s emissions trajectory is extremely dangerous. Goofball weathermen, Climategate, conspiracy theories—these are all a distraction from what’s really happening. Which, apparently, is what we’re looking for. ♦

Read more: http://www.newyorker.com/talk/comment/2010/04/12/100412taco_talk_kolbert

>Does religious metaphor help in understanding science? The Higgs boson

>
In search of the “God Particle”

Site Inovação Tecnológica newsroom – 02/04/2007

Atlas was one of the Titans of Greek mythology, condemned to hold up the heavens on his shoulders forever. Here, ATLAS is one of the four gigantic detectors of the world’s largest particle accelerator, the LHC, which is in an advanced stage of testing and should begin operating in the coming months.

LHC stands for “Large Hadron Collider,” a machine that smashes together hadrons such as protons. It is hard to overstate the scale of this laboratory, being built 100 meters underground on the border between France and Switzerland. The complete structure takes the form of a ring, laid out along a tunnel 27 kilometers in circumference.

Magnetic fields accelerate the particles along that 27-km orbit until they reach extremely high energies: 7 trillion electron volts (7 TeV). At four points on the ring, at temperatures only slightly above absolute zero, the particles collide, producing a shower of other particles and recreating an environment much like the conditions that prevailed instants after the Big Bang.

Four detectors sit at those four points. ATLAS, shown in the photo in its final stages of assembly, is one of them. Like the second detector, the CMS (“Compact Muon Solenoid”), ATLAS is a general-purpose detector, able to register any type of particle, including particles still unknown or not predicted by theory. The LHCb and ALICE, by contrast, are “dedicated” detectors, built to study specific physical phenomena.

The Higgs boson

When the protons collide at the center of the detectors, the particles generated scatter in every direction. To capture them, ATLAS and CMS carry numerous superimposed layers of sensors, which must determine the properties of these particles, measure their energies, and reconstruct the paths they follow.

The scientists’ greatest hope is to find the Higgs boson, the only piece still missing from the puzzle that would explain the “materiality” of our universe. For a long time the atom was believed to be the indivisible unit of matter. Then scientists discovered that the atom itself was the result of interactions among still more fundamental particles, and they went on to discover those particles one by one. Counting quarks and leptons, fermions and bosons, there are 16 fundamental particles: 12 particles of matter and 4 force carriers.

The God Particle

The problem is that the theory, on its own, does not explain where the mass of these particles comes from. In other words, after all our scientific advances, we still do not know what gives “materiality” to our world. The Standard Model, the basic theory of physics that describes the interactions of all the subatomic particles, places all its bets on the Higgs boson, the fundamental particle that would explain how mass arises in this sea of energies. That is why scientists call it the “God Particle.”

The Standard Model has enormous explanatory power. All of our science and technology grew out of it. But scientists know its shortcomings. The theory covers only what we call “ordinary matter,” the matter of which we are made and which our senses can detect.

If the theory cannot say why we have mass, it becomes clear that the Standard Model gives good answers about how “the thing works” but falls silent when the question is what “the thing” is. The Standard Model also does not explain gravity. And it makes no claim to account for the remaining 95% of our universe, presumably filled with two other “things” we cannot yet identify: dark energy and dark matter.

That is why so much faith is placed in the God Particle. It could explain the mass of all the other particles. The Higgs boson itself would be something like a uniform field of energy. Unlike gravity, which is stronger where there is more mass, this Higgs energy field would be constant. It could thus be the source not only of ordinary matter’s mass but of dark energy itself.

Within two or three years we will know whether the theory is correct. Or perhaps we will find ourselves facing an entirely new world, one demanding new theories, new equipment, and new discoveries.

>A benchmark in climate forecasting

>
Agência FAPESP – 26/3/2010

The Center for Weather Forecasting and Climate Studies (CPTEC) of Brazil’s National Institute for Space Research (Inpe) is set to join a select group of world centers for seasonal climate prediction.

The center has been recommended by the Commission for Basic Systems of the World Meteorological Organization (WMO) as a Global Producing Centre (GPC) of long-range forecasts.

Once the WMO confirms the recommendation, expected in June of this year, CPTEC will receive a seal of quality for its seasonal climate forecasts. In the assessment of the WMO’s experts, CPTEC meets several of the criteria, notably in the methodology it employs, its dissemination of forecast products on the internet, and its fixed operational cycle of seasonal climate prediction.

In return, the center will take part in the WMO’s international activities, contributing to the Lead Centre for Long-Range Forecast Verification. Since 2006 the WMO, through its Global Data-Processing and Forecasting System programme, has certified research and climate-prediction centers that meet certain requirements, designating them GPCs for long-range forecasts.

With the recommendation, CPTEC will also join the ranks of the world’s seasonal-climate-prediction centers, such as the National Centers for Environmental Prediction (United States), the European Centre for Medium-Range Weather Forecasts, the UK Met Office, Météo-France, the Meteorological Service of Canada, Australia’s Bureau of Meteorology, and the Japan Meteorological Agency, among others.

>Understanding Scientific Terms About Climate Change

>
Certainty vs. Uncertainty

Union of Concerned Scientists – http://www.ucsusa.org
Last Revised: 03/17/10

Uncertainty is ubiquitous in our daily lives. We are uncertain about where to go to college, when and if to get married, who will play in the World Series, and so on.

To most of us, uncertainty means not knowing. To scientists, however, uncertainty is a measure of how well something is known. And therein lies an important difference, especially when trying to understand what is known about climate change.

In science, there’s no such thing as absolute certainty. But, research reduces uncertainty. In many cases, theories have been tested and analyzed and examined so thoroughly that their chance of being wrong is infinitesimal. Other times, uncertainties linger despite lengthy research. In those cases, scientists make it their job to explain how well something is known. When gaps in knowledge exist, scientists qualify the evidence to ensure others don’t form conclusions that go beyond what is known.

Even though it may seem counterintuitive, scientists like to point out the level of uncertainty. Why? Because they want to be as transparent as possible, and because it shows how well certain phenomena are understood. Scientists have even developed their own calibrated phrasing for uncertainty, such as “very high confidence” (9 out of 10 chances of being correct) to describe a fact and “very likely” (90 chances out of 100) to describe the chance of an outcome.

Decision makers in our society use scientific input all the time. But they could make a critically wrong choice if the unknowns aren’t taken into account. For instance, city planners could build a levee too low or not evacuate enough coastal communities along an expected landfall zone of a hurricane if uncertainty is understated. For these reasons, uncertainty plays a key role in informing public policy.

However, this culture of transparency has caused problems for climate change science. Climate change deniers equate uncertainty in projections with not knowing anything. The truth is, science knows a great deal about climate change. We have learned, for example, that burning fossil fuels and clearing or burning land creates carbon dioxide (CO2), which is released into the atmosphere. There is no uncertainty about this. We have learned that carbon dioxide and other greenhouse gases build up in the atmosphere and trap heat through the greenhouse effect.

Again, there is no uncertainty about this. Earth is warming, and scientists are very certain that humans are the main reason for the world’s temperature increase in the past 50 years.

Scientists know with very high confidence, or even greater certainty, that:

  • Human-induced warming influences physical and biological systems throughout the world
  • Sea levels are rising
  • Glaciers and permafrost are shrinking
  • Oceans are becoming more acidic
  • Ranges of plants and animals are shifting

Scientists are uncertain, however, about how much global warming will occur in the future (between 2.1 degrees and 11 degrees Fahrenheit by 2100). They are also uncertain how soon the sea-ice habitat where the ringed seal lives will disappear. Curiously, much of this uncertainty has to do with—are you ready?—humans. The choices we make in the next decade or so to reduce emissions of heat-trapping gases could prevent catastrophic climate change.

So, what’s the bottom line? Science has learned much about climate change. Science tells us what is more or less likely to be true. The latest climate science underscores that there’s an urgent need to reduce heat-trapping emissions. And that is certain.

Table: Language to describe confidence about facts and the likelihood of an outcome. SOURCE: IPCC WGI (2007).

Terminology for describing confidence about facts
  Very high confidence – At least 9 out of 10 chance of being correct
  High confidence – About 8 out of 10 chance
  Medium confidence – About 5 out of 10 chance
  Low confidence – About 2 out of 10 chance
  Very low confidence – Less than 1 out of 10 chance

Terminology for describing likelihood of an outcome
  Virtually certain – More than 99 chances out of 100
  Extremely likely – More than 95 chances out of 100
  Very likely – More than 90 chances out of 100
  Likely – More than 66 chances out of 100
  More likely than not – More than 50 chances out of 100
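Read as data, the two vocabularies above are simply lookup tables from calibrated phrases to probability thresholds. The sketch below transcribes them in Python; the dictionary and function names are ours, for illustration only.

```python
# The IPCC's calibrated vocabulary above, transcribed as lookup tables.
# Dictionary and function names are ours, for illustration only.

CONFIDENCE = {  # confidence about facts: approximate chance of being correct
    "very high confidence": "at least 9 in 10",
    "high confidence": "about 8 in 10",
    "medium confidence": "about 5 in 10",
    "low confidence": "about 2 in 10",
    "very low confidence": "less than 1 in 10",
}

LIKELIHOOD = {  # likelihood of an outcome: minimum chance out of 100
    "virtually certain": 99,
    "extremely likely": 95,
    "very likely": 90,
    "likely": 66,
    "more likely than not": 50,
}

def describe(term: str) -> str:
    t = term.lower()
    if t in LIKELIHOOD:
        return f"'{term}': more than {LIKELIHOOD[t]} chances out of 100"
    if t in CONFIDENCE:
        return f"'{term}': {CONFIDENCE[t]} chance of being correct"
    return f"'{term}' is not part of the calibrated language"

print(describe("very likely"))        # 'very likely': more than 90 chances out of 100
print(describe("medium confidence"))  # 'medium confidence': about 5 in 10 chance of being correct
```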


>The clouds of unknowing (The Economist)

>
The science of climate change

There are lots of uncertainties in climate science. But that does not mean it is fundamentally wrong

Mar 18th 2010 | From The Economist print edition

FOR anyone who thinks that climate science must be unimpeachable to be useful, the past few months have been a depressing time. A large stash of e-mails from and to investigators at the Climatic Research Unit of the University of East Anglia provided more than enough evidence for concern about the way some climate science is done. That the picture they painted, when seen in the round—or as much of the round as the incomplete selection available allows—was not as alarming as the most damning quotes taken out of context is little comfort. They offered plenty of grounds for both shame and blame.

At about the same time, glaciologists pointed out that a statement concerning Himalayan glaciers in the most recent report of the Intergovernmental Panel on Climate Change (IPCC) was wrong. This led to the discovery of other poorly worded or poorly sourced claims made by the IPCC, which seeks to create a scientific consensus for the world’s politicians, and to more general worries about the panel’s partiality, transparency and leadership. Taken together, and buttressed by previous criticisms, these two revelations have raised levels of scepticism about the consensus on climate change to new heights.

Increased antsiness about action on climate change can also be traced to the recession, the unedifying spectacle of last December’s climate-change summit in Copenhagen, the political realities of the American Senate and an abnormally cold winter in much of the northern hemisphere. The new doubts about the science, though, are clearly also a part of that story. Should they be?

In any complex scientific picture of the world there will be gaps, misperceptions and mistakes. Whether your impression is dominated by the whole or the holes will depend on your attitude to the project at hand. You might say that some see a jigsaw where others see a house of cards. Jigsaw types have in mind an overall picture and are open to bits being taken out, moved around or abandoned should they not fit. Those who see houses of cards think that if any piece is removed, the whole lot falls down. When it comes to climate, academic scientists are jigsaw types, dissenters from their view house-of-cards-ists.

The defenders of the consensus tend to stress the general consilience of their efforts—the way that data, theory and modelling back each other up. Doubters see this as a thoroughgoing version of “confirmation bias”, the tendency people have to select the evidence that agrees with their original outlook. But although there is undoubtedly some degree of that (the errors in the IPCC, such as they are, all make the problem look worse, not better) there is still genuine power to the way different arguments and datasets in climate science tend to reinforce each other.

The doubters tend to focus on specific bits of empirical evidence, not on the whole picture. This is worthwhile—facts do need to be well grounded—but it can make the doubts seem more fundamental than they are. People often assume that data are simple, graspable and trustworthy, whereas theory is complex, recondite and slippery, and so give the former priority. In the case of climate change, as in much of science, the reverse is at least as fair a picture. Data are vexatious; theory is quite straightforward. Constructing a set of data that tells you about the temperature of the Earth over time is much harder than putting together the basic theoretical story of how the temperature should be changing, given what else is known about the universe in general.

Absorb and reflect

The most relevant part of that universal what-else is the requirement laid down by thermodynamics that, for a planet at a constant temperature, the amount of energy absorbed as sunlight and the amount emitted back to space in the longer wavelengths of the infra-red must be the same. In the case of the Earth, the amount of sunlight absorbed is 239 watts per square metre. According to the laws of thermodynamics, a simple body emitting energy at that rate should have a temperature of about –18ºC. You do not need a comprehensive set of surface-temperature data to notice that this is not the average temperature at which humanity goes about its business. The discrepancy is due to greenhouse gases in the atmosphere, which absorb and re-emit infra-red radiation, and thus keep the lower atmosphere, and the surface, warm. The radiation that gets out to the cosmos comes mostly from above the bulk of the greenhouse gases, where the air temperature is indeed around –18ºC.
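That –18ºC figure can be checked on the back of an envelope with the Stefan-Boltzmann law, which relates the power a body radiates to the fourth power of its temperature. A minimal sketch, assuming only the 239 watts per square metre quoted above (the law itself is standard physics, not something the article spells out):

```python
# Back-of-the-envelope check of the -18 C figure. By the Stefan-Boltzmann law,
# a body radiating F watts per square metre sits at T = (F / sigma)^(1/4).

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed = 239.0  # sunlight absorbed by the Earth, W m^-2 (figure quoted in the text)

t_effective = (absorbed / SIGMA) ** 0.25
print(f"{t_effective:.0f} K, i.e. {t_effective - 273.15:.0f} C")
# -> about 255 K, i.e. about -18 C: the temperature the planet radiates at,
#    well below the average temperature at the surface.
```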

Adding to those greenhouse gases in the atmosphere makes it harder still for the energy to get out. As a result, the surface and the lower atmosphere warm up. This changes the average temperature, the way energy moves from the planet’s surface to the atmosphere above it and the way that energy flows from equator to poles, thus changing the patterns of the weather.

No one doubts that carbon dioxide is a greenhouse gas, good at absorbing infra-red radiation. It is also well established that human activity is putting more of it into the atmosphere than natural processes can currently remove. Measurements made since the 1950s show the level of carbon dioxide rising year on year, from 316 parts per million (ppm) in 1959 to 387ppm in 2009. Less direct records show that the rise began about 1750, and that the level was stable at around 280ppm for about 10,000 years before that. This fits with human history: in the middle of the 18th century people started to burn fossil fuels in order to power industrial machinery. Analysis of carbon isotopes, among other things, shows that the carbon dioxide from industry accounts for most of the build-up in the atmosphere.

The serious disagreements start when discussion turns to the level of warming associated with that rise in carbon dioxide. For various reasons, scientists would not expect temperatures simply to rise in step with the carbon dioxide (and other greenhouse gases). The climate is a noisy thing, with ups and downs of its own that can make trends hard to detect. What’s more, the oceans can absorb a great deal of heat—and there is evidence that they have done so—and in storing heat away, they add inertia to the system. This means that the atmosphere will warm more slowly than a given level of greenhouse gas would lead you to expect.

There are three records of land-surface temperature put together from thermometer readings in common use by climatologists, one of which is compiled at the Climatic Research Unit of e-mail infamy. They all show warming, and, within academia, their reliability is widely accepted. Various industrious bloggers are not so convinced. They think that adjustments made to the raw data introduce a warming bias. They also think the effects of urbanisation have confused the data because towns, which are sources of heat, have grown up near weather stations. Anthony Watts, a retired weather forecaster who blogs on climate, has set up a site, surfacestations.org, where volunteers can help record the actual sites of weather instruments used to provide climate data, showing whether they are situated close to asphalt or affected by sources of bias.

Those who compile the data are aware of this urban heat-island effect, and try in various ways to compensate for it. Their efforts may be insufficient, but various lines of evidence suggest that any errors it is inserting are not too bad. The heat-island effect is likely to be strongest on still nights, for example, yet trends from data recorded on still nights are not that different from those from windy ones. And the temperature of waters at the surface of the seas shows similar trends to that on land over the past century, as does the record of air temperature over the oceans as measured at night.

A recent analysis by Matthew Menne and his colleagues at America’s National Oceanic and Atmospheric Administration, published in the Journal of Geophysical Research, argued that trends calculated from climate stations that surfacestations.org found to be poorly sited and from those it found well sited were more or less indistinguishable. Mr Watts has problems with that analysis, and promises a thorough study of the project’s findings later.

There is undoubtedly room for improvement in the surface-temperature record—not least because, at the moment, it provides only monthly mean temperatures, and there are other things people would like to know about. (When worrying about future heatwaves, for example, hot days and nights, not hot months, are the figures of most interest.) In February Britain’s Met (ie, meteorological) Office called for the creation of a new set of temperature databases compiled in rigorously transparent ways and open to analysis and interpretation by all and sundry. Such an initiative would serve science well, help restore the credibility of land-surface records, and demonstrate an openness on the part of climate science which has not always been evident in the past.

Simplify and amplify

For many, the facts that an increase in carbon dioxide should produce warming, and that warming is observed in a number of different indicators and measurements, add up to a prima facie case for accepting that greenhouse gases are warming the Earth and that the higher levels of greenhouse gases that business as usual would bring over the course of this century would warm it a lot further.

The warming caused by a given increase in carbon dioxide can be calculated on the basis of laboratory measurements which show how much infra-red radiation at which specific wavelengths carbon dioxide molecules absorb. This sort of work shows that if you double the carbon dioxide level you get about 1ºC of warming. So the shift from the pre-industrial 280ppm to 560ppm, a level which on current trends might be reached around 2070, makes the world a degree warmer. If the level were to double again, to 1,120ppm, which seems unlikely, you would get another degree.
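The reason each doubling, rather than each extra part per million, adds roughly the same warming is that carbon dioxide’s radiative effect grows with the logarithm of its concentration. A sketch of that arithmetic, using the widely used simplified forcing formula of Myhre et al. (1998) and a no-feedback response of roughly 0.3ºC per watt per square metre (both standard textbook values, not numbers taken from the article):

```python
import math

# Sketch of why each *doubling* adds the same warming. Uses the simplified
# forcing formula of Myhre et al. (1998), dF = 5.35 * ln(C/C0) W/m^2, and a
# no-feedback response of ~0.3 C per W/m^2; neither number comes from the
# article itself, both are standard textbook values.

def co2_forcing(c_new_ppm: float, c_old_ppm: float) -> float:
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

NO_FEEDBACK_RESPONSE = 0.3  # C of direct warming per W/m^2

for level in (560, 1120):   # one doubling, then a second, from 280 ppm
    warming = NO_FEEDBACK_RESPONSE * co2_forcing(level, 280)
    print(f"{level} ppm: ~{warming:.1f} C of direct warming since pre-industrial")
# 560 ppm gives ~1.1 C and 1120 ppm gives ~2.2 C: each doubling adds the
# same increment, roughly the "about 1 C" in the text.
```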

The amount of warming expected for a doubling of carbon dioxide has become known as the “climate sensitivity”—and a climate sensitivity of one degree would be small enough to end most climate-related worries. But carbon dioxide’s direct effect is not the only thing to worry about. Several types of feedback can amplify its effect. The most important involve water vapour, which is now quite well understood, and clouds, which are not. It is on these areas that academic doubters tend to focus.

As carbon dioxide warms the air it also moistens it, and because water vapour is a powerful greenhouse gas, that will provide further warming. Other things people do—such as clearing land for farms, and irrigating them—also change water vapour levels, and these can be significant on a regional level. But globally those effects are not as large as the temperature-driven moistening.

Climate doubters raise various questions about water vapour, some trivial, some serious. A trivial one is to argue that because water vapour is such a powerful greenhouse gas, carbon dioxide is unimportant. But this ignores the fact that the level of water vapour depends on temperature. A higher level of carbon dioxide, by contrast, governs temperature, and can endure for centuries.

A more serious doubting point has to do with the manner of the moistening. In the 1990s Richard Lindzen, a professor of meteorology at the Massachusetts Institute of Technology, pointed out that there were ways in which moistening might not greatly enhance warming. The subsequent two decades have seen much observational and theoretical work aimed at this problem. New satellites can now track water vapour in the atmosphere far better than before. As a result preliminary estimates based on simplifications have been shown to be reasonably robust, with water-vapour feedbacks increasing the warming to be expected from a doubling of carbon dioxide from 1ºC without water vapour to about 1.7ºC. Dr Lindzen agrees that for parts of the atmosphere without clouds this is probably about right.
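In textbook feedback terms, going from 1ºC to about 1.7ºC corresponds to a feedback gain of roughly 0.4, since the direct warming is amplified by a factor of 1/(1 - f). A minimal sketch of that standard relation (the gain framing is ours, not the article’s):

```python
# The classic feedback relation: total warming = direct warming / (1 - f),
# valid for a gain f < 1. The framing in terms of f is ours; the 1 C and
# 1.7 C figures are the ones quoted in the text.

def amplified_warming(direct_c: float, feedback_gain: float) -> float:
    return direct_c / (1.0 - feedback_gain)

# A water-vapour gain of about 0.4 turns ~1 C of direct warming into ~1.7 C.
print(f"{amplified_warming(1.0, 0.41):.1f} C")  # -> 1.7 C
```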

This moistening offers a helpful way to see what sort of climate change is going on. When water vapour condenses into cloud droplets it gives up energy and warms the surrounding air. This means that in a world where greenhouse warming is wetting the atmosphere, the lower parts of the atmosphere should warm at a greater rate than the surface, most notably in the tropics. At the same time, in an effect that does not depend on water vapour, an increase in carbon dioxide will cause the upper stratosphere to cool. This pattern of warming down below and cooling up on top is expected from greenhouse warming, but would not be expected if something other than the greenhouse effect was warming the world: a hotter sun would heat the stratosphere more, not less.

During the 1990s this was a point on which doubters laid considerable weight, because satellite measurements did not show the warming in the lower atmosphere that theory would predict. Over the past ten years, though, this picture has changed. To begin with, only one team was turning data from the relevant instruments that have flown on weather satellites since the 1970s into a temperature record resolved by altitude. Now others have joined them, and identified errors in the way that the calculations (which are complex and depend on a number of finicky details) were carried out. Though different teams still get different amounts and rates of warming in the lower atmosphere, there is no longer any denying that warming is seen. Stratospheric cooling is complicated by the effects of ozone depletion, but those do not seem large enough to account for the degree of cooling that has been seen there, further strengthening the case for warming by the greenhouse effect and not some other form of climate perturbation.

On top of the effect of water vapour, though, the clouds that form from it provide a further and greater source of uncertainty. On the one hand, the droplets of water of which these are made also have a strong greenhouse effect. On the other, water vapour is transparent, whereas clouds reflect light. In particular, they reflect sunlight back into space, stopping it from being absorbed by the Earth. Clouds can thus have a marked cooling effect and also a marked warming effect. Which will grow more in a greenhouse world?

Model maze

It is at this point that detailed computer models of the climate need to be called into play. These models slice the atmosphere and oceans into stacks of three-dimensional cells. The state of the air (temperature, pressure, etc) within each cell is continuously updated on the basis of what its state used to be, what is going on in adjacent cells and the greenhousing and other properties of its contents.

These models are phenomenally complex. They are also gross oversimplifications. The size of the cells stops them from explicitly capturing processes that take place at scales smaller than a hundred kilometres or so, which includes the processes that create clouds.

Despite their limitations, climate models do capture various aspects of the real world’s climate: seasons, trade winds, monsoons and the like. They also put clouds in the places where they are seen. When used to explore the effect of an increase in atmospheric greenhouse gases on the climate these models, which have been developed by different teams, all predict more warming than greenhouse gases and water-vapour feedback can supply unaided. The models assessed for the IPCC’s fourth report had sensitivities ranging from 2.1ºC to 4.4ºC. The IPCC estimated that if clouds were not included, the range would be more like 1.7ºC to 2.1ºC. So in all the models clouds amplify warming, and in some the amplification is large.
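The cell-by-cell update described above is, in skeleton, the same loop that drives any finite-difference simulation. The toy sketch below shows the idea for a single variable diffusing between neighbouring cells on a tiny two-dimensional grid; it illustrates the bookkeeping only, with none of a real model’s physics:

```python
import copy

# Toy illustration of the cell-by-cell update loop: one variable
# ("temperature") relaxing toward the mean of its lateral neighbours on a
# tiny 2-D grid. Real climate models do this in 3-D, for many variables,
# with real physics; nothing here is meant to resemble that physics.

def step(grid, diffusion=0.1):
    """Advance every cell one time step from its own state and its neighbours'."""
    new = copy.deepcopy(grid)
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            neighbours = [grid[x][y]
                          for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                          if 0 <= x < rows and 0 <= y < cols]
            mean = sum(neighbours) / len(neighbours)
            new[i][j] = grid[i][j] + diffusion * (mean - grid[i][j])
    return new

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 10.0            # start with a single warm cell
for _ in range(50):
    grid = step(grid)
print(round(grid[2][2], 2))  # the warmth has spread to neighbouring cells
```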

However, there are so far no compelling data on how clouds are affecting warming in fact, as opposed to in models. Ray Pierrehumbert, a climate scientist at the University of Chicago who generally has a strong way with sceptics, is happy to agree that there might be processes by which clouds rein in, rather than exaggerate, greenhouse-warming effects, but adds that, so far, few have been suggested in any way that makes sense.

Dr Lindzen and a colleague suggested a plausible mechanism in 2001. They proposed that tropical clouds in an atmosphere with more greenhouse gas might dry out neighbouring parts of the sky, making them more transparent to outgoing infra-red. The evidence Dr Lindzen brought to bear in support of this was criticised in ways convincing enough to discourage other scientists from taking the idea further. A subsequent paper by Dr Lindzen on observations that would be compatible with his ideas about low sensitivity has also suffered significant criticisms, and he accepts many of them. But having taken them on board has not, he thinks, invalidated his line of research.

Arguments based on past climates also suggest that sensitivity is unlikely to be low. Much of the cooling during the ice ages was maintained by the presence of a large northern hemisphere ice cap reflecting away a lot of sunlight, but carbon dioxide levels were lower, too. To account for all of the cooling, especially in the southern hemisphere, is most easily done with a sensitivity of temperature to carbon dioxide higher than Dr Lindzen would have it.

Before the ice age, the Earth had a little more carbon dioxide and was a good bit warmer than today—which suggests a fairly high sensitivity. More recently, the dip in global temperatures after the eruption of Mt Pinatubo in the Philippines in 1991, which inserted a layer of sunlight-diffusing sulphur particles into the stratosphere, also bolsters the case for a sensitivity near the centre of the model range—although sensitivity to a transient event and the warming that follows a slow doubling of carbon dioxide are not exactly the same sort of thing.

Logs and blogs

Moving into data from the past, though, brings the argument to one of the areas that blog-based doubters have chosen as a preferred battleground: the temperature record of the past millennium, as construed from natural records that are both sensitive to temperature and capable of precise dating. Tree rings are the obvious, and most controversial, example. Their best known use has been in a reconstruction of temperatures over the past millennium published in Nature in 1998 and widely known as the hockey stick, because it was mostly flat but had a blade sticking up at the 20th-century end. Stephen McIntyre, a retired Canadian mining consultant, was struck by the very clear message of this graph and delved into the science behind it, a process that left him and followers of his blog, Climate Audit, intensely sceptical about its value.

In 2006 a review by America’s National Research Council endorsed points Mr McIntyre and his colleagues made on some methods used to make the hockey stick, and on doubts over a specific set of tree rings. Despite this it sided with the hockey stick’s overall conclusion, which did little to stem the criticism. The fact that tree-ring records do not capture recent warming adds to the scepticism about the value of such records.

For many of Mr McIntyre’s fans (though it is not, he says, his central concern) the important thing about this work is that the hockey stick seemed to abolish the “medieval warm period”. This is a time when temperatures are held to have been as high as or higher than today’s—a warmth associated with the Norse settlement of Greenland and vineyards in England. Many climate scientists suspect this phenomenon was given undue prominence by climatologists of earlier generations with an unduly Eurocentric view of the world. There is evidence for cooling at the time in parts of the Pacific.

Doubters for the most part are big fans of the medieval warm period, and see in the climate scientists’ arguments an attempt to rewrite history so as to maximise the drama of today’s warming and minimise the possibility that natural variation might explain the 20th-century record. The possibility of more climatic variability, though, does not, in itself, mean that greenhouse warming is not happening too. And if the medieval warmth were due to some external factor, such as a slightly brighter sun, that would suggest that the climate was indeed quite sensitive.

Looking at the more recent record, logged as it has been by thermometers, you might hope it could shed light on which of the climate models is closest to being right, and thus what the sensitivity actually is. Unfortunately, other confounding factors make this difficult. Greenhouse gases are not the only climatically active ingredients that industry, farming and land clearance add to the atmosphere. There are also aerosols—particles of pollution floating in the wind. Some aerosols cool the atmosphere. Other, sootier, ones warm it. The aggregate effect, globally, is thought to be a cooling, possibly a quite strong one. But the overall history of aerosols, which are mostly short-lived, is nothing like as well known as that of greenhouse gases, and it is unlikely that any of the models are properly capturing their chemistry or their effects on clouds.

Taking aerosols into account, climate models do a pretty good job of emulating the climate trends of the 20th century. This seems odd, since the models have different sensitivities. In practice, it appears that the way the aerosols are dealt with in the models and the sensitivity of those models tend to go hand in hand; sensitive models also have strong cooling aerosol effects.

Reto Knutti of ETH Zurich, an expert on climate sensitivity, sees this as evidence that, consciously or unconsciously, aerosols are used as counterweights to sensitivity to ensure that the trends look right. This is not evidence of dishonesty, and it is not necessarily a bad thing. Since the models need to be able to capture the 20th century, putting them together in such a way that they end up doing so makes sense. But it does mean that looking at how well various models match the 20th century does not give a good indication of the climate’s actual sensitivity to greenhouse gas.

Adding the uncertainties about sensitivity to uncertainties about how much greenhouse gas will be emitted, the IPCC expects the temperature to have increased by 1.1ºC to 6.4ºC over the course of the 21st century. That low figure would sit fairly well with the sort of picture that doubters think science is ignoring or covering up. In this account, the climate has natural fluctuations larger in scale and longer in duration (such as that of the medieval warm period) than climate science normally allows, and the Earth’s recent warming is caused mostly by such a fluctuation, the effects of which have been exaggerated by a contaminated surface-temperature record. Greenhouse warming has been comparatively minor, this argument would continue, because the Earth’s sensitivity to increased levels of carbon dioxide is lower than that seen in models, which have an inbuilt bias towards high sensitivities. As a result subsequent warming, even if emissions continue full bore, will be muted too.

It seems unlikely that the errors, misprisions and sloppiness in a number of different types of climate science might all favour such a minimised effect. That said, the doubters tend to assume that climate scientists are not acting in good faith, and so are happy to believe exactly that. Climategate and the IPCC’s problems have reinforced this position.

Using the IPCC’s assessment of probabilities, the sensitivity to a doubling of carbon dioxide of less than 1.5ºC in such a scenario has perhaps one chance in ten of being correct. But if the IPCC were underestimating things by a factor of five or so, that would still leave only a 50:50 chance of such a desirable outcome. The fact that the uncertainties allow you to construct a relatively benign future does not allow you to ignore futures in which climate change is large, and in some of which it is very dangerous indeed. The doubters are right that uncertainties are rife in climate science. They are wrong when they present that as a reason for inaction.


>What is the best way to provide people with information about climate change?

>
Nov 7th, 2009
Climate Central

There are many ways that people can benefit from having information about climate change, including being able to make informed policy and management decisions. This is one reason why people are talking about creating a national climate service. So, what functions would a national climate service provide?

A good place to start is with an organization that has a similar name and purpose—the National Weather Service, a government agency that was established in the late 1800s. The importance of the Weather Service is almost too obvious to mention. Without accurate reports about the current weather and predictions of future weather, planes would fly into thunderstorms unawares, ships would plow directly into hurricanes and typhoons, and people wouldn’t know about blizzards barreling down on them. Also, planning for pretty much any outdoor activity would become a lot more difficult. Without good weather forecasts, the losses in economic terms and in human lives would be huge.

Climate change unfolds on a slower scale—over decades rather than in hours. But now that we know it is happening, the need for forecasting how climate change will impact us has become clear as well. Knowing how much sea level is likely to rise, and how quickly, is crucial to knowing how to protect coastal areas from increased damage. Knowing how hurricane frequency and strength might change could affect building codes and evacuation strategies. Knowing how the intensity and frequency of droughts and heat waves might change would help city and regional planners manage water resources and mitigate threats to local economies.

The knowledge that these changes will come mostly from an increase in atmospheric levels of greenhouse gases could inform decisions about how to produce and use energy, and whether to develop alternative energy and other green technologies. If the world decides that limiting climate change is a priority, then this green technology could be an economic boon to the countries that perfect it.

Realizing that businesses, local governments, and individuals need the most reliable forecasts possible of how, when, and where the climate is likely to change, and what the impacts might be, universities, government agencies, and private companies have come together over the past year or so to figure out how such a climate service might operate—how it would organize information and how it would deliver that information in the most useful way.

>Living on Earth: Climate Confusion and "Climategate"

>
Air Date: March 5, 2010
http://www.loe.org


“Climategate” has damaged the credentials of the Intergovernmental Panel on Climate Change, and decades of science on global warming. But as scientists push back against efforts to dismiss the threat of global warming, some media watchers say journalists aren’t balancing their coverage of climate change with the scientifically sound other side of the story – that the impacts of a warming world could be worse than the IPCC predicts. Host Jeff Young talks with media experts and scientists about the fallout of the hacked e-mail scandal, and how to repair the damage. (12:00)

>Climate scientists to fight back at skeptics (The Washington Times)

>
By Stephen Dinan
The Washington Times – Friday, March 5, 2010

Undaunted by a rash of scandals over the science underpinning climate change, top climate researchers are plotting to respond with what one scientist involved said needs to be “an outlandishly aggressively partisan approach” to gut the credibility of skeptics.

In private e-mails obtained by The Washington Times, climate scientists at the National Academy of Sciences say they are tired of “being treated like political pawns” and need to fight back in kind. Their strategy includes forming a nonprofit group to organize researchers and use their donations to challenge critics by running a back-page ad in the New York Times.

“Most of our colleagues don’t seem to grasp that we’re not in a gentlepersons’ debate, we’re in a street fight against well-funded, merciless enemies who play by entirely different rules,” Paul R. Ehrlich, a Stanford University researcher, said in one of the e-mails.

Some scientists question the tactic and say they should focus instead on perfecting their science, but the researchers who are organizing the effort say the political battle is eroding confidence in their work.

“This was an outpouring of angry frustration on the part of normally very staid scientists who said, ‘God, can’t we have a civil dialogue here and discuss the truth without spinning everything,'” said Stephen H. Schneider, a Stanford professor and senior fellow at the Woods Institute for the Environment who was part of the e-mail discussion but wants the scientists to take a slightly different approach.

The scientists have been under siege since late last year when e-mails leaked from a British climate research institute seemed to show top researchers talking about skewing data to push predetermined outcomes. Meanwhile, the Intergovernmental Panel on Climate Change, the authoritative body on the matter, has suffered defections of members after it had to retract claims that Himalayan glaciers will melt over the next 25 years.

Last month, President Obama announced that he would create a U.S. agency to arbitrate research on climate change.

Sen. James M. Inhofe, Oklahoma Republican and a chief skeptic of global-warming claims, is considering asking the Justice Department to investigate whether climate scientists who receive taxpayer-funded grants falsified data. He lists 17 people he said have been key players in the controversy.

That news has enraged scientists. Mr. Schneider said Mr. Inhofe is showing “McCarthyesque” behavior in the mold of the Cold War-era senator who was accused of stifling political debate through accusations of communism.

In a phone interview, Mr. Schneider, who is one of the key players Mr. Inhofe cites, said he disagrees with trying to engage in an ad battle. He said the scientists will never be able to compete with energy companies.

“They’re not going to win short-term battles playing the game against big-monied interests because they can’t beat them,” he said.

He said the “social contract” between scientists and policymakers is broken and must be reforged, and he urged colleagues to try to recruit members of Congress to take up their case. He also said the press and nongovernmental organizations must be prodded.

“What I am trying to do is head off something that will be truly ugly,” he said. “I don’t want to see a repeat of McCarthyesque behavior and I’m already personally very dismayed by the horrible state of this topic, in which the political debate has almost no resemblance to the scientific debate.”

Not all climate scientists agree with forcing a political fight.

“Sounds like this group wants to step up the warfare, continue to circle the wagons, continue to appeal to their own authority, etc.,” said Judith A. Curry, a climate scientist at the Georgia Institute of Technology. “Surprising, since these strategies haven’t worked well for them at all so far.”

She said scientists should downplay their catastrophic predictions, which she said are premature, and instead shore up and defend their research. She said scientists and institutions that have been pushing for policy changes “need to push the disconnect button for now,” because it will be difficult to take action until public confidence in the science is restored.

“Hinging all of these policies on global climate change with its substantial element of uncertainty is unnecessary and is bad politics, not to mention having created a toxic environment for climate research,” she said.

Ms. Curry also said that more engagement between scientists and the public would help – something that the NAS researchers also proposed.

Paul G. Falkowski, a professor at Rutgers University who started the effort, said in the e-mails that he is seeking a $1,000 donation from as many as 50 scientists to pay for an ad to run in the New York Times. He said in one e-mail that commitments were already arriving.

The e-mail discussion began late last week and continued into this week.

Mr. Falkowski didn’t respond to an e-mail seeking comment, and an effort to reach Mr. Ehrlich was unsuccessful.

But one of those scientists forwarded The Times’ request to the National Academy of Sciences, whose e-mail system the scientists used as their forum to plan their effort.

An NAS spokesman sought to make clear that the organization itself is not involved in the effort.

“These scientists are elected members of the National Academy of Sciences, but the discussants themselves realized their efforts would require private support since the National Academy of Sciences never considered placing such an ad or creating a nonprofit group concerning these issues,” said William Kearney, chief spokesman for NAS.

The e-mails emerged months after another set of e-mails from a leading British climate research group seemed to show scientists shading data to try to bolster their claims, and are likely to feed the impression among skeptics that researchers are pursuing political goals as much as they are disseminating science.

George Woodwell, founder of the Woods Hole Research Center, said in one e-mail that researchers have been ceding too much ground. He blasted Pennsylvania State University for pursuing an academic investigation against professor Michael E. Mann, who wrote many of the e-mails leaked from the British climate research facility.

An initial investigation cleared Mr. Mann of falsifying data but referred one charge, that he “deviated from accepted practices within the academic community,” to a committee for a more complete review.

In his e-mail, Mr. Woodwell acknowledged that he is advocating taking “an outlandishly aggressively partisan approach” but said scientists have had their “classical reasonableness” turned against them.

“We are dealing with an opposition that is not going to yield to facts or appeals from people who hold themselves in high regard and think their assertions and data are obvious truths,” he wrote.

>Science communication workshop at UFRJ

>
The workshop aims to encourage the practice of science communication through discussions, with experienced science communicators, about the impact of the activity.

Fridays from noon to 1 p.m. in the auditorium of UFRJ’s Graduate Program in Morphological Sciences (Ilha do Fundão, ICB, CCS, block F). Free admission. Organized by Stevens Rehen.

PROGRAM

March 12 – Roberto Lent (ICB-UFRJ): Science communication in Brazil (1980-2010): from the Paleolithic to the Neolithic.

March 26 – Claudia Jurberg (Oncobiology Program, IBqM-UFRJ): Behind the news: science communication from the press office’s perspective.

April 16 – Milton Moraes (Fiocruz): Science, comics, and the genome game.

April 30 – Suzana Herculano Houzel (ICB-UFRJ): The neuroscientist on call.

May 7 – Franklin David Rumjanek (IBqM-UFRJ): Why communicate science?

May 14 – Reinaldo Lopes (Folha de São Paulo): How science becomes news.

May 28 – Mauro Rebelo (IBCCF-UFRJ): Bioletim and science blogs.

June 25 – Alysson Muotri (UCSD and G1 columnist): Histories and stories of science communication.

>Signs of Damage to Public Trust in Climate Findings (N. Y. Times/Dot Earth blog)

>
By ANDREW C. REVKIN
February 5, 2010, 4:27 pm

CBS News has run a report summarizing fallout from the illegal distribution of climate scientists’ email messages and files and problems with the 2007 report from the Intergovernmental Panel on Climate Change. The conclusion is that missteps and mistakes are creating broader credibility problems for climate science.

Senator James M. Inhofe was quick to add the report to the YouTube channel of the minority on the Environment and Public Works Committee.

Ralph J. Cicerone, the president of the National Academy of Sciences, has an editorial in this week’s edition of the journal Science (subscription only) noting the same issue. Over all, he wrote, “My reading of the vast scientific literature on climate change is that our understanding is undiminished by this incident; but it has raised concern about the standards of science and has damaged public trust in what scientists do.”

Dr. Cicerone, an atmospheric scientist, added that polls and input he has received from various sources indicate that “public opinion has moved toward the view that scientists often try to suppress alternative hypotheses and ideas and that scientists will withhold data and try to manipulate some aspects of peer review to prevent dissent. This view reflects the fragile nature of trust between science and society, demonstrating that the perceived misbehavior of even a few scientists can diminish the credibility of science as a whole.” (A BBC report on its latest survey on climate views supports Dr. Cicerone’s impression.)

What should scientists do? Dr. Cicerone acknowledged both the importance of improving transparency and the challenges in doing so:

“It is essential that the scientific community work urgently to make standards for analyzing, reporting, providing access to, and stewardship of research data operational, while also establishing when requests for data amount to harassment or are otherwise unreasonable. A major challenge is that acceptable and optimal standards will vary among scientific disciplines because of proprietary, privacy, national security and cost limitations. Failure to make research data and related information accessible not only impedes science, it also breeds conflicts.”

As recently as last week, senior members of the intergovernmental climate panel had told me that some colleagues did not see the need for changes in practices and were convinced that the recent flareup over errors in the 2007 report was a fleeting inconvenience. I wonder if they still feel that way.

UPDATE: Here’s some additional reading on the I.P.C.C.’s travails and possible next steps for the climate panel:

IPCC Flooded by Criticism, by Quirin Schiermeier in Nature News.

Anatomy of I.P.C.C.’s Mistake on Himalayan Glaciers and Year 2035, by Bidisha Banerjee and George Collins in the Yale Forum on Climate Change and the Media.

* * *

After Emergence of Climate Files, an Uncertain Forecast

By ANDREW C. REVKIN
December 1, 2009, 10:56 am

Roger A. Pielke Jr. is a political scientist at the University of Colorado who has long focused on climate and disasters and the interface of climate science and policy. He has been among those seeking some clarity on temperature data compiled by the Climatic Research Unit of the University of East Anglia, which is now at the center of a storm over thousands of e-mail messages and documents either liberated or stolen from its servers (depending on who is describing the episode). [UPDATED 11:45 a.m. with a couple more useful voices “below the fold.”]

On Monday, I asked him, in essence, if the shape of the 20th-century temperature curve were to shift much as a result of some of the issues that have come up in the disclosed e-mail messages and files, would that erode confidence in the keystone climate question (the high confidence expressed by the Intergovernmental Panel on Climate Change in 2007 that most warming since 1950 is driven by human activities)?

This is Dr. Pielke’s answer (I added boldface to the take-home points):

Here is my take, in a logical ordering, from the perspective of an informed observer:

The circumstances:

1. There are many adjustments made to the raw data to account for biases and other factors.
2. Some part of the overall warming trend is as a result of these adjustments.
3. There are legitimately different ways to do the adjusting. Consider that in the e-mails, [Phil] Jones writes that he thinks [James] Hansen’s approach to urban effects is no good. There are also debates over how to handle ocean temperatures from buckets versus intake valves on ships and so on. And some of the procedures for adjusting are currently contested in the scientific literature.
4. Presumably, once the data is readily available, how these legitimate scientific choices are made about the adjusting would be open to scrutiny and debate.
5. People will then be much more able to cherry-pick adjustment procedures to maximize or minimize the historical trends, but also to clearly see how others make decisions about adjustments.
6. Mostly this matters for pre-1979, as the R.S.S. and U.A.H. satellite records provide some degree of independent checking.

Now the implications:

A. If it turns out that the choices made by CRU, GISS, NOAA fall on the “maximize historical trends” end of the scale, that will not help their perceived credibility for obvious reasons. On the other hand, if their choices lead to the middle of the range or even low end, then this will enhance their credibility.
B. The surface temps matter because they are a key basis for estimates of climate sensitivity in the models used to make projections. So people will fight over small differences, even if everyone accepts a significant warming trend.
C. When there are legitimate debates over procedures in science (i.e., competing certainties from different scientists), then this will help the rest of us to understand that there are irreducible uncertainties across climate science.
D. In the end, I would hypothesize that the result of the freeing of data and code will necessarily lead to a more robust understanding of scientific uncertainties, which may have the perverse effect of making the future less clear, i.e., because it will result in larger error bars around observed temperature trends which will carry through into the projections.
E. This would have the greatest implications for those who have staked a position on knowing the climate future with certainty — so on both sides, those arguing doom and those arguing, “Don’t worry be happy.”

So, in the end, Dr. Pielke appears to say, closer scrutiny of the surface-temperature data could undermine definitive statements of all kinds — that human-driven warming is an unfolding catastrophe or something concocted. More uncertainty wouldn’t produce a climate comfort zone, given that poorly understood phenomena can sometimes cause big problems. But it would surely make humanity’s energy and climate choices that much tougher.
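
Pielke’s points about adjustment procedures and error bars lend themselves to a toy illustration. The sketch below is entirely synthetic: invented station data and invented adjustment offsets, drawn from no actual CRU, GISS, or NOAA procedure. It simply shows the mechanism he describes, in which two defensible-looking adjustment schemes applied to the same raw series yield different trend estimates, and treating the choice of scheme as an extra source of uncertainty widens the error bar on the trend.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "raw" annual temperature anomalies, 1950-2009 (hypothetical data).
years = np.arange(1950, 2010)
true_trend = 0.012  # degrees C per year, assumed purely for illustration
raw = true_trend * (years - years[0]) + rng.normal(0, 0.15, years.size)

# Two hypothetical adjustment schemes for a station change around 1980,
# standing in for the legitimately different choices Pielke describes.
adjust_a = np.where(years >= 1980, raw - 0.05, raw)  # small downward step
adjust_b = np.where(years >= 1980, raw + 0.05, raw)  # small upward step

def trend_per_decade(series):
    """Least-squares linear trend, in degrees C per decade."""
    slope = np.polyfit(years, series, 1)[0]
    return 10 * slope

for name, series in (("scheme A", adjust_a), ("scheme B", adjust_b)):
    print(f"{name}: {trend_per_decade(series):+.3f} C/decade")

# Treating the choice of scheme as structural uncertainty: the spread
# between schemes adds to the purely statistical error bar on the trend,
# which is Pielke's point D in miniature.
trends = [trend_per_decade(s) for s in (adjust_a, adjust_b)]
print(f"spread from adjustment choice alone: {max(trends) - min(trends):.3f} C/decade")
```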

[UPDATE, 11:45 a.m.] Andrew Freedman at the Capital Weather Gang blog has interviewed Gerald North, the climate scientist who headed the National Academies panel that examined the tree-ring data and “hockey stick” graphs. Some excerpts:

On whether the emails and files undermine Dr. North’s confidence in human-driven climate change:

This hypothesis (Anthropogenic GW) fits in the climate science paradigm that 1) Data can be collected and assembled in ways that are sensible. 2) These data can be used to test and or recalibrate climate simulation models. 3) These same models can be used to predict future and past climates. It is understood that this is a complicated goal to reach with any precision. The models are not yet perfect, but there is no reason to think the approach is wrong.

On Stephen McIntyre of Climateaudit.org:

I do think he has had an overall positive effect. He has made us re-examine the basis for our assertions. In my opinion this sorts itself out in the due course of the scientific process, but perhaps he has made a community of science not used to scrutiny take a second look from time to time. But I am not sure he has ever uncovered anything that has turned out to be significant.

Also, please note below that Michael Schlesinger at the University of Illinois sent in a response to sharp criticisms of his Dot Earth contribution from Roger Pielke, Sr., at the University of Colorado, Boulder. (Apologies for Colorado State affiliation earlier; he’s moved.)

>Idealization and abstraction

>
Special reports

Agência FAPESP, December 22, 2009
By Fabio Reynol

Expressing the false and suppressing the true is, in most cases, indispensable to doing good cognitive science. This provocative claim comes from John Woods, a professor at the University of British Columbia, in Canada.

The philosopher took part last week in the seminar “Model-Based Reasoning in Science and Technology” at the State University of Campinas (Unicamp). The event was held under the Thematic Project Logical Consequence and Combinations of Logics – Fundaments and Efficient Applications, supported by FAPESP and coordinated by Walter Carnielli, a professor at Unicamp’s Institute of Philosophy and Human Sciences.

Woods was referring to two devices used in model-based reasoning in science: idealization and abstraction. According to him, both are distortions of reality that end up producing good results for scientific investigation. “It is these distortions that make them interesting,” he said.

While idealization over-represents a phenomenon, expressing aspects known to be false, abstraction under-represents it, eliminating some variables in favor of others in an attempt to simplify the problem. A classic example of idealization is the physics problem in which surface friction is ignored (see the sketch below).
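
The friction example invites a small worked illustration (a sketch of our own, not something presented at the seminar): for a block sliding down an incline, the idealized frictionless model asserts something false, yet for small friction coefficients its prediction stays close to that of the fuller model, which is precisely the sense in which a distortion can still yield knowledge.

```python
import math

# Toy illustration of idealization: the frictionless incline is literally
# false, yet useful. All values below are arbitrary choices for the example.
g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(30)   # incline angle
mu = 0.05                  # coefficient of kinetic friction (assumed small)

a_ideal = g * math.sin(theta)                          # idealized: friction ignored
a_real = g * (math.sin(theta) - mu * math.cos(theta))  # friction included

print(f"idealized acceleration:  {a_ideal:.3f} m/s^2")
print(f"with friction included:  {a_real:.3f} m/s^2")
print(f"relative error of the idealization: {(a_ideal - a_real) / a_real:.1%}")
```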

Woods’s ideas offer a counterpoint to the instrumentalist image of science as a mere tool for recording reality through precise and faithful measurements. For him, successful scientific models contain distortions. “Distortion is not incompatible with the acquisition of knowledge,” he stressed.

The basis lies in the modeling system itself, which will never be identical to the phenomenon it represents. “If what an object resembles will never be the object itself, then saying what that object is amounts to stating what it is not,” Woods said.

The philosopher has built a representative model to explain how such models manage to succeed. In his account, the knowledge resulting from these modeled processes is obtained not by instruments but by cognition, and it even relies on a counterintuitive technique: working with unreal, idealized, or outright false assumptions.

“Getting things wrong is a way of getting them right. It really is intriguing, and it works,” he said.

Besides Carnielli, the seminar was coordinated by Professor Lorenzo Magnani of the University of Pavia and Professor Claudio Pizzi of the University of Siena. The two Italian institutions organized the event jointly with Unicamp.

>Psychological aspects of the public communication of climate change

>

The Psychology of Climate Change Communication: A Guide for Scientists, Journalists, Educators, Political Aides, and the Interested Public
Download the guide and learn more at: cred.columbia.edu/guide/home.html
Press release:
HOW DOES THE MIND GRASP CLIMATE CHANGE? A research-based guide tries to narrow the gap between information and action

NEW YORK, Nov. 4, 2009 — A recent poll shows that the number of Americans who accept that human activity is changing Earth’s climate is declining—down from 47 percent to 36 percent—even though the scientific data is overwhelming, and continues to build rapidly. A concise new publication delves into what goes on in the human mind that causes this disconnect, and what communicators of climate science can do about it.

The new 43-page guide, The Psychology of Climate Change Communication, released today by Columbia University’s Center for Research on Environmental Decisions, looks at how people process information and decide to take action, or not. Using research into the reactions of groups as disparate as African farmers and conservative U.S. voters, it offers insights on how scientists, educators, journalists and others can effectively connect with the wider world.

For the nonscientist, climate can seem alternately confusing, overwhelming and politically loaded, say lead authors Debika Shome and Sabine Marx. The guide shows how evolving scientific knowledge can be conveyed without running into predictable roadblocks. Using eight basic principles, it identifies tactics that scientists and others can use to increase the chances that people will understand what they are saying and, when appropriate, take action. These include framing complex issues in ways that people can relate to personally. (New Yorkers may respond more to the idea that sea-level rise threatens to flood their subways than to the idea that it also threatens much of Bangladesh.) They say scientists and journalists also need to do a better job of sorting the larger picture from smaller uncertainties—for instance, concentrating on the strong consensus that sea levels will rise in the 21st century, versus confusing readers with disagreements over exactly how much levels will rise.

Scientists generally acknowledge that nothing can be known with absolute certainty; their trade involves reducing the amount of uncertainty. But, as with the numbers they give out, the words they habitually use can be misinterpreted by the public to mean they do not really know what they are talking about. For instance, a recent report from the Intergovernmental Panel on Climate Change states that global temperature increases that have taken place in the last 50 years have been “very likely due to the observed increase in anthropogenic GHG concentrations.” Panel scientists have agreed that “very likely” means 90 percent certain or more, but when researchers asked ordinary people to assign a percentage to that specific phrase, most came up with a much lower number. The guide also attacks fancy words like anthropogenic (translation: manmade) and acronyms such as GHG (shorthand for greenhouse gases). These may alienate even educated people, say the authors. Even many graphs that in the minds of scientists show alarming trends elicit only yawns or incomprehension from almost everyone else.
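
The mismatch the guide describes between calibrated language and lay readings is easy to tabulate. The snippet below pairs IPCC likelihood terms with their documented lower probability bounds; the “lay reading” column is filled with placeholder values of our own for illustration, not figures from any survey.

```python
# IPCC calibrated likelihood language (lower probability bounds, per the
# AR4 uncertainty guidance) versus illustrative lay interpretations.
ipcc_scale = {
    "virtually certain": 0.99,
    "extremely likely": 0.95,
    "very likely": 0.90,
    "likely": 0.66,
    "more likely than not": 0.50,
}

# Hypothetical median lay readings, placeholders for illustration only.
lay_reading = {
    "virtually certain": 0.90,
    "extremely likely": 0.80,
    "very likely": 0.65,
    "likely": 0.55,
    "more likely than not": 0.50,
}

for term, p_ipcc in ipcc_scale.items():
    gap = p_ipcc - lay_reading[term]
    print(f"{term:>20}: IPCC >= {p_ipcc:.0%}, lay ~ {lay_reading[term]:.0%} (gap {gap:+.0%})")
```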

One chart in the guide lists words with columns showing their meaning as perceived by scientists, and by nonscientists. To scientists, a “theory” is the “physical understanding of how [something] works.” Hence, the theory of evolution, the theory that the earth formed over billions of years—and now, the theory of manmade climate change. But to the public, a theory may be just “a hunch, conjecture or speculation.” (Politicians long ago learned the lesson that language is important: one recent study by the authors and their colleagues found that conservative Americans consider “carbon offsets” more acceptable than a “carbon tax”—even though it might be argued the two are essentially the same. Climate legislation now before Congress has excluded anything labeled a “tax.”)

The public has its own chronic problems. For one thing, there is a phenomenon that social scientists call the “finite pool of worry”; people can deal with only so much bad news at a time before they tune out. For another, when individuals respond to threats like climate change, they are likely to alleviate their worries by taking only one action, even if it is in their interest to take more than one—an effect called the “single-action bias.” For Americans, recycling often serves as a catchall “green” measure, and people will neglect to take others, such as switching to more efficient light sources. One study showed that farmers in Argentina who had the capacity to store grain were less likely to use irrigation or crop insurance, even though the added measures would have made their operations more resilient to changing weather.

“Gaining public support for climate change policies and encouraging environmentally responsible behavior depends on a clear understanding of how people process information and make decisions,” say Shome and Marx. “Social science provides an essential part of the puzzle.”

Free printed copies and an interactive online version of the guide are available on CRED’s website. The project received funding from the Charles Evans Hughes Memorial Foundation and the U.S. National Science Foundation.