Tag archive: Política

Weathering Fights – Science: What’s It Up To? (The Daily Show with Jon Stewart)

http://media.mtvnservices.com/mgid:cms:video:thedailyshow.com:400760

Science claims it’s working to cure disease, save the planet and solve the greatest human mysteries, but Aasif Mandvi finds out what it’s really up to. (05:47) – Comedy Central

Group Urges Research Into Aggressive Efforts to Fight Climate Change (N.Y. Times)

By CORNELIA DEAN, Published: October 4, 2011

With political action on curbing greenhouse gases stalled, a bipartisan panel of scientists, former government officials and national security experts is recommending that the government begin researching a radical fix: directly manipulating the Earth’s climate to lower the temperature.

Members said they hoped that such extreme engineering techniques, which include scattering particles in the air to mimic the cooling effect of volcanoes or stationing orbiting mirrors in space to reflect sunlight, would never be needed. But in its report, to be released on Tuesday, the panel said it is time to begin researching and testing such ideas in case “the climate system reaches a ‘tipping point’ and swift remedial action is required.”

The 18-member panel was convened by the Bipartisan Policy Center, a research organization based in Washington founded by four senators — Democrats and Republicans — to offer policy advice to the government. In interviews, some of the panel members said they hoped that the mere discussion of such drastic steps would jolt the public and policy makers into meaningful action in reducing greenhouse gas emissions, which they called the highest priority.

The idea of engineering the planet is “fundamentally shocking,” David Keith, an energy expert at Harvard and the University of Calgary and a member of the panel, said. “It should be shocking.”

In fact, it is an idea that many environmental groups have rejected as misguided and potentially dangerous.

Jane Long, an associate director of the Lawrence Livermore National Laboratory and the panel’s co-chairwoman, said that by spewing greenhouse gases into the atmosphere, human activity was already engaged in climate modification. “We are doing it accidentally, but the Earth doesn’t know that,” she said, adding, “Going forward in ignorance is not an option.”

The panel, the Task Force on Climate Remediation Research, suggests that the White House Office of Science and Technology Policy begin coordinating research and estimates that a valuable effort could begin with a few million dollars in financing over the next few years.

One reason that the United States should embrace such research, the report suggests, is the threat of unilateral action by another country. Members say research is already under way in Britain, Germany and possibly other countries, as well as in the private sector.

“A conversation about this is going to go on with us or without us,” said David Goldston, a panel member who directs government affairs at the Natural Resources Defense Council and is a former chief of staff of the House Committee on Science. “We have to understand what is at stake.”

In interviews, panelists said again and again that the continuing focus of policy makers and experts should be on reducing emissions of carbon dioxide and other greenhouse gases. But several acknowledged that significant action remained a political nonstarter. Last month, for example, the Obama administration told the federal Environmental Protection Agency to hold off on tightening ozone standards, citing complications related to the weak economy.

According to the United Nations Intergovernmental Panel on Climate Change, greenhouse gas emissions have contributed to raising the global average surface temperature by about 1.3 degrees Fahrenheit in the past 100 years. It is impossible to predict how much impact the report will have. But given the panelists’ varied political and professional backgrounds, they seem likely to achieve one major goal: starting a broader conversation on the issue. Some climate experts have been working on it for years, but they have largely kept their discussions to themselves, saying they feared giving the impression that there might be quick fixes for climate change.

“Climate adaptation went through the same period of concern,” Mr. Goldston said, referring to the onetime reluctance of some researchers to discuss ways in which people, plants and animals might adjust to climate change. Now, he said, similar reluctance to discuss geoengineering is giving way, at least in part because “it’s possible we may have to do this no matter what.”

Although the techniques, which fall into two broad groups, are more widely known as geoengineering, the panel prefers “climate remediation.”

The first is carbon dioxide removal, in which the gas is absorbed by plants, trapped and stored underground or otherwise removed from the atmosphere. The methods are “generally uncontroversial and don’t introduce new global risks,” said Ken Caldeira, a climate expert at Stanford University and a panel member. “It’s mostly a question of how much do these things cost.”

Controversy arises more with the second group of techniques, solar radiation management, which involves increasing the amount of solar energy that bounces back into space before it can be absorbed by the Earth. They include seeding the atmosphere with reflective particles, launching giant mirrors above the earth or spewing ocean water into the air to form clouds.

These techniques are thought to pose a risk of upsetting the Earth’s natural rhythms. With them, Dr. Caldeira said, “the real question is what are the unknown unknowns: Are you creating more risk than you are alleviating?”

At the influential blog Climate Progress, Joe Romm, a fellow at the Center for American Progress, has made a similar point, likening geoengineering to a dangerous course of chemotherapy and radiation to treat a condition curable through diet and exercise — or, in this case, emissions reduction.

The panel rejected any immediate application of climate remediation techniques, saying too little is known about them. In 2009, the Royal Society in Britain said much the same, assessing geoengineering technologies as “technically feasible” but adding that their potential costs, effectiveness and risks were unknown.

Similarly, in a 2010 review of federal research that might be relevant to climate remediation, the federal Government Accountability Office noted that “major uncertainties remain on the efficacy and potential consequences” of the approach. Its report also recommended that the White House Office of Science and Technology Policy “establish a clear strategy for geoengineering research.”

John P. Holdren, who heads that office, declined interview requests. He issued a statement reiterating the Obama administration’s focus on “taking steps to sensibly reduce pollution that is contributing to climate change.”

Yet in an interview with The Associated Press in 2009, Dr. Holdren said the possible risks and benefits of geoengineering should be studied very carefully because “we might get desperate enough to want to use it.”

In a draft plan made public on Friday, the U.S. Global Change Research Program, a coordinating effort administered by his office, outlined its own climate change research agenda, including studies of the impacts of rapid climate change.

The plan said that climate-related projections would be crucial to future studies of the “feasibility, effectiveness and unintended consequences of strategies for deliberate, large-scale manipulations of Earth’s environment,” including carbon dioxide removal and solar radiation management.

Many countries fault the United States for government inaction on climate change, especially given its longtime role as a chief contributor to the problem.

Frank Loy, a panelist and former chief climate negotiator for the United States, suggested that people around the world would see past those issues if the United States embraced geoengineering studies, provided that it was “very clear about what kind of research is undertaken and what the safeguards are.”

This article has been revised to reflect the following correction:

Correction: October 4, 2011

An earlier version of this article mistakenly referred to Frank Loy as the nation’s chief climate negotiator; he is a former chief climate negotiator. It also misstated the name of a federal agency that reported on the potential effectiveness of climate remediation. It is the Government Accountability Office, not the General Accountability Office.

Indigenous protesters occupy Belo Monte construction site and block the Transamazônica highway (FSP)

27/10/2011 – 14h21

AGUIRRE TALENTO
FROM BELÉM

The construction site of the Belo Monte hydroelectric dam, in the municipality of Vitória do Xingu (western Pará, 945 km from Belém), was occupied on Thursday morning (27) in a protest by indigenous people, fishermen and residents of the region.

The protesters also blocked the Transamazônica highway at kilometer 52, where the entrance to the dam’s construction site is located.

The protest, which began at 5 a.m., was organized during a seminar held this week in Altamira (also in western Pará, 900 km from Belém) to discuss the impacts of building hydroelectric plants in the region.

Security guards let the demonstrators in without offering resistance, and the company’s employees did not show up for work. As a result, construction has come to a halt.

“We believe the company learned of our demonstration and did not want a confrontation,” said Eden Magalhães, executive secretary of Cimi (the Indigenist Missionary Council), one of the organizations taking part in the protest.

The Federal Highway Police confirmed the protest but could not yet estimate the size of the crowd. According to Magalhães, about 600 people are at the site, including indigenous people, fishermen, riverside residents and even students.

The demonstrators are demanding that a representative of the federal government come to the site and are calling for construction to be halted.

Yesterday, the Federal Court ruling on the licensing of the Belo Monte dam was postponed once again. The case is tied, with one vote in favor of building the dam and one against. The tie-breaking vote is still pending, and there is no estimate of when the case will return to the docket.


The new Maracanã is already born old (O Globo)

André Trigueiro

O Globo, 27/10/11

The design for the new Maracanã confirms the exclusion of an item absolutely essential for any engineering project of its kind to be called “modern and sustainable.” Despite the wide variety of football stadiums around the world that harness solar energy, the extremely expensive reconstruction of the Maracanã – nearly 1 billion reais – ignored that possibility.

It is strange that this happened in a country where the sun shines an average of 280 days a year. Stranger still that it happened in the city that hosted Rio-92 and will host Rio+20, and that lies in the same band of solar exposure as Sydney, Australia, which became known for staging the first Green Games in history, powered entirely by solar energy.

I covered the Sydney Games in 2000 as a journalist and remember the immense structures of photovoltaic panels that captured solar energy to light the competitions in the Olympic stadium, the Superdome and every sports venue. The Olympic Village, with 665 houses, became the largest solar-powered neighborhood on the planet. The International Olympic Committee’s spokesman, the Australian Michael Bland, justified the investment in solar energy this way: “We want solar energy to become popular in every country. It is ridiculous that in Australia not every house uses a solar collector. We have the roofs, we have the sun, and we waste them. It is a stupid way to live.”

What stupidity of ours to waste the immense area of the new Maracanã’s canopies – nearly 29,000 square meters – which could hold a striking array of photovoltaic panels capable of generating electricity for up to 3,000 households. The cost ranges from ten to twenty million reais, depending on the technology employed. Someone might say: “It’s too expensive! It’s not worth it.” But is the usual way of buying energy worth it?

We live in a country where, according to the IBGE, electricity rates have risen more than twice the official inflation rate over the past 15 years. The solar option – though more expensive upfront – offers the advantage of paying back the investment within a few years.
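The payback claim above can be sanity-checked with rough numbers. The sketch below is a back-of-the-envelope estimate, not from the article: the 15-million-real cost (midpoint of the stated 10-20 million range) and the 3,000-household figure come from the column, while the per-household consumption and tariff are illustrative assumptions.

```python
# Rough payback estimate for a hypothetical Maracanã solar canopy.
# Only the capex range and the 3,000-household figure come from the
# article; consumption and tariff below are assumed for illustration.

def payback_years(capex_reais, households, kwh_per_household_month, tariff_reais_per_kwh):
    """Years to recover the investment from avoided electricity purchases."""
    annual_kwh = households * kwh_per_household_month * 12
    annual_savings = annual_kwh * tariff_reais_per_kwh
    return capex_reais / annual_savings

# Assumed: 160 kWh/month per household and a tariff of R$0.60/kWh.
years = payback_years(15_000_000, 3_000, 160, 0.60)
print(f"{years:.1f} years")  # prints "4.3 years"
```

Under these assumptions the investment pays for itself in roughly four years, which is at least consistent with the column’s claim of amortization “within a few years”; a higher tariff or cheaper installation shortens the horizon further.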

Someone might say that the new canopy – lighter than the old one – might not bear the weight of traditional photovoltaic panels. Then a compatible structure should have been designed. What is at stake is the possibility of making the stadium useful even on days when no football is played. The Maracanã could be a power plant – modest in output, to be sure – that, beyond the direct benefit of generating electricity, would also spur further research and investment in solar energy in Brazil.

And who says a project like this could only be financed with public money? If there were political will to promote technological innovation in the energy sector using the new Maracanã as a poster child, it would be perfectly possible to sound out major companies with solar-energy know-how about installing the photovoltaic equipment at zero cost, with no burden on the government. And what would such a company get in return? The right to promote the image of the Maracanã as a “solar stadium” made possible by its technology.

Does anyone doubt that aerial images of the stadium, at both the 2014 World Cup and the 2016 Olympics, will reach billions of viewers around the world? That is free media and overwhelmingly positive exposure – everything a good negotiator would need no more than a few minutes to convince an investor to reach into his pocket and bankroll the idea.

With public or private money, the right thing was to do it. Installing a few solar collectors to heat the athletes’ shower water in the locker rooms is not enough. If those responsible for the Maracanã project scored an own goal by scorning the sun, the Pituaçu stadium in Salvador and the Mineirão in Belo Horizonte will have solar energy as an ally in generating electricity. Wake up, Rio! A Maracanã without solar energy is like Rio without a beach. Unfortunately, cariocas will go on using the sun only for tanning. A symbol of sustainability for its natural beauty and for hosting major UN environmental conferences, Rio de Janeiro is left with a Maracanã that falls short of what it deserves.


Questioning Privacy Protections in Research (New York Times)

Dr. John Cutler, center, during the Tuskegee syphilis experiment. Abuses in that study led to ethics rules for researchers. Coto Report

By PATRICIA COHEN
Published: October 23, 2011

Hoping to protect privacy in an age when a fingernail clipping can reveal a person’s identity, federal officials are planning to overhaul the rules that regulate research involving human subjects. But critics outside the biomedical arena warn that the proposed revisions may unintentionally create a more serious problem: sealing off vast collections of publicly available information from inspection, including census data, market research, oral histories and labor statistics.

Organizations that represent tens of thousands of scholars in the humanities and social sciences are scrambling to register their concerns before the Wednesday deadline for public comment on the proposals.

The rules were initially created in the 1970s after shocking revelations that poor African-American men infected with syphilis in Tuskegee, Ala., were left untreated by the United States Public Health Service so that doctors could study the course of the disease. Now every institution that receives money from any one of 18 federal agencies must create an ethics panel, called an institutional review board, or I.R.B.

More than 5,875 boards have to sign off on research involving human participants to ensure that subjects are fully informed, that their physical and emotional health is protected, and that their privacy is respected. Although only projects with federal financing are covered by what is known as the Common Rule, many institutions routinely subject all research with a human factor to review.

The changes in the ethical guidelines — the first comprehensive revisions in more than 30 years — were prompted by a surge of health-related research and technological advances.

Researchers in the humanities and social sciences are pleased that the reforms would address repeated complaints that medically oriented regulations have choked off research in their fields with irrelevant and cumbersome requirements. But they were dismayed to discover that the desire to protect individuals’ privacy in the genomics age resulted in rules that they say could also restrict access to basic data, like public-opinion polls.

Jerry Menikoff, director of the federal Office for Human Research Protections, which oversees the Common Rule, cautions that any alarm is premature, saying that federal officials do not intend to pose tougher restrictions on information that is already public. “If the technical rules end up doing that, we’ll try to come up with a result that’s appropriate,” he said.

Critics welcomed the assurance but remained skeptical. Zachary Schrag, a historian at George Mason University who wrote a book about the review process, said, “For decades, scholars in the social sciences and humanities have suffered because of rules that were well intended but poorly considered and drafted and whose unintended consequences restricted research.”

The American Historical Association, with 15,000 members, and the Oral History Association, with 900 members, warn that under the proposed revisions, for example, new revelations that Public Health Service doctors deliberately infected Guatemalan prisoners, soldiers and mental patients with syphilis in the 1940s might never have come to light. The abuses were uncovered by a historian who by chance came across notes in the archives of the University of Pittsburgh. That kind of undirected research could be forbidden under guidelines designed to prevent “data collected for one purpose” from being “used for a new purpose to which the subjects never consented,” said Linda Shopes, who helped draft the historians’ statement.

The suggested changes, she said, “really threaten access to information in a democratic society.”

Numerous organizations including the Consortium of Social Science Associations, which represents dozens of colleges, universities and research centers, expressed particular concern that the new standards might be modeled on federal privacy rules relating to health insurance and restrict use of the broadest of identifying information, like a person’s ZIP code, county or city.

The 11,000-member American Anthropological Association declared in a statement that any process that is based on the health insurance act’s privacy protections “would be disastrous for social and humanities research.” The 45,000-member American Association of University Professors warned that such restrictions “threaten mayhem” and “render impossible a great deal of social-science research, ranging from ethnographic community studies to demographic analysis that relies on census tracts to traffic models based on ZIP code to political polls that report by precinct.”

Dr. Menikoff said references to the statutes governing health insurance information were meant to serve as a starting point, not a blueprint. “Nothing is ruled out,” he said, though he wondered how the review system could be severed from the issue of privacy protection, as the consortium has discussed, “if the major risk for most of these studies is that you’re going to disclose information inadvertently.” If there is confidential information on a laptop, he said, requiring a password may be a reasonable requirement.

Ms. Shopes, Mr. Schrag and other critics emphasized that despite their worries they were happy with the broader effort to fix some longstanding problems with institutional review boards that held, say, an undergraduate interviewing Grandma for an oral history project to the same guidelines as a doctor doing experimental research on cancer patients.

“The system has been sliding into chaos in recent years,” said Alice Kessler-Harris, president of the 9,000-member Organization of American Historians. “No one can even agree on what is supposed to be covered in the humanities and social sciences.”

Vague rules designed to give the thousands of review boards flexibility when dealing with nonmedical subjects have instead resulted in higgledy-piggledy enforcement and layers of red tape even when no one is at risk, she said.

For example Columbia University, where Ms. Kessler-Harris teaches, exempts oral history projects from review, while boards at the University of Illinois in Urbana-Champaign and the University of California, San Diego, have raised lengthy objections to similar interview projects proposed by undergraduate and master’s students, according to professors there.

Brown University has been sued by an associate professor of education who said the institutional review board overstepped its powers by barring her from using three years’ worth of research on how the parents of Chinese-American children made use of educational testing.

Ms. Shopes said board members at one university had suggested at one point that even using recorded interviews deposited at the Ronald Reagan Presidential Foundation and Library would have needed Reagan’s specific approval when he was alive.

Many nonmedical researchers praised the idea that scholars in fields like history, literature, journalism, languages and classics who use traditional methods of research should not have to submit to board review. They would like the office of human protections to go further and lift restrictions on research that may cause participants embarrassment or emotional distress. “Our job is to hold people accountable,” Ms. Kessler-Harris said.

Dr. Menikoff said, “We want to hear all these comments.” But he maintained that when the final language is published, critics may find themselves saying, “Wow, this is reasonable stuff.”


This article has been revised to reflect the following correction:

Correction: October 26, 2011

An article on Monday about federal officials’ plans to overhaul privacy rules that regulate research involving human subjects, and concerns raised by scholars, paraphrased incorrectly from comments by Linda Shopes, who helped draft a statement by historians about possible changes. She said that board members at a university (which she did not name) — not board members at the University of Chicago — suggested at one point that using recorded interviews deposited at the Ronald Reagan Presidential Foundation and Library would have needed Reagan’s specific approval when he was alive.

More anthropologists on Wall Street please (The Economist)

Education policy

Oct 24th 2011, 20:58 by M.S.

APPARENTLY Rick Scott, the governor of Florida, called two weeks ago for reducing funding for liberal-arts disciplines at state universities and shifting the money to science, technology, engineering and math, which he abbreviates to STEM. (Amusingly, if you Google “Rick Scott STEM” you end up getting multiple references to Mr Scott’s apparently non-operative campaign pledge to ban stem-cell research in Florida. Between the two issues, you’ve got a sort of operatic treatment of the modern Republican love-hate relationship with science.) Mr Scott seems to have repeatedly singled out the discipline of anthropology for derision. On one occasion, he apparently told a right-wing radio host: “You know, we don’t need a lot more anthropologists in the state. It’s a great degree if people want to get it, but we don’t need them here. I want to spend our dollars giving people science, technology, engineering, math degrees…so when they get out of school, they can get a job.” On another occasion, he’s quoted as telling a business group in Tallahassee: “Do you want to use your tax dollars to educate more people who can’t get jobs in anthropology? I don’t.”

Few would defend deliberately educating more people who can’t get jobs in anthropology, as such. (Of course, giving people math degrees rather than anthropology degrees will render them even less able to get jobs in anthropology.) Many, however, would defend educating more people in anthropology, regardless of what they wind up getting jobs in. In Slate on Friday, Michael Crow, president of Arizona State University, gave the traditional and entirely accurate pitch:

[R]esolving the complex challenges that confront our nation and the world requires more than expertise in science and technology. We must also educate individuals capable of meaningful civic participation, creative expression, and communicating insights across borders. The potential for graduates in any field to achieve professional success and to contribute significantly to our economy depends on an education that entails more than calculus.

Curricula expressly tailored in response to the demands of the workforce must be balanced with opportunities for students to develop their capacity for critical thinking, analytical reasoning, creativity, and leadership—all of which we learn from the full spectrum of disciplines associated with a liberal arts education. Taken together with the rigorous training provided in the STEM fields, the opportunities for exploration and learning that Gov. Scott is intent on marginalizing are those that have defined our national approach to higher education.

This is a solid response. What it lacks are rhetorical oomph and concrete examples. So here’s a concrete example with a little oomph. Some of the best analysis of the 2007-2008 financial crisis, and of the ongoing follies on Wall Street these days, has been produced by the Financial Times’ Gillian Tett. Around 2005, Ms Tett began warning that collateralised debt obligations and credit-default swaps were likely to lead to a major financial implosion. The people who devise such complex derivatives are generally trained in physics or math. Ms Tett has a PhD in anthropology. Here’s a 2008 profile of Ms Tett by the Guardian’s Laurie Barton.

Tett began looking at the subject of credit five years ago. “Everyone was looking at the City and talking about M&A [mergers and acquisitions] and equity markets, and all the traditional high-glamour, high-status parts of the City. I got into this corner of the market because I passionately believed there was a revolution happening that had been almost entirely ignored. And I got really excited about trying to actually illustrate what was happening.”

Not that anyone particularly wanted to listen. “You could see everyone’s eyes glazing over … But my team, not just me, we very much warned of the dangers. Though I don’t think we expected the full scale of the disaster that’s unfolded.”

There is something exceedingly calm and thorough about Tett. She talks with the patient enthusiasm of a Tomorrow’s World presenter—a throwback, perhaps, to her days studying social anthropology, in which she has a PhD from Cambridge. “I happen to think anthropology is a brilliant background for looking at finance,” she reasons. “Firstly, you’re trained to look at how societies or cultures operate holistically, so you look at how all the bits move together. And most people in the City don’t do that. They are so specialised, so busy, that they just look at their own little silos. And one of the reasons we got into the mess we are in is because they were all so busy looking at their own little bit that they totally failed to understand how it interacted with the rest of society.

“But the other thing is, if you come from an anthropology background, you also try and put finance in a cultural context. Bankers like to imagine that money and the profit motive is as universal as gravity. They think it’s basically a given and they think it’s completely apersonal. And it’s not. What they do in finance is all about culture and interaction.”

Another person with an anthropology degree who’s been doing terrific work in recent years in a somewhat related field is the Dutch journalist Joris Luyendijk, who produced a fantastic short book last year analysing the tribal culture of the Dutch parliament and the media circles that cover it. He’s currently working on a study of the City as well. Anyway, the general point is that while studying human behaviour through complex derivatives has its uses, there’s something to be said for the more rigorous and less egocentric analytical tools that anthropology brings into play, and it might be worth Mr Scott’s time to take a course or two. It’s never too late to learn.

The scientific finding that settles the climate-change debate (Washington Post)

By Eugene Robinson, Published: October 24

For the clueless or cynical diehards who deny global warming, it’s getting awfully cold out there.

The latest icy blast of reality comes from an eminent scientist whom the climate-change skeptics once lauded as one of their own. Richard Muller, a respected physicist at the University of California, Berkeley, used to dismiss alarmist climate research as being “polluted by political and activist frenzy.” Frustrated at what he considered shoddy science, Muller launched his own comprehensive study to set the record straight. Instead, the record set him straight.

“Global warming is real,” Muller wrote last week in The Wall Street Journal.

Rick Perry, Herman Cain, Michele Bachmann and the rest of the neo-Luddites who are turning the GOP into the anti-science party should pay attention.

“When we began our study, we felt that skeptics had raised legitimate issues, and we didn’t know what we’d find,” Muller wrote. “Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been careful in their work, despite their inability to convince some skeptics of that.”

In other words, the deniers’ claims about the alleged sloppiness or fraudulence of climate science are wrong. Muller’s team, the Berkeley Earth Surface Temperature project, rigorously explored the specific objections raised by skeptics — and found them groundless.

Muller and his fellow researchers examined an enormous data set of observed temperatures from monitoring stations around the world and concluded that the average land temperature has risen 1 degree Celsius — or about 1.8 degrees Fahrenheit — since the mid-1950s.

This agrees with the increase estimated by the United Nations-sponsored Intergovernmental Panel on Climate Change. Muller’s figures also conform with the estimates of those British and American researchers whose catty e-mails were the basis for the alleged “Climategate” scandal, which was never a scandal in the first place.

The Berkeley group’s research even confirms the infamous “hockey stick” graph — showing a sharp recent temperature rise — that Muller once snarkily called “the poster child of the global warming community.” Muller’s new graph isn’t just similar, it’s identical.

Muller found that skeptics are wrong when they claim that a “heat island” effect from urbanization is skewing average temperature readings; monitoring instruments in rural areas show rapid warming, too. He found that skeptics are wrong to base their arguments on the fact that records from some sites seem to indicate a cooling trend, since records from at least twice as many sites clearly indicate warming. And he found that skeptics are wrong to accuse climate scientists of cherry-picking the data, since the readings that are often omitted — because they are judged unreliable — show the same warming trend.

Muller and his colleagues examined five times as many temperature readings as did other researchers — a total of 1.6 billion records — and now have put that merged database online. The results have not yet been subjected to peer review, so technically they are still preliminary. But Muller’s plain-spoken admonition that “you should not be a skeptic, at least not any longer” has reduced many deniers to incoherent grumbling or stunned silence.
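As a rough illustration of what "examining temperature readings" involves, here is a toy sketch of fitting a warming trend to a merged set of station records. Everything below, from the station data to the 0.02 °C-per-year drift, is invented for illustration; BEST's actual methodology (spatial weighting, splitting of station records) is far more sophisticated.

```python
# Toy sketch: estimate a warming trend from (hypothetical) station records.
# Real analyses work with anomalies relative to a baseline and weight
# stations spatially; this just averages per year and fits a line.

def yearly_means(records):
    """records: list of (year, temperature) tuples pooled from all stations."""
    sums, counts = {}, {}
    for year, temp in records:
        sums[year] = sums.get(year, 0.0) + temp
        counts[year] = counts.get(year, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def trend_per_decade(records):
    """Ordinary least-squares slope of yearly mean temperature, in deg C per decade."""
    means = yearly_means(records)
    years = sorted(means)
    n = len(years)
    mx = sum(years) / n
    my = sum(means[y] for y in years) / n
    cov = sum((y - mx) * (means[y] - my) for y in years)
    var = sum((y - mx) ** 2 for y in years)
    return 10.0 * cov / var  # slope per year -> per decade

# Invented data: two stations both warming 0.02 deg C/year since 1955,
# roughly matching the "1 degree C since the mid-1950s" figure above.
records = [(1955 + i, 14.0 + 0.02 * i) for i in range(56)] + \
          [(1955 + i, 9.5 + 0.02 * i) for i in range(56)]
print(round(trend_per_decade(records), 2))  # prints 0.2
```

Over roughly five and a half decades, a 0.2 °C-per-decade slope adds up to about the 1 °C rise the Berkeley group reports.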

Not so, I predict, with the blowhards such as Perry, Cain and Bachmann, who, out of ignorance or perceived self-interest, are willing to play politics with the Earth’s future. They may concede that warming is taking place, but they call it a natural phenomenon and deny that human activity is the cause.

It is true that Muller made no attempt to ascertain “how much of the warming is due to humans.” Still, the Berkeley group’s work should help lead all but the dimmest policymakers to the overwhelmingly probable answer.

We know that the rise in temperatures over the past five decades is abrupt and very large. We know it is consistent with models developed by other climate researchers that posit greenhouse gas emissions — the burning of fossil fuels by humans — as the cause. And now we know, thanks to Muller, that those other scientists have been both careful and honorable in their work.

Nobody’s fudging the numbers. Nobody’s manipulating data to win research grants, as Perry claims, or making an undue fuss over a “naturally occurring” warm-up, as Bachmann alleges. Contrary to what Cain says, the science is real.

It is the know-nothing politicians — not scientists — who are committing an unforgivable fraud.

Basin Committees to Present Motion Against Forest Code Reform (Ascom da ANA)

JC e-mail 4372, October 26, 2011.

Gathered in São Luís (MA) for the 13th National Meeting of River Basin Committees, representatives of committees from all over Brazil will present on Friday (28th) a statement against the reduction of environmental protection areas.

Representatives of river basin committees from several regions of the country are preparing a motion against the reduction of the protected areas along riverbanks, in protest against the text of the Forest Code (Código Florestal) reform approved by the Chamber of Deputies in May, which permits the use of permanent preservation areas (APPs). The text is now before the Senate and is expected to reach the floor by the end of the year.

The motion will be presented on Friday (28th), the last day of the 13th National Meeting of River Basin Committees (Encob), which begins today in São Luís (MA).

Brazil currently has about 180 committees, ten of them on federal rivers, with representatives from different segments of society, spread across several basins. In all, more than 50,000 people are engaged in the defense of water resources. These committees function as water parliaments: they are made up of local water users, non-governmental organizations, civil society, and representatives of government at all three levels (municipal, state, and federal), who meet in plenary sessions.

The National Water Agency (ANA) provides technical support to the federal committees, and local management bodies support the state ones, as required by Law 9,433 of 1997, known as the Water Law, which established the National Water Resources Policy (PNRH) and created the National Water Resources Management System (Singreh). Every year, basin committee representatives meet to take stock of water resources management and the work of these local bodies, and to debate the challenges of implementing the PNRH. This year, however, the Forest Code reform dominated the opening ceremony of the 13th Encob, last night (25th) in São Luís.

"Encob is the largest national water meeting on the planet, so it brings together the views of several segments of society, from users to researchers, managers, and civil society," said ANA's director-president, Vicente Andreu. "It is essential to send a strong signal to Congress. Time is short, and we need to get a very firm position to the senators." In April, ANA released a technical note explaining why the agency defends maintaining the forest cover around rivers at the current proportion established by the Forest Code, that is, at least 30 meters. The bill proposes reducing the minimum protected strip to 15 meters. Riparian forests are essential to protect rivers and guarantee water quality.

Federal deputy Sarney Filho (PV-MA) promised to take Encob's analyses to the Rio+20 Subcommittee of the Chamber of Deputies. "We all know that our rivers are threatened by sewage discharges, by the clearing of riparian forests, and now by the Forest Code reform," he said.

For Lupércio Ziroldo Antônio, president of the Network of Basin Organizations (Rebob) and general coordinator of the National Forum of River Basin Committees, "in the eyes of the world, Brazil is considered a water power because it holds 13% of the planet's water and some of the largest aquifers in the world. It therefore needs to set an example, especially in the coming months, with two major international meetings on the environment and water resources: the World Water Forum, in Marseille, France, in March 2012; and Rio+20, in June 2012."

Several of the topics being debated at Encob this week may be taken up at Rio+20. ANA's proposals for the Rio meeting include the creation of a fund for payments for environmental services to protect springs, along the lines of ANA's Water Producer Program (Programa Produtor de Água); a global payment program for sewage treatment, based on ANA's Prodes (River Basin Clean-up Program); and the creation of a global water governance body within the United Nations.

The Encob program includes courses on water resources management for members of basin committees and local water management bodies, meetings of interstate committees, a meeting of the Brazil section of the World Water Council, a workshop on adapting water resources management to climate change, and round tables on urban springs and the role of committees in universalizing sanitation, among other discussions.


A skeptical physicist ends up confirming climate data (Washington Post)

Posted by Brad Plumer at 04:18 PM ET, 10/20/2011
Back in 2010, Richard Muller, a Berkeley physicist and self-proclaimed climate skeptic, decided to launch the Berkeley Earth Surface Temperature (BEST) project to review the temperature data that underpinned global-warming claims. Remember, this was not long after the Climategate affair had erupted, at a time when skeptics were griping that climatologists had based their claims on faulty temperature data.

Muller's stated aims were simple. He and his team would scour and re-analyze the climate data, putting all their calculations and methods online. Skeptics cheered the effort. "I'm prepared to accept whatever result they produce, even if it proves my premise wrong," wrote Anthony Watts, a blogger who has criticized the quality of the weather stations in the United States that provide temperature data. The Charles G. Koch Foundation even gave Muller's project $150,000; the Koch brothers, recall, are hardly fans of mainstream climate science.

So what are the end results? Muller's team appears to have confirmed the basic tenets of climate science. Back in March, Muller told the House Science and Technology Committee that, contrary to what he expected, the existing temperature data was "excellent." He went on: "We see a global warming trend that is very similar to that previously reported by the other groups." And today the BEST team released a flurry of new papers confirming that the planet is getting hotter. As the team's two-page summary flatly concludes, "Global warming is real." Here's a chart comparing their findings with existing data:

The BEST team tried to take a number of skeptic claims seriously, to see if they panned out. Take, for instance, their paper on the "urban heat island effect." Watts has long argued that many weather stations collecting temperature data could be biased by being located in cities. Since cities are naturally warmer than rural areas (because building materials retain more heat), the uptick in recorded temperatures might be exaggerated, an illusion spawned by increased urbanization. So Muller's team decided to compare overall temperature trends with those from only the weather stations based in rural areas. And, as it turns out, the trends match up well. "Urban warming does not unduly bias estimates of recent global temperature change," Muller's group concluded.
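The rural-only check described above can be mimicked in a few lines: fit a trend to the full station set and to the rural subset, and compare the slopes. The stations, the rural/urban labels, and all the numbers below are invented; this is a sketch of the comparison logic, not the BEST analysis.

```python
# Toy sketch of the rural-vs-all comparison: urban stations can run warmer
# in absolute terms, but a constant level offset does not bias the *trend*.

def ols_slope(points):
    """Least-squares slope of y against x for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    var = sum((x - mx) ** 2 for x, _ in points)
    return cov / var

# Each station: (is_rural, list of (year, temperature)) -- invented numbers.
stations = [
    (True,  [(1950 + i, 8.0 + 0.02 * i) for i in range(60)]),
    (True,  [(1950 + i, 11.0 + 0.02 * i) for i in range(60)]),
    (False, [(1950 + i, 13.5 + 0.02 * i) for i in range(60)]),  # urban: warmer level
]

all_trend = ols_slope([p for _, recs in stations for p in recs])
rural_trend = ols_slope([p for rural, recs in stations if rural for p in recs])
print(round(all_trend, 3), round(rural_trend, 3))  # prints 0.02 0.02
```

Because every station in this toy data warms at the same rate, dropping the (warmer) urban station changes the level but not the slope, which is the point the BEST comparison makes.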

That shouldn’t be so jaw-dropping. Previous analyses — like this one from the National Oceanic and Atmospheric Administration — have responded to Watts’ concerns by showing that a few flawed stations don’t warp the overall trend. But maybe Muller’s team can finally put this controversy to rest, right? Well, not yet. As Watts responds over at his site, the BEST papers still haven’t been peer-reviewed (an important caveat, to be sure). And Watts isn’t pleased with how much pre-publication hype the studies are getting. But so far, what we have is a prominent skeptic casting a critical eye at the data and finding, much to his own surprise, that the data holds up.

American study confirms warming of the Earth's surface (BBC)

Richard Black

BBC News

Weather station near an airport. The group says weather stations provide accurate data on warming.

A new analysis by a group of scientists in the United States has concluded that the Earth's surface is getting warmer.

Since 1950, the average land temperature has risen by one degree Celsius, according to the findings of the Berkeley Earth Project group.

The Berkeley Earth Project used new methods and new data, but its findings follow the same climate trend seen by NASA and by Britain's Met Office, for example.

"Our biggest surprise was that the new results agree with the warming values previously published by other teams in the United States and Britain," said Professor Richard Muller, who established the Berkeley Earth Project at the University of California, bringing together ten renowned scientists.

"This confirms that those studies were done carefully and that the potential biases identified by global warming skeptics do not seriously affect their conclusions," he added.

The group also reports that, although the extra heat near cities (the so-called urban heat island effect) is real and well established, it is not responsible for the warming recorded by most weather stations around the world.

Skepticism

The group examined the claims of "skeptic" bloggers who argue that weather station data do not show a true global warming trend.

They say that many weather stations have recorded warming because they are located near cities, and as the cities grow, so does the heat.

The group, however, identified about 40,000 weather stations around the world whose records have been digitized and stored.

The researchers then developed a new way of analyzing the data to detect the trend in global land temperatures since 1800.

The result was a graph very similar to those produced by the world's leading research groups, whose work has been criticized by the skeptics.

Two of these three records are maintained in the United States, by the National Oceanic and Atmospheric Administration (NOAA) and by NASA. The third is a collaboration between Britain's Met Office and the Climatic Research Unit at the University of East Anglia (UEA).

Professor Phil Jones of the UEA's Climatic Research Unit treated the group's work with caution, saying he would wait to read "the final report" when it is published.

"These initial findings are very encouraging and echo our results, and our conclusion that the impact of urban heat islands on the global temperature average is minimal," he said.

Traffic and smoke on a street in China (Reuters). Skeptics say proximity to cities distorts station data.

Phil Jones was one of the British scientists accused of manipulating data to exaggerate the human influence on global warming. The scientists were cleared in 2010.

The case began in 2009, with the leak of e-mails in which Jones appeared to suggest that some global warming research data be excluded from presentations planned for the UN climate change conference.

The episode gave ammunition to those skeptical of the human role in climate change. But the University of East Anglia's inquiry concluded that there was no doubt about the scientists' rigor and honesty.

Not yet published

Bob Ward, policy and communications director at the Grantham Research Institute on Climate Change and the Environment in London, said that global warming is clear.

"The so-called skeptics should drop their claim that the rise in the global average temperature can be attributed to the impact of growing cities," he said.

The Berkeley Earth Project team decided to release its data initially on its own website, rather than in a specialist journal.

The researchers are asking readers online to comment and give their opinions before the manuscripts are prepared for formal scientific publication.

Richard Muller, who created the research group, said that this free circulation of information marks a return to the way science needs to be done, rather than simply publishing in scientific journals.

Revealed – the capitalist network that runs the world (New Scientist)

19 October 2011 by Andy Coghlan and Debora MacKenzie

The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue (Image: PLoS One)

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters’ worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).

“Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”

Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.

The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.

The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.

When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
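A core in which "all of their ownership was held by other members" corresponds, in graph terms, to a strongly connected component of the ownership network: a set of companies in which every member can be reached from every other by following shareholding links. Here is a minimal sketch of that idea on an invented toy network; the actual study weights edges by shareholding stakes and operating revenue, which this ignores.

```python
# Toy sketch: represent "A holds shares in B" as a directed edge A -> B and
# look for a tightly knit core -- a strongly connected component (SCC).
# The company names and holdings below are invented for illustration.

from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's algorithm: returns a list of SCCs, each a set of nodes."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        rev[b].append(a)
        nodes.update((a, b))

    order, seen = [], set()
    for start in nodes:                      # pass 1: record DFS finish order
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()

    sccs, assigned = [], set()
    for start in reversed(order):            # pass 2: flood-fill reversed graph
        if start in assigned:
            continue
        component, stack = set(), [start]
        assigned.add(start)
        while stack:
            node = stack.pop()
            component.add(node)
            for nxt in rev[node]:
                if nxt not in assigned:
                    assigned.add(nxt)
                    stack.append(nxt)
        sccs.append(component)
    return sccs

# Invented shareholdings: three financial firms own pieces of each other,
# and also hold shares in two "real economy" firms outside the core.
holdings = [("BankA", "BankB"), ("BankB", "FundC"), ("FundC", "BankA"),
            ("BankA", "SteelCo"), ("FundC", "OilCo")]
core = max(strongly_connected_components(holdings), key=len)
print(sorted(core))  # prints ['BankA', 'BankB', 'FundC']
```

In this toy network the three financial firms form the only multi-member component, mirroring (at miniature scale) how the Zurich team's 147-firm super-entity sits inside the larger web of one-way holdings.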

John Driffill of the University of London, a macroeconomics expert, says the value of the analysis is not just to see if a small number of people controls the global economy, but rather its insights into economic stability.

Concentration of power is not good or bad in itself, says the Zurich team, but the core’s tight interconnections could be. As the world learned in 2008, such networks are unstable. “If one [company] suffers distress,” says Glattfelder, “this propagates.”

“It’s disconcerting to see how connected things really are,” agrees George Sugihara of the Scripps Institution of Oceanography in La Jolla, California, a complex systems expert who has advised Deutsche Bank.

Yaneer Bar-Yam, head of the New England Complex Systems Institute (NECSI), warns that the analysis assumes ownership equates to control, which is not always true. Most company shares are held by fund managers who may or may not control what the companies they part-own actually do. The impact of this on the system’s behaviour, he says, requires more analysis.

Crucially, by identifying the architecture of global economic power, the analysis could help make it more stable. By finding the vulnerable aspects of the system, economists can suggest measures to prevent future collapses spreading through the entire economy. Glattfelder says we may need global anti-trust rules, which now exist only at national level, to limit over-connection among TNCs. Bar-Yam says the analysis suggests one possible solution: firms should be taxed for excess interconnectivity to discourage this risk.

One thing won’t chime with some of the protesters’ claims: the super-entity is unlikely to be the intentional result of a conspiracy to rule the world. “Such structures are common in nature,” says Sugihara.

Newcomers to any network connect preferentially to highly connected members. TNCs buy shares in each other for business reasons, not for world domination. If connectedness clusters, so does wealth, says Dan Braha of NECSI: in similar models, money flows towards the most highly connected members. The Zurich study, says Sugihara, “is strong evidence that simple rules governing TNCs give rise spontaneously to highly connected groups”. Or as Braha puts it: “The Occupy Wall Street claim that 1 per cent of people have most of the wealth reflects a logical phase of the self-organising economy.”

So, the super-entity may not result from conspiracy. The real question, says the Zurich team, is whether it can exert concerted political power. Driffill feels 147 is too many to sustain collusion. Braha suspects they will compete in the market but act together on common interests. Resisting changes to the network structure may be one such common interest.

The top 50 of the 147 superconnected companies

1. Barclays plc
2. Capital Group Companies Inc
3. FMR Corporation
4. AXA
5. State Street Corporation
6. JP Morgan Chase & Co
7. Legal & General Group plc
8. Vanguard Group Inc
9. UBS AG
10. Merrill Lynch & Co Inc
11. Wellington Management Co LLP
12. Deutsche Bank AG
13. Franklin Resources Inc
14. Credit Suisse Group
15. Walton Enterprises LLC
16. Bank of New York Mellon Corp
17. Natixis
18. Goldman Sachs Group Inc
19. T Rowe Price Group Inc
20. Legg Mason Inc
21. Morgan Stanley
22. Mitsubishi UFJ Financial Group Inc
23. Northern Trust Corporation
24. Société Générale
25. Bank of America Corporation
26. Lloyds TSB Group plc
27. Invesco plc
28. Allianz SE
29. TIAA
30. Old Mutual Public Limited Company
31. Aviva plc
32. Schroders plc
33. Dodge & Cox
34. Lehman Brothers Holdings Inc*
35. Sun Life Financial Inc
36. Standard Life plc
37. CNCE
38. Nomura Holdings Inc
39. The Depository Trust Company
40. Massachusetts Mutual Life Insurance
41. ING Groep NV
42. Brandes Investment Partners LP
43. Unicredito Italiano SPA
44. Deposit Insurance Corporation of Japan
45. Vereniging Aegon
46. BNP Paribas
47. Affiliated Managers Group Inc
48. Resona Holdings Inc
49. Capital Group International Inc
50. China Petrochemical Group Company

* Lehman still existed in the 2007 dataset used

Graphic: The 1318 transnational corporations that form the core of the economy

(Data: PLoS One)  

Economics has met the enemy, and it is economics (Globe and Mail)

Adam Smith is considered the founding father of modern economics.

Ira Basen

From Saturday’s Globe and Mail
Published Saturday, Oct. 15, 2011 6:00AM EDT
Last updated Tuesday, Oct. 18, 2011 8:41AM EDT

After Thomas Sargent learned on Monday morning that he and colleague Christopher Sims had been awarded the Nobel Prize in Economics for 2011, the 68-year-old New York University professor struck an aw-shucks tone with an interviewer from the official Nobel website: “We’re just bookish types that look at numbers and try to figure out what’s going on.”

But no one who’d followed Prof. Sargent’s long, distinguished career would have been fooled by his attempt at modesty. He’d won for his part in developing one of economists’ main models of cause and effect: How can we expect people to respond to changes in prices, for example, or interest rates? According to the laureates’ theories, they’ll do whatever’s most beneficial to them, and they’ll do it every time. They don’t need governments to instruct them; they figure it out for themselves. Economists call this the “rational expectations” model. And it’s not just an abstraction: Bankers and policy-makers apply these formulae in the real world, so bad models lead to bad policy.

Which is perhaps why, by the end of that interview on Monday, Prof. Sargent was adopting a more realistic tone: “We experiment with our models,” he explained, “before we wreck the world.”

Rational-expectations theory and its corollary, the efficient-market hypothesis, have been central to mainstream economics for more than 40 years. And while they may not have “wrecked the world,” some critics argue these models have blinded economists to reality: Certain the universe was unfolding as it should, they failed both to anticipate the financial crisis of 2008 and to chart an effective path to recovery.

The economic crisis has produced a crisis in the study of economics – a growing realization that if the field is going to offer meaningful solutions, greater attention must be paid to what is happening in university lecture halls and seminar rooms.

While the protesters occupying Wall Street are not carrying signs denouncing rational-expectations and efficient-market modelling, perhaps they should be.

They wouldn’t be the first young dissenters to call economics to account. In June of 2000, a small group of elite graduate students at some of France’s most prestigious universities declared war on the economic establishment. This was an unlikely group of student radicals, whose degrees could be expected to lead them to lucrative careers in finance, business or government if they didn’t rock the boat. Instead, they protested – not about tuition or workloads, but that too much of what they studied bore no relation to what was happening outside the classroom walls.

They launched an online petition demanding greater realism in economics teaching, less reliance on mathematics “as an end in itself” and more space for approaches beyond the dominant neoclassical model, including input from other disciplines, such as psychology, history and sociology. Their conclusion was that economics had become an “autistic science,” lost in “imaginary worlds.” They called their movement Autisme-economie.

The students’ timing is notable: It was the spring of 2000, when the world was still basking in the glow of “the Great Moderation,” when for most of a decade Western economies had been enjoying a prolonged period of moderate but fairly steady growth.

Some economists were daring to think the unthinkable – that their understanding of how advanced capitalist economies worked had become so sophisticated that they might finally have succeeded in smoothing out the destructive gyrations of capitalism’s boom-and-bust cycle. (“The central problem of depression prevention has been solved,” declared another Nobel laureate, Robert Lucas of the University of Chicago, in 2003 – five years before the greatest economic collapse in more than half a century.)

The students’ petition sparked a lively debate. The French minister of education established a committee on economic education. Economics students across Europe and North America began meeting and circulating petitions of their own, even as defenders of the status quo denounced the movement as a Trotskyite conspiracy. By September, the first issue of the Post-Autistic Economic Newsletter was published in Britain.

As The Independent summarized the students’ message: “If there is a daily prayer for the global economy, it should be, ‘Deliver us from abstraction.’”

It seems that entreaty went unheard through most of the discipline before the economic crisis, not to mention in the offices of hedge funds and the Stockholm Nobel selection committee. But is it ringing louder now? And how did economics become so abstract in the first place?

The great classical economists of the late 18th and early 19th centuries had no problem connecting to the real world – the Industrial Revolution had unleashed profound social and economic changes, and they were trying to make sense of what they were seeing. Yet Adam Smith, who is considered the founding father of modern economics, would have had trouble understanding the meaning of the word “economist.”

What is today known as economics arose out of two larger intellectual traditions that have since been largely abandoned. One is political economy, which is based on the simple idea that economic outcomes are often determined largely by political factors (as well as vice versa). But when political-economy courses first started appearing in Canadian universities in the 1870s, it was still viewed as a small offshoot of a far more important topic: moral philosophy.

In The Wealth of Nations (1776), Adam Smith famously argued that the pursuit of enlightened self-interest by individuals and companies could benefit society as a whole. His notion of the market’s “invisible hand” laid the groundwork for much of modern neoclassical and neo-liberal, laissez-faire economics. But unlike today’s free marketers, Smith didn’t believe that the morality of the market was appropriate for society at large. Honesty, discipline, thrift and co-operation, not consumption and unbridled self-interest, were the keys to happiness and social cohesion. Smith’s vision was a capitalist economy in a society governed by non-capitalist morality.

But by the end of the 19th century, the new field of economics no longer concerned itself with moral philosophy, and less and less with political economy. What was coming to dominate was a conviction that markets could be trusted to produce the most efficient allocation of scarce resources, that individuals would always seek to maximize their utility in an economically rational way, and that all of this would ultimately lead to some kind of overall equilibrium of prices, wages, supply and demand.

Political economy was less vital because government intervention disrupted the path to equilibrium and should therefore be avoided except in exceptional circumstances. And as for morality, economics would concern itself with the behaviour of rational, self-interested, utility-maximizing Homo economicus. What he did outside the confines of the marketplace would be someone else’s field of study.

As those notions took hold, a new idea emerged that would have surprised and probably horrified Adam Smith – that economics, divorced from the study of morality and politics, could be considered a science. By the beginning of the 20th century, economists were looking for theorems and models that could help to explain the universe. One historian described them as suffering from “physics envy.” Although they were dealing with the behaviour of humans, not atoms and particles, they came to believe they could accurately predict the trajectory of human decision-making in the marketplace.

In their desire to have their field be recognized as a science, economists increasingly decided to speak the language of science. From Smith’s innovations through John Maynard Keynes’s work in the 1930s, economics was argued in words. Now, it would go by the numbers.

The turning point came in 1947, when Paul Samuelson’s classic book Foundations of Economic Analysis for the first time presented economics as a branch of applied mathematics. Without “the invigorating kiss of mathematical method,” Samuelson maintained, economists had been practising “mental gymnastics of a particularly depraved type,” like “highly trained athletes who never run a race.” After Samuelson, no economist could ever afford to make that mistake.

And that may have been the greatest mistake of all: In a post-crisis, 2009 essay in The New York Times Magazine, Princeton economist and Nobel laureate Paul Krugman wrote, “The central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that gave economists a chance to show off their mathematical prowess.”

Of course, nothing says science like a Nobel Prize. Prizes in chemistry, physics and medicine were first awarded in 1901, long before anyone would have thought that economics could or should be included. But by the late 1960s, the central bank of Sweden was determined to change that, and when the Nobel family objected, the bank agreed to put up the money itself, making it the only one of the prizes to be funded by taxpayers.

Officially, then, it is known as the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel – but that title is rarely used. On Monday morning, Prof. Sargent and Princeton University Prof. Sims were widely reported to have won the Nobel Prize in Economics.

The confusion is understandable, and deliberate, according to Philip Mirowski, an economic historian at the University of Notre Dame. “It’s part of the PR trick,” Prof. Mirowski argues. Awarding the economics prize immediately after the prizes for physics, chemistry and medicine helps to place economics on the same level as those other natural sciences.

The prize also has helped to transform one particular ideology into economic orthodoxy. Prof. Mirowski, who is co-writing a book on the history of the economics prize, notes that throughout the 1970s and 1980s, economists whose work supported neoclassical, pro-market, laissez-faire ideas won a disproportionate number of those honours, as well as support from the increasing numbers of well-funded think tanks and foundations that cleaved to the same lines. People who rejected those ideas, or were skeptical of the natural sciences model, were quickly marginalized, and their road to academic advancement often blocked.

The result was a homogenization of economic thought that Prof. Mirowski believes “has been pretty deleterious for economics on the whole.”

The road to hell is paved with good intentions, rational expectations and efficient markets

Many critics of neoclassical economics argue that it has a powerful pro-market bias that has provided an intellectual justification for politicians ideologically disposed to reduce government involvement in the economy.

The rational-expectations model, for example, assumes that consumers and producers all inform themselves with all available data, understand how the world around them operates and will therefore respond to the same stimulus in essentially the same way. That allows economists to mathematically forecast how these “representative” consumers and producers would behave.

During a recession, say, a well-meaning government might want to enhance benefits for the unemployed. Prof. Sargent, for one, would caution against that, because a “rational” unemployed worker might then calculate that it’s better to reject a lower-paying job. He’s blamed much of the chronically high unemployment in some European countries on the presence of an army of voluntarily unemployed workers, and spoken out against the Obama administration’s recent efforts to extend unemployment benefits.

Indeed, under the rational-expectations model, most market interventions by governments and central banks wind up looking counterproductive.

Meanwhile, the efficient-markets hypothesis, developed by University of Chicago economist Eugene Fama in the 1970s, has dominated thinking about financial markets. It posits that the prices of stocks and other financial assets are always “efficient” because they accurately reflect all the available information about economic fundamentals.

By this reasoning, there can be no speculative price bubbles or busts in the stock or housing markets, and speculators with evil intentions cannot successfully manipulate markets. Conveniently, since markets are self-stabilizing, there’s no need for government regulation of them.

Critics point out that both these theories tend to ignore what John Maynard Keynes called the “animal spirits” – playing down human irrationality, inefficiency, venality and ignorance. Those are qualities that are hard to plug into a mathematical equation that purports to model human behaviour.

These models also have failed to take into account the profound changes wrought by globalization, and the growing importance of banks, hedge funds and other financial institutions. Yet they have successfully provided a “scientific” cover for an anti-regulatory political agenda that is popular on Wall Street and in some Washington political circles.

Inside jobs: Pay no attention to that banker behind the curtain

The Great Depression of the 1930s led many economists of the day to question some of their discipline’s most fundamental assumptions and produced a decades-long heyday for Keynesian economics. So far, the Great Recession has led to less of a fundamental shift.

Notre Dame’s Prof. Mirowski believes that more rethinking is necessary. “Everyone thought the banks would have to change their behaviour, but they got bailed out and nothing changed. The economics profession has also been bailed out because it is so highly interlinked with the financial profession, so of course they don’t change. Why would they change?”

Indeed, economics may be the dismal science, but there is nothing dismal about the payoffs for those at the top of the heap serving as advisers and consultants and sitting on various boards. Unlike some disciplines, economics has no guidelines governing conflict of interest and disclosure.

In 2010, the Academy Award-winning documentary Inside Job exposed several disturbing examples of academic economists calling for deregulation while working for financial-services companies. And in a study of 19 prominent financial economists, published last year by the Political Economy Research Institute at the University of Massachusetts Amherst, 13 were found to own stock or sit on the boards of private financial institutions, but in only four cases were those affiliations revealed when they testified or wrote op-eds concerning financial regulation.

This year, the American Economic Association agreed to set up a committee to investigate whether economists should develop ethical guidelines similar to those already in place for sociologists, psychologists, statisticians and anthropologists.

But there appears to be little enthusiasm for the idea among mainstream economists. Prof. Lucas of the University of Chicago, in an interview with The New York Times, objected: “What disciplines economics, like any science, is whether your work can be replicated. It either stands up or it doesn’t. Your motivations and whatnot are secondary.”

Several billion pennies for their thoughts

The critics, however, are more numerous and considerably better financed than the French students a decade ago. In October, 2009, billionaire financier George Soros said that “the current paradigm has failed.” He resolved to help save economics from itself. He pledged $50-million toward the establishment of the New York-based Institute for New Economic Thinking (INET), with a mandate to promote changes in economic theory and practice through conferences, grants and campaigns for graduate and undergraduate education reforms.

Perry Mehrling, a professor of economics at New York’s Columbia University, is the chair of the curriculum task force at INET. He says his graduate students at Columbia are growing increasingly frustrated by the tendency to define the discipline by its tools instead of its subject matter – like the students in Paris a decade ago, they find little relationship between the mathematical models in class and the world outside the door.

Prof. Mehrling believes that economics education has become far too insular. Never mind cross-disciplinary study – even courses in economic history and the history of economic thought have all but disappeared, so students spend almost no time reading Smith, Keynes or other past masters.

“It’s not just that we’re not listening to sociologists,” Prof. Mehrling laments. “We’re not even listening to economists.”

He says he has no problem with teaching efficient-markets and rational-expectations theories, but as hypotheses, not catechism. “I object to the idea that these are articles of faith and if you don’t accept them, you are not a member of the tribe. These things need to be questioned and we need a broader conversation.”

The challenge, as Columbia University economist Joseph Stiglitz said at the opening conference of INET, is that “we need better theories of persistent deviations from rationality.”

Some of those theories are coming from the rapidly growing field of behavioural economics, which borrows insights about human motivation from cognitive psychology: A paper titled The Hubris Hypothesis of Corporate Takeovers, for example, examines how the egos of ambitious chief executive officers can lead them to pursue takeovers, even when all available evidence suggests that the move could be a disaster.

It is not yet clear how such new approaches can evolve into workable models, but they hint at what a post-autistic economics might look like.

Prof. Mehrling is cautiously optimistic. “There’s a recognition that things we thought were true aren’t necessarily true,” he argues, “and the world is more complicated and interesting than we thought – so all bets are off, and that’s exciting intellectually.”

Change comes slowly in academia. The few jobs that are available don’t generally go to people who challenge orthodoxy. But over the next decade, as the post-crash crop of economics students makes its impact felt in government, business and schools, the lessons learned may well seep into the mainstream.

Theories based on assumptions of rationality, efficiency and equilibrium in the marketplace are likely to be treated with a great deal more skepticism. Homo economicus is a lot more anxious, irrational, unpredictable and complex than most economists believed. And, as Adam Smith recognized, he has a moral and ethical dimension that should not be ignored.

Today, the Post-Autistic Economics Network continues to publish its newsletter, now known as the Real-World Economic Review. It remains a thorn in the side of mainstream economics. In an editorial in January, 2010, the editors called for major economics organizations to censure those economists who “through their teachings, pronouncements and policy recommendations facilitated the global financial collapse” and pointed to the “continuing moral crisis within the economics profession.”

It is unlikely that Prof. Sargent will acknowledge any of this when he travels to Stockholm to accept his (sort of) Nobel Prize in December. Nor is he likely to speak about what role, if any, his models really might have played in “wrecking the world.”

But he did make one concession in his interview with the Nobel website this week: “Many of the practical problems are ahead of where the models are,” he admitted. “That’s life.”

Ira Basen is a radio producer, journalist and educator based in Toronto.

Rick Perry officials spark revolt after doctoring environment report (The Guardian)

Scientists ask for names to be removed after mentions of climate change and sea-level rise taken out by Texas officials

Suzanne Goldenberg, US environment correspondent
guardian.co.uk, Friday 14 October 2011 13.05 BST


Rick Perry’s administration deleted references to climate change and sea-level rise from the report. Photograph: Evan Vucci/AP

Officials in Rick Perry’s home state of Texas have set off a scientists’ revolt after purging mentions of climate change and sea-level rise from what was supposed to be a landmark environmental report. The scientists said they were disowning the report on the state of Galveston Bay because of political interference and censorship from Perry appointees at the state’s environmental agency.

By academic standards, the protest amounts to the beginnings of a rebellion: every single scientist associated with the 200-page report has demanded their names be struck from the document. “None of us can be party to scientific censorship so we would all have our names removed,” said Jim Lester, a co-author of the report and vice-president of the Houston Advanced Research Centre.

“To me it is simply a question of maintaining scientific credibility. This is simply antithetical to what a scientist does,” Lester said. “We can’t be censored.” Scientists see Texas as at high risk because of climate change, from the increased exposure to hurricanes and extreme weather on its long coastline to this summer’s season of wildfires and drought.

However, Perry, in his run for the Republican nomination, has elevated denial of science, from climate change to evolution, to an art form. He opposes any regulation of industry, and has repeatedly challenged the authority of the Environmental Protection Agency.

Texas is the only state to refuse to sign on to the federal government’s new regulations on greenhouse gas emissions. “I like to tell people we live in a state of denial in the state of Texas,” said John Anderson, an oceanographer at Rice University and author of the chapter targeted by the government censors.

That state of denial percolated down to the leadership of the Texas Commission on Environmental Quality. The agency chief, who was appointed by Perry, is known to doubt the science of climate change. “The current chair of the commission, Bryan Shaw, commonly talks about how human-induced climate change is a hoax,” said Anderson.

But scientists said they still hoped to avoid a clash by simply avoiding direct reference to human causes of climate change and by sticking to materials from peer-reviewed journals. However, that plan began to unravel when officials from the agency made numerous unauthorised changes to Anderson’s chapter, deleting references to climate change, sea-level rise and wetlands destruction.

“It is basically saying that the state of Texas doesn’t accept science results published in Science magazine,” Anderson said. “That’s going pretty far.”

Officials even deleted a reference to the sea level at Galveston Bay rising five times faster than the long-term average – 3mm a year compared to .5mm a year – which Anderson noted was a scientific fact. “They just simply went through and summarily struck out any reference to climate change, any reference to sea level rise, any reference to human influence – it was edited or eliminated,” said Anderson. “That’s not scientific review, that’s just straightforward censorship.”

Mother Jones has tracked the changes. The agency has defended its actions. “It would be irresponsible to take whatever is sent to us and publish it,” Andrea Morrow, a spokeswoman, said in an emailed statement. “Information was included in a report that we disagree with.”

She said Anderson’s report had been “inconsistent with current agency policy”, and that he had refused to change it. She refused to answer any questions. Campaigners said the censorship by the Texas state authorities was a throwback to the George Bush era when White House officials also interfered with scientific reports on climate change.

In the last few years, however, such politicisation of science has spread to the states. In the most notorious case, Virginia’s attorney general Ken Cuccinelli, who is a professed doubter of climate science, has spent a year investigating grants made to a prominent climate scientist Michael Mann, when he was at a state university in Virginia.

Several courts have rejected Cuccinelli’s demands for a subpoena for the emails. In Utah, meanwhile, Mike Noel, a Republican member of the Utah state legislature, called on the state university to sack a physicist who had criticised climate science doubters.

The university rejected Noel’s demand, but the physicist, Robert Davies, said such actions had had a chilling effect on the state of climate science. “We do have very accomplished scientists in this state who are quite fearful of retribution from lawmakers, and who consequently refuse to speak up on this very important topic. And the loser is the public,” Davies said in an email.

“By employing these intimidation tactics, these policymakers are, in fact, successful in censoring the message coming from the very institutions whose expertise we need.”

Occupy Wall Street’s ‘Political Disobedience’ (N.Y. Times)

October 13, 2011, 4:15 PM

By BERNARD E. HARCOURT

Our language has not yet caught up with the political phenomenon that is emerging in Zuccotti Park and spreading across the nation, though it is clear that a political paradigm shift is taking place before our very eyes. It’s time to begin to name it and, in naming, to better understand this moment. So let me propose some words: “political disobedience.”

Occupy Wall Street is best understood, I would suggest, as a new form of what could be called “political disobedience,” as opposed to civil disobedience, that fundamentally rejects the political and ideological landscape that we inherited from the Cold War.

Civil disobedience accepted the legitimacy of political institutions, but resisted the moral authority of resulting laws. Political disobedience, by contrast, resists the very way in which we are governed: it resists the structure of partisan politics, the demand for policy reforms, the call for party identification, and the very ideologies that dominated the post-War period.

Occupy Wall Street, which identifies itself as a “leaderless resistance movement with people of many … political persuasions,” is politically disobedient precisely in refusing to articulate policy demands or to embrace old ideologies. Those who incessantly want to impose demands on the movement may show good will and generosity, but fail to understand that the resistance movement is precisely about disobeying that kind of political maneuver. Similarly, those who want to push an ideology onto these new forms of political disobedience, like Slavoj Zizek or Raymond Lotta, are missing the point of the resistance.

When Zizek complained last August, writing about the European protesters in the London Review of Books, that we’ve entered a “post-ideological era” where “opposition to the system can no longer articulate itself in the form of a realistic alternative, or even as a utopian project, but can only take the shape of a meaningless outburst,” he failed to understand that these movements are precisely about resisting the old ideologies. It’s not that they couldn’t articulate them; it’s that they are actively resisting them — they are being politically disobedient.

And when Zizek now declares at Zuccotti Park “that our basic message is ‘We are allowed to think about alternatives’ . . . What social organization can replace capitalism?” ― again, he is missing a central axis of this new form of political resistance.

One way to understand the emerging disobedience is to see it as a refusal to engage these sorts of worn-out ideologies rooted in the Cold War. The key point here is that the Cold War’s ideological divide — with the Chicago Boys at one end and the Maoists at the other — merely served as a weapon in this country for the financial and political elite: the ploy, in the United States, was to demonize the chimera of a controlled economy (that of the former Soviet Union or China, for example) in order to prop up the illusion of a free market and to legitimize the fantasy of less regulation — of what was euphemistically called “deregulation.” By reinvigorating the myth of free markets, the financial and political architects of our economy over the past three-plus decades — both Republicans and Democrats — were able to disguise massive redistribution toward the richest by claiming they were simply “deregulating” when all along they were actually reregulating to the benefit of their largest campaign donors.

This ideological fog blinded the American people to the pervasive regulatory mechanisms that are necessary to organize a colossal late-modern economy and that necessarily distribute wealth throughout society — and in this country, that quietly redistributed massive amounts of wealth to the richest 1 percent. Many of the voices at Occupy Wall Street blame political ideology on both sides, on the side of free markets but also on the side of big government, for serving the few at the expense of the other 99 percent — for paving the way to an entrenched permissive regulatory system that “privatizes gains and socializes losses.”

A protest march through the financial district of New York on October 12. Photograph: Lucas Jackson/Reuters

The central point, of course, is that it takes both a big government and the illusion of free markets to achieve such massive redistribution. If you take a look at the tattered posters at Zuccotti Park, you’ll see that many are intensely anti-government and just as many stridently oppose big government.

Occupy Wall Street is surely right in holding the old ideologies to account. The truth is, as I’ve argued in a book, “The Illusion of Free Markets,” and recently in Harper’s magazine, there never have been and never will be free markets. All markets are man-made, constructed, regulated and administered by often-complex mechanisms that necessarily distribute wealth — that inevitably distribute wealth — in large and small ways. Tax incentives for domestic oil production and lower capital gains rates are obvious illustrations. But there are all kinds of more minute rules and regulations surrounding our wheat pits, stock markets and economic exchanges that have significant wealth effects: limits on retail buyers flipping shares after an I.P.O., rulings allowing exchanges to cut communication to non-member dealers, fixed prices in extended after-hour trading, even the advent of options markets. The mere existence of a privately chartered organization like the Chicago Board of Trade, which required the state of Illinois to criminalize and forcibly shut down competing bucket shops, has huge redistributional wealth effects on farmers and consumers — and, of course, bankers, brokers and dealers.

The semantic games — the talk of deregulation rather than reregulation — would have been entertaining had it not been for their devastating effects. As the sociologist Douglas Massey minutely documents in “Categorically Unequal,” after decades of improvement, the income gap between the richest and poorest in this country has dramatically widened since the 1970s, resulting in what social scientists now refer to as a U-curve of increasing inequality. Recent reports from the Census Bureau confirm this, with new evidence last month that “the number of Americans living below the official poverty line, 46.2 million people, was the highest number in the 52 years the bureau has been publishing figures on it.” Today, 27 percent of African-Americans and 26 percent of Hispanics in this country — more than 1 in 4 — live in poverty; and 1 in 9 African-American men between the ages of 20 and 34 are incarcerated.

It’s these outcomes that have pushed so many in New York City and across the nation to this new form of political disobedience. It’s a new type of resistance to politics tout court — to making policy demands, to playing the political games, to partisan politics, to old-fashioned ideology. It bears a similarity to what Michel Foucault referred to as “critique”: resistance to being governed “in this manner,” or what he dubbed “voluntary insubordination” or, better yet, as wordplay on the famous expression of Etienne de la Boétie, “voluntary unservitude.”

If this concept of “political disobedience” is accurate and resonates, then Occupy Wall Street will continue to resist making a handful of policy demands because it would have little effect on the constant regulations that redistribute wealth to the top. The movement will also continue to resist Cold War ideologies from Friedrich Hayek to Maoism — as well as their pale imitations and sequels, from the Chicago School 2.0 to Alain Badiou and Zizek’s attempt to shoehorn all political resistance into a “communist hypothesis.”

On this account, the fundamental choice is no longer the ideological one we were indoctrinated to believe — between free markets and controlled economies — but rather a continuous choice between kinds of regulation and how they distribute wealth in society. There is, in the end, no “realistic alternative,” nor any “utopian project” that can avoid the pervasive regulatory mechanisms that are necessary to organize a complex late-modern economy — and that’s the point. The vast and distributive regulatory framework will neither disappear with deregulation, nor with the withering of a socialist state. What is required is constant vigilance of all the micro and macro rules that permeate our markets, our contracts, our tax codes, our banking regulations, our property laws — in sum, all the ordinary, often mundane, but frequently invisible forms of laws and regulations that are required to organize and maintain a colossal economy in the 21st-century and that constantly distribute wealth and resources.

In the end, if the concept of “political disobedience” accurately captures this new political paradigm, then the resistance movement needs to occupy Zuccotti Park because levels of social inequality and the number of children in poverty are intolerable. Or, to put it another way, the movement needs to resist partisan politics and worn-out ideologies because the outcomes have become simply unacceptable. The Volcker rule, debt relief for working Americans, a tax on the wealthy — those might help, but they represent no more than a few drops in the bucket of regulations that distribute and redistribute wealth and resources in this country every minute of every day. Ultimately, what matters to the politically disobedient is the kind of society we live in, not a handful of policy demands.

Secitece hosts the 1st “Ceará Faz Ciência” Forum (Funcap)

BY ADMIN, 13/10/2011

With the Communications Office of Secitece

The event will be held on October 17 and 18 in the auditorium of the Planetarium at the Centro Dragão do Mar de Arte e Cultura.

On October 17 and 18, the Department of Science, Technology and Higher Education (Secitece) will hold the “I Fórum Ceará Faz Ciência” on the theme “Climate change, natural disasters and risk prevention”. The initiative is part of the state program for the National Science and Technology Week.

The Secretary of Science and Technology, René Barreira, will open the event on the 17th at 5 p.m. in the auditorium of the Rubens de Azevedo Planetarium. On the occasion, tribute will be paid to the Ceará researcher Expedito Parente, known as the father of biodiesel, who died in September.

On the 18th, from 9 a.m., activities resume with the following talks: “A giant wave on the Brazilian coast: is it possible?”, by Prof. Francisco Brandão, head of the Seismology Laboratory of the State Civil Defense Coordination Office, and “The four seasons of the year in Ceará: notice their effects on physiology and the environment”, given by Dermeval Carneiro, professor of physics and astronomy, president of the Sociedade Brasileira dos Amigos da Astronomia and director of the Rubens de Azevedo Planetarium – Dragão do Mar.

In the afternoon, from 2:30 p.m., Lieutenant Colonel Leandro Silva Nogueira, executive secretary of the State Civil Defense Coordination Office, will give the talk “Natural disasters: how to prevent and act in risk situations”. Closing the forum, at 4:30 p.m., Sonia Barreto Perdigão, an agronomist with the Department of Water Resources and Environment of the Fundação Cearense de Meteorologia e Recursos Hídricos (Funceme), will speak on “Climate Change and Desertification in Ceará”.

Those interested in taking part in the 1st “Ceará Faz Ciência” Forum, to be held on October 17 and 18 at the Dragão do Mar in Fortaleza, must pre-register. The registration form is available on the Secitece website. Participation is free.

Details
1st “Ceará Faz Ciência” Forum
Date: October 17 and 18, 2011
Venue: Auditorium of the Rubens de Azevedo Planetarium
Information: (85) 3101-6466
Registration is free.

Will the information superhighway turn into a cul-de-sac because of automated filters? (The Wall Street Journal)

BOOKSHELF | MAY 20, 2011
Your Results May Vary

By PAUL BOUTIN

Last year Eli Pariser, president of the board of the liberal-activist site MoveOn.org, had a shocking realization. A heavy Facebook user, he had become friends—at least on Facebook—with an assortment of conservative thinkers and pundits. As a serious thinker, he wanted to have his opinions on current events challenged by those with opposing political ideologies.

But it struck Mr. Pariser one day that he hadn’t seen a single status update from any of the loyal opposition in a while. Had his sources of conservative thought stopped posting? Had they unfriended him? No, Facebook had quietly stopped inserting their updates into his news feed on the site. Had the social-networking giant figured out that he was a liberal?

It turned out that Facebook had changed the algorithm for its news feeds, in response to its users’ complaints that they were being overwhelmed by updates from “friends” whom they hardly knew. The 600-million-member social network now filters status updates so that, by default, users see only those from Facebook friends with whom they’ve recently interacted—say, by sending a message or commenting on a friend’s post.

For Mr. Pariser, the algorithm change meant that his news feed was filtered to let him know about only the mostly left-leaning people with whom he bantered, leaving out conservative voices that he simply monitored. Facebook’s algorithm has no political parameters, but for Mr. Pariser it effectively muffled the people he most disagreed with but wanted to hear.
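The mechanism Mr. Pariser describes can be sketched in a few lines. The toy filter below (all names, data and the 30-day threshold are invented for illustration, not Facebook's actual code) shows how a simple interaction-recency rule, with no political parameters at all, can still silence the voices a user merely reads without engaging:

```python
from datetime import datetime, timedelta

def filter_feed(updates, interactions, now, window_days=30):
    """Keep updates only from friends interacted with in the last window_days.

    updates: list of (friend, text) pairs in the raw feed
    interactions: dict mapping friend -> datetime of last interaction
    """
    cutoff = now - timedelta(days=window_days)
    # Friends with no recorded interaction default to "never" and are dropped.
    return [(friend, text) for friend, text in updates
            if interactions.get(friend, datetime.min) >= cutoff]

now = datetime(2011, 5, 20)
updates = [("ally", "agreed with you today"),
           ("rival", "a view you wanted to hear")]
# You comment on allies' posts but only read rivals' posts, so only
# the ally registers as a recent interaction.
interactions = {"ally": now - timedelta(days=2),
                "rival": now - timedelta(days=90)}

print(filter_feed(updates, interactions, now))
```

Nothing in the rule inspects content or politics; the dissenting voice simply falls below the recency cutoff and disappears from the feed.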

This sifting-out of seemingly dead connections—which might strike many people as a wonderful service—spurred Mr. Pariser to undertake a months-long exploration of the growing trend of personalized content on websites. In “The Filter Bubble,” he recounts what he found. “I was struck by the degree to which personalization is already upon us. Not only on Facebook and Google, but on almost every major site on the Web.”

It’s no secret that Amazon, for example, customizes its pages to suggest products that are most likely to be of interest, based on shoppers’ past purchases. But most Google users don’t realize that, since 2009, their search results have been gradually personalized based on the user’s location, search history and other parameters. By tracking individual Web browsers with cookies, Google has been able to personalize results even for users who don’t create a personal Google account or are not logged into one. Mr. Pariser asked two friends to search for “BP” shortly after the Deepwater Horizon oil spill last year. The two were shown strikingly different pages—one full of news about the disaster, the other mostly investor information about the energy company.

Personalization is meant to make Internet users happy: It shows them information that mathematical calculations indicate is more likely than generalized content to be of interest. Google’s personalized search results track dozens of variables to deliver the links that a user is predicted to be most likely to click on. As a result, Google users click on more of the results that they get. That’s good for Google, good for its advertisers, good for other websites and presumably good for the user.

But Mr. Pariser worries that there’s a dark downside to giving people their own custom version of the Internet. “Personalization isn’t just shaping what we buy,” he writes. “Thirty-six percent of Americans under thirty get their news through social networking sites.” As we become increasingly dependent on the Internet for our view of the world, and as the Internet becomes more and more fine-tuned to show us only what we like, the would-be information superhighway risks becoming a land of cul-de-sacs, with each of its users living in an individualized bubble created by automated filters—of which the user is barely aware.

To Mr. Pariser, these well-intended filters pose a serious threat to democracy by undermining political debate. If partisans on either side of the issues seem uninterested in the opposition’s thinking nowadays, wait until Google’s helpful sorters really step up their game.

Through interviews with influential Internet experts including Google News chief Krishna Bharat, Search Engine Land editor Danny Sullivan and Microsoft researcher Danah Boyd, Mr. Pariser exposes the problem with personalization: It’s hard enough for an army of researchers to create algorithms that can match each of us with things we like. It’s nearly impossible, by contrast, to craft a formula that will show us something we wouldn’t seek out but really ought to read—and will be glad we did. Beyond throwing random links onto a screen, it’s hard to model serendipity on a computer.

And there’s another problem with filters: People like them. The Internet long ago became overwhelming. Filters help make it manageable without our having to do the work of sorting through its content entirely by ourselves.

What to do? Mr. Pariser’s opening argument in “The Filter Bubble” is a powerful indictment of the current system. But his closing chapters fumble around in search of a solution — from individuals, from companies like Google or from government oversight. How do you tell the Internet to back off a bit on the custom content?

For now, the best Mr. Pariser can hope for is to educate readers who don’t want to live in a solipsistic subset of the Internet, especially regarding political matters. Just knowing that Google and Facebook personalize what you see, and that you can turn it off if you want—on Facebook, click Most Recent instead of Top News atop your feed; for Google, get instructions by searching “deleting Web history”—is a good start. “The Filter Bubble” is well-timed: The threat is real but not yet pandemic. Major news sites are toying with personalization but haven’t rolled it out en masse. And in a test I conducted myself, I enlisted a handful of heavy Google users across America to search for “Bin Laden raid” soon after the event. The search results that came back were all nearly identical. To tell the truth, we were kind of disappointed.

Mr. Boutin writes about Internet technology and culture for MIT Technology Review, Wired and the New York Times.

The Filter Bubble
By Eli Pariser
The Penguin Press, 294 pages, $25.95

Making Funny with Climate Change (The Yale Forum on Climate Change & The Media)

Keith Kloor   September 30, 2011

Comedy may be able to make inroads with audiences in ways that ‘serious journalism’ often cannot. With stakes as high as climate science suggests, communicators should not shy away from the risks of injecting humor where appropriate.

 

Last week, Colorado-based science journalist Michelle Nijhuis lamented the standard environmental news story. She wrote:

“Environmental journalists often feel married to the tragic narrative. Pollution, extinction, invasion: The stories are endless, and endlessly the same. Our editors see the pattern and bury us in the back pages; our readers see it and abandon us on the subway or in the dentist’s office.”

 


A welcome exception to this rule, Nijhuis noted, was New Yorker writer Ian Frazier, who has injected humor into the many environmentally themed nonfiction pieces he’s penned over the years.

This might also be the key to the success of Carl Hiaasen’s best-selling novels. There is nothing new about the sleazy politics and environmental destruction that are regular themes of his books. But it all gets digested through wickedly funny scenes and lampooned characters. There are no sacred cows, either. Tree huggers and traditional eco-villains get equally caricatured.

Writers have had a harder time using humor to communicate global warming. In the non-fiction universe, there are no Ian Fraziers tackling the issue in a quirky, sideways manner. Journalists in mainstream media treat the topic somberly and dutifully. Exhaustion may be setting in for some. Recently NPR’s Robert Krulwich wrote:

“I got a call the other day from some producer I very much admire. They wanted to talk about a series next year on global warming and I thought, why does this subject make me instantly tired? Global warming is important, yes; controversial, certainly; complicated (OK by me); but somehow, even broaching this subject makes me feel like someone’s putting heavy stones in my head.”

But if reporters are getting jaded, TV writers and comedians are eagerly joining the fray. Recent satirical novels by acclaimed writers such as Jonathan Franzen and Ian McEwan have also tackled climate change.

Whether any of these pop culture and high-minded literary endeavors is influencing attitudes is impossible to know. Still, some climate communicators see humor as their best chance to make climate issues resonate with the public at large, though the tack can be a double-edged sword, as one climate campaigner notes:

“Humor’s capacity for radical imagination creates a mental space for potential change but also comes with a loss of control as it breaks taboos and turns the order of reality upside down and inside out. Indeed, because of this ability to destabilize the established order, George Orwell stated that every joke is a tiny revolution. It denudes power of its authority, which is true of those that we oppose but also those that we cherish. Using humor to communicate on climate change means that scientists and environmentalists lose the monopoly on framing climate change and even risk becoming the butt of the joke. However uncomfortable, this may be necessary if we truly want the public at large to take ownership of the issue.”

That some attempts at humor can backfire has already been demonstrated. But if the stakes are as high as climate science suggests, then that’s a risk climate communicators should not be afraid to take.

Keith Kloor

Keith Kloor is a New York City-based freelance journalist who writes often about the environment and climate change. (E-mail: keith@yaleclimatemediaforum.org)

A Map of Organized Climate Change Denial (Dot Earth, N.Y. Times)

October 2, 2011, 3:51 PM

By ANDREW C. REVKIN

Oct. 3, 9:00 p.m. | Updated 
A chart of “key components of the climate change denial machine” has been produced by Riley E. Dunlap, regents professor of sociology at Oklahoma State University, and Aaron M. McCright, an associate professor of sociology at Michigan State University. The diagram below (reproduced here with permission) is from a chapter the two researchers wrote on organized opposition to efforts to curb greenhouse gases for the new Oxford Handbook of Climate Change and Society.
That there are such well-financed and coordinated efforts is not contentious. And this is not the first attempt to map them.

But it’s important to keep in mind that not everyone skeptical of worst-case predictions of human-driven climate disruption, or everyone opposed to certain climate policies, is part of this apparatus.

And there’s plenty to chart on the other edge of the climate debate — those groups and outlets pursuing a traditional pollution-style approach to greenhouse gases.

[Oct. 3, 9:00 p.m. | Updated As it happens, the blogger behind Australian Climate Madness has posted a skeptics’ map of “the climate alarmism machine.” (see below) I think some, though by no means all, aspects of the map are not bad. But, as with so much of the climate debate, it is an overdrawn, overblown caricature of reality.]

It’s also important to examine whether a world without such efforts — in which citizens had a clear view of both what is known, and uncertain, about the human factor in shaping climate-related risks — would appreciably change. Some insist the answer is yes. Given the deep-rooted human bias to the near and now and other aspects of our “inconvenient mind,” I’m not nearly so sure (although this doesn’t stop me from working on this challenge, of course).

The Folly of Prediction: Full Transcript (Freakonomics.com)

FREAKONOMICS

06/30/2011 | 4:58 pm

Stephen J. DUBNER: What does it mean to be a witch exactly in Romania? Are these people that we know here as psychics or fortunetellers, or are they different somehow?

Vlad MIXICH: I don’t know how is the fortuneteller in the United States. But here generally they are a woman of different ages. They can–they say they can cure some diseases. They can bring back your husband or your wife. Or they can predict your future.

DUBNER: Who is a typical client for a witch?

MIXICH: There are quite a lot of politicians who are going to witches. You know the French president, Nicolas Sarkozy, he went to witches last year. And our president in Romania, and very important politicians from different parties, they are going to witches. Some of them they were obliged to recognize they went to witches. Some of them it’s an off-the-record information. But me being a journalist, I know that information.

DUBNER: Vlad Mixich is a reporter in Bucharest, the capital of Romania. He knows a good bit about the witches there.

MIXICH: Quite a lot of them they are quite rich. They have very big houses with golden rooftops. A lot of the Romanians, they are living in small apartments in blocks. So, just going in such a building will give you a sense of majesty and respect.

DUBNER: But the Romanian witch industry has been under attack. First came a proposed law to regulate and tax the witches. It passed in one chamber of Parliament before stalling out. But then came another proposal arguing that witches should be penalized if the predictions they make don’t turn out to be true.

MIXICH: So if you are one of my clients, and if I’m a fortune teller, if I fail to predict your future, I pay a quite substantial fine to the state, or if this happens many times, I will even go to jail. The punishment is between six months and three years in jail.

DUBNER: What’s being proposed in Romania is revolutionary. It strikes me because we typically don’t hold anybody accountable for bad predictions. So, I’m wondering in Romania, let’s say, if a politician makes a bad prediction, do they get fined or penalized in any way?

MIXICH: No, not at all. In fact this is one of the hobbies of our president. He’s doing a lot of predictions, which are not coming true, of course. And after that he is reelected! Or his popularity is rising, like the sun in the morning, you know? No, anyone can do publicly a lot of predictions here in eastern Europe and not a single hair will move from his or her head.

DUBNER: C’mon people, that doesn’t seem fair, does it? I don’t care if you’re anti-witch or pro-witch or witch-agnostic. Why should witches be the only people held accountable for bad predictions? What about politicians and money managers and sports pundits? And what about you?

[THEME]

ANNOUNCER: From WNYC and APM, American Public Media, this is Freakonomics Radio. Today: The Folly of Prediction. Here’s your host, Stephen Dubner.

DUBNER: All of us are constantly predicting the future, whether we think about it or not. Right now, some small part of your brain is trying to predict what this show is going to be about. How do you do that? You factor in what you’ve heard so far. What you know about Freakonomics. Maybe you know a lot, maybe you’ve never heard of it, you might think it’s some kind of communicable disease! When you predict the future, you look for cognitive cues, for data, for guidance. Here’s where I go for guidance.

Steven LEVITT: I think to an economist, the best explanation for why there are so many predictions is that the incentives are set up in order to encourage predictions.

DUBNER: That’s Steve Levitt. He’s my Freakonomics friend and co-author, an economist at the University of Chicago.

LEVITT: So, most predictions we remember are ones which were fabulously, wildly unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. If you look at all the people, the economists, who talked about the financial crisis ahead of time, those guys harp on it constantly. “I was right, I was right, I was right.” But if you’re wrong, there’s no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there’s nobody who has a strong incentive, usually, to go back and say, Here’s the list of the 118 predictions that were false. I remember growing up, my mother, who is somewhat of a psychic–

DUBNER: Wait, somewhat of a psychic?

LEVITT: She’s a self-proclaimed psychic. And she would predict a stock market crash every single year.

DUBNER: And she’s been right a couple times.

LEVITT: And she has been. She’s been right twice in the last 15 years, and she would talk a lot about the times she was right. I would have to remind her about the 13 times that she was wrong. And without any sort of market mechanism or incentive for keeping the prediction makers honest, there’s lots of incentive to go out and to make these wild predictions. And those are the ones that are remembered and talked about. Think of one of the predictions that you hear echoed more often than just about any other: Joe Namath’s famous pronouncement about how the Jets were going to win the Super Bowl. And it was unexpected. And it happened. And if the Jets had lost the Super Bowl, nobody would remember that Joe Namath made that pronouncement.

DUBNER: And conversely, you can probably find at least one player on every team that’s lost the Super Bowl in the last forty years that did predict that his team would win.

LEVITT: That’s probably right. That’s exactly right. Now, the flip side, which is perhaps surprising, is that in many cases the goal of prediction is to be completely within the pack. And so I see this a lot with pension fund managers, or endowment managers, which is if something goes wrong then as long as everybody else made the same prediction, you can’t be faulted very much.

DUBNER: Pension managers. Football players. Psychic moms. Romanian witches. Who doesn’t try to predict the future these days?

[SOUND MONTAGE OF PREDICTIONS]

DUBNER: And you know the worst thing? There’s almost nobody keeping track of all those predictions! Nobody … except for this guy …

Philip TETLOCK: Well, I’m a research psychologist, who …

DUBNER: Don’t forget your name, though.

TETLOCK: I’m Phil Tetlock and I’m a research psychologist. I spent most of my career at the University of California, Berkeley, and I recently moved to the University of Pennsylvania where I’m cross-appointed in the Wharton School and the psychology department.

DUBNER: Philip Tetlock has done a lot of research on cognition and decision-making and bias, pretty standard stuff for an Ivy League psych PhD. But what really fascinates him is prediction.

TETLOCK: There are a lot of psychologists who believe that there is a hard-wired human need to believe that we live in a fundamentally predictable and controllable universe. There’s also a widespread belief among psychologists that people try hard to impose causal order on the world around them, even when those phenomena are random.

DUBNER: This hardwired human need, as Tetlock puts it, has created what he calls a prediction industry. Now, don’t sneer. You’re part of it, too.

TETLOCK: I think there are many players in what you might count the prediction industry. In some sense we’re all players in it. Whenever we go to a cocktail party, or a colloquium, or whatever where opinions are being shared, we frequently make likelihood judgments about possible futures. And the truth or falsity of particular claims about futures. The prediction business is a big business on Wall Street, and we have futures markets and so forth designed to regulate speculation in those areas. Obviously, government has great interest in prediction. They create large intelligence agency bureaucracies and systems to help them achieve some degree of predictability in a seemingly chaotic world.

DUBNER: Let me read something that you have said or written in the past. “This determination to ferret out order from chaos has served our species well. We’re all beneficiaries of our great collective successes in pursuit of deterministic regularities in messy phenomena — agriculture, antibiotics, and countless other inventions.” So talk to me for a moment about the value of prediction. Obviously much has been gained, and much is still to be gained. Do we overvalue prediction though, perhaps?

TETLOCK: I think there’s an asymmetry of supply and demand. I think there is an enormous demand for accurate predictions in many spheres of life in which we don’t have the requisite expertise to deliver. And when you have that kind of gap between demand and real supply you get the infusion of fake supply.

DUBNER: “Fake supply.” I like this guy, this Philip Tetlock. He’s not an economist, but he knows the laws of supply and demand can’t just be revoked. So if there’s big demand for prediction in all realms of life, and not enough real supply to satisfy it, what does this “fake supply” sound like?

[SOUND MONTAGE OF COULDS]

DUBNER: There’s a punditocracy out there, a class of people who predict ad nauseam, often on television. They can be pretty good at making their predictions tough to audit.

TETLOCK: It’s the art of appearing to go out on a limb without actually going out on a limb. For example, the word “could,” something “could” happen, the room you happen to be sitting in could be struck by a meteor in the next 23 seconds. That makes perfect sense, but the probability of course is point zero, zero, zero, zero, et cetera, one. It’s not zero, but it’s extremely low. In fact, the word “could,” the possible meanings people attach to it range from 0.01 to 0.6, which covers more than half the probability scale right there.

DUBNER: Look, nobody likes a weasel. So more than 20 years ago, Tetlock set out to conduct one of the largest empirical studies, ever, of predictions. He chose to focus on predictions about political developments around the world. He enlisted some of the world’s foremost experts — the kind of very smart people who have written definitive books, who show up on CNN or on the Times’s op-ed page.

TETLOCK: In the end we had close to three hundred participants. And they were very sophisticated political observers. Virtually all of them had some post-graduate education. Roughly two-thirds of them had PhDs. They were largely political scientists, but there were some economists and a variety of other professionals as well.

DUBNER: And they all participated in your study anonymously, correct?

TETLOCK: That was a very important condition for obtaining cooperation.

DUBNER: Now, if they were not anonymous then presumably we would recognize some of their names, these are prominent people at political science departments, economics departments at I’m guessing some of the better universities around the world, is that right?

TETLOCK: Well, I don’t want to say too much more, but I think you would recognize some of them, yes. I think some of them had substantial Google counts.

DUBNER: The study became the basis of a book Tetlock published a few years ago, called “Expert Political Judgment.” There were two major rounds of data collection, the first beginning in 1988, the other in 1992. These nearly 300 experts were asked to make predictions about dozens of countries around the world. The questions were multiple choice. For instance: In Democracy X — let’s say it’s England — should we expect that after the next election, the current majority party will retain, lose, or strengthen its status? Or, for Undemocratic Country Y — Egypt, maybe — should we expect the basic character of the political regime to change in the next five years? In the next 10 years? And if so, in what direction? And to what effect? The experts made predictions within their areas of expertise, and outside; and they were asked to rate their confidence for their predictions. So after tracking the accuracy of about 80,000 predictions by some 300 experts over the course of 20 years, Philip Tetlock found:

TETLOCK: That experts thought they knew more than they knew. That there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing.

DUBNER: Let me translate that for you. The experts were pretty awful. And you think: awful compared to what? Did they beat a monkey with a dartboard?

TETLOCK: Oh, the monkey with a dartboard comparison, that comes back to haunt me all the time. But with respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. Did they do better than an extrapolation algorithm? No, they did not. They did for the most part a little bit worse than that. How did they do relative to purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope.

DUBNER: That “extrapolation algorithm” that Tetlock mentioned? That’s simply a computer programmed to predict “no change in current situation.” So it turned out these smart, experienced, confident experts predicted the political future about as well as, if not slightly worse than, the average daily reader of The New York Times.
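That baseline is simple enough to sketch. Here is a hypothetical comparison using invented forecasts and the Brier score, a standard accuracy measure for probability forecasts (Tetlock's actual scoring system was more elaborate, and these numbers are made up for illustration):

```python
# Brier score: mean squared gap between forecast probability and what
# happened (1 = status quo held, 0 = it didn't). Lower is better;
# a flat 50/50 guess scores 0.25 on every question.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten hypothetical country-years; the status quo usually holds.
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1]

# The "extrapolation algorithm": always bet heavily on no change.
no_change = [0.9] * len(outcomes)

# An overconfident expert who keeps forecasting upheaval that rarely comes.
expert = [0.9, 0.2, 0.9, 0.1, 0.9, 0.2, 0.9, 0.3, 0.2, 0.9]

print(brier(no_change, outcomes))  # the dumb baseline...
print(brier(expert, outcomes))     # ...beats the bold expert
```

With these made-up numbers the no-change rule scores about 0.09 and the bold expert about 0.29, which is the shape of Tetlock's finding: in slow-moving domains, betting on continuity is hard to beat.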

TETLOCK: I think the most important takeaway would be that the experts, they think they know more than they do. They were systematically overconfident. Some experts were really massively overconfident. And we are able to identify those experts based on some characteristics of their belief system and their cognitive style, their thinking style.

DUBNER: OK. So now we’re getting into the nitty-gritty of what makes people predict well or predict poorly. What are the characteristics then of a poor predictor?

TETLOCK: Dogmatism.

DUBNER: It can be summed up that easily?

TETLOCK: I think so. I think an unwillingness to change one’s mind in a reasonably timely way in response to new evidence. A tendency, when asked to explain one’s predictions, to generate only reasons that favor your preferred prediction and not to generate reasons opposed to it.

DUBNER: And I guess what’s striking to me and I’d love to hear what you had to say about this is that it’s easy to provide one word, prediction, to many, many, many different realms in life. But those realms all operate very differently — so politics is different from economics, and predicting a sports outcome is different than predicting, you know, an agricultural outcome. It seems that we don’t distinguish so much necessarily and that there’s this modern sense almost that anything can be and should be able to be predicted. Am I kind of right on that, or no?

TETLOCK: I think there’s a great deal of truth to that. I think it is very useful in talking about the predictability of the modern world to distinguish those aspects of the world that show a great deal of linear regularity and those parts of the world that seem to be driven by complex systems that are decidedly nonlinear and decidedly difficult if not impossible to predict.

DUBNER: Talk to me about a few realms that generally are very, very hard to predict, and a few realms that generally are much easier.

TETLOCK: Predicting Scandinavian politics is a lot easier than predicting Middle Eastern politics.

DUBNER: Yes, that was the first one that came to my mind too! All right, but keep going.

TETLOCK: The thing about the radically unpredictable environments is that they often appear for long periods of time to be predictable. So, for example, if you had been a political forecaster predicting regime longevity in the Middle East, you would have done extremely well predicting in Egypt that Mubarak would continue to be the president of Egypt year after year after year in much the same way that if you had been a Sovietologist you would have done very well in the Brezhnev era predicting continuity. There’s an aphorism I quote in the “Expert Political Judgment” book from Karl Marx. I’m obviously not a Marxist but it’s a beautiful aphorism that he had which was that, “When the train of history hits a curve, the intellectuals fall off.”

DUBNER: Coming up: Who do you predict we’ll hear from next — a bunch of people who are awesomely good at predicting the future? Yeah, right. Maybe later. First, we’ll hear some more duds — from Wall Street, the NFL, and … the cornfield.

[UNDERWRITING]

ANNOUNCER: From American Public Media and WNYC, this is Freakonomics Radio. Here’s your host, Stephen Dubner.

DUBNER: So Philip Tetlock has sized up the people who predict the future–geopolitical change, for instance–and determined that they’re not very good at predicting the future. He also tells us that their greatest flaw is dogmatism–sticking to their ideologies even when presented with evidence that they’re wrong. You buy that? I buy it. Politics is full of ideology; why shouldn’t the people who study politics be at least a little bit ideological? So let’s try a different set of people, people who make predictions that, theoretically at least, have nothing to do with ideology. Let’s go to Wall Street.

[SOUND EFFECT: WALL STREET MONTAGE]

Christina FANG: I’m Christina Fang, a Professor of Management at New York University’s business school.

DUBNER: Christina Fang, like Philip Tetlock, is fascinated with prediction:

FANG: Well, I guess generally forecasting about anything, about technology, about a product, whether it will be successful, about whether an idea, a venture idea could take off, a lot of things, not just economic but also business in general.

DUBNER: Fang wasn’t interested in just your street-level predictions, though. She wanted to know about the Big Dogs, the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with a fellow researcher, Jerker Denrell, Fang gathered data from the Wall Street Journal’s Survey of Economic Forecasts. Every six months, the paper asked about 50 top economists to predict a set of macroeconomic numbers — unemployment, inflation, gross national product, things like that. Fang audited seven consecutive surveys, with an eye toward a particular question: when someone correctly predicts an extreme event — a market crash, maybe, or a sudden spike in inflation — what does that say about his overall forecasting ability?

FANG: In the Wall Street Journal survey if you look at the extreme outcomes, either extremely bad or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they’re likely to have a lower overall level of accuracy. In other words, they’re doing poorer in general.

DUBNER: Uh-oh. You catching this?

FANG: Those people who happen to predict accurately the extreme events, we also look at their–they happen to also have a lower overall level of accuracy.

DUBNER: So I can be right on the big one but if I’m right on the big one I generally will tend to be more often wrong than the average person.

FANG: On average–

DUBNER: On average.

FANG: Across everyday predictions as well. And our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be.
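Fang and Denrell's point can be illustrated with a toy simulation (invented numbers, not their data): a forecaster who often calls the extreme outcome is the one most likely to "call the big one," yet carries a worse average error than a cautious forecaster who never does.

```python
import random

random.seed(7)

# Toy model: each period the true value is usually near zero, rarely extreme.
def truth():
    return 10.0 if random.random() < 0.05 else random.gauss(0, 1)

# A "bold" forecaster often predicts the extreme; a cautious one never does.
def forecast(bold):
    if bold and random.random() < 0.3:
        return 10.0
    return 0.0

bold_errs, cautious_errs = [], []
bold_hit = cautious_hit = False
for _ in range(2000):
    t = truth()
    fb, fc = forecast(True), forecast(False)
    bold_errs.append(abs(fb - t))
    cautious_errs.append(abs(fc - t))
    if t == 10.0 and fb == 10.0:
        bold_hit = True       # bold forecaster "called the big one"
    if t == 10.0 and fc == 10.0:
        cautious_hit = True   # never happens: cautious never predicts 10

print(sum(bold_errs) / len(bold_errs))        # bold: large average error
print(sum(cautious_errs) / len(cautious_errs))  # cautious: much smaller
print(bold_hit, cautious_hit)
```

In this setup, only the bold forecaster ever nails the extreme event, but the many false alarms drive the bold forecaster's average error well above the cautious baseline, which is the selection effect Fang describes.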

DUBNER: So the people who make big, bold, correct predictions are in general worse than average at predicting the economic future. Now, why is this a problem? Maybe they’re just like home-run hitters — y’know, a lot of strikeouts but a lot of power too. All right, I’ll tell you why it’s a problem. Actually, I’ll have Steve Levitt tell you.

LEVITT: The incentives for prediction makers are to make either cataclysmic or utopian predictions, right? Because you don’t get attention if I say that what’s going to happen tomorrow is exactly as what’s going to happen today…

DUBNER: You don’t get on TV.

LEVITT: I don’t get on TV. If it happens to come true, who cares? I don’t get any credit for it coming true either.

DUBNER: There’s a strong incentive to make extreme predictions; because, seriously, who tunes in to hear some guy say that “Next year will be pretty much like last year”? And once you have been right on an extreme forecast — let’s say you predicted the 2008 market crash and the Great Recession — even if you were predicting it every year, like Steve Levitt’s mother — you’ll still be known as The Guy Who Called the Big One. And even if all your follow-up predictions are wrong, you still got the Big One right. Like Joe Namath.

All right, look. Predicting the economy? Predicting the political future? Those are hard. Those are big, complex systems with lots of moving parts. So how about football? If you’re an NFL expert, how hard can it be to forecast, say, who the best football teams will be in a given year? We asked Freakonomics researcher Hayes Davenport to run the numbers for us:

Hayes DAVENPORT: Well, I looked at the past three years of expert picking from the major NFL prediction outlets, which are USA Today, SportsIllustrated.com and ESPN.com. We looked at a hundred and five sets of picks total. They’re picking division winners for each year, as well as the wild card for that year. So they’re basically picking the whole playoff picture for that year.

DUBNER: So talk about just kind of generally the degree of difficulty of making this kind of a pick.

DAVENPORT: Well, if you’re sort of an untrained animal, making NFL picks, you’re going to have about a twenty-five percent chance of picking each division correctly because there are only four teams.

DUBNER: All right so Hayes, you’re saying that an untrained animal would be about twenty-five percent accurate if you pick one out of four. But what about a trained animal, like me, a casual fan? How do I do compared to the experts?

DAVENPORT: Right. So if you’re cutting off the worst team in each division, if you’re not picking among those you’ll be right thirty-three percent of the time, one in three, and the experts are right about thirty-six percent of the time, so just a little better than that.

DUBNER: OK, so if you’re saying they’re picking at about thirty-six percent accuracy, and I or someone by chance would pick at about thirty-three percent accuracy, that’s a three percentage point improvement, or about ten percent better. Maybe we should say, you know, that’s not bad. If you beat the stock market by ten percent every year you’d be doing great. So are these NFL pundits, at thirty-six percent, really wonderful or–

DAVENPORT: I wouldn’t say that because there’s a specific fallacy these guys are operating from, which is they tend to rely much too heavily on the previous year’s standings in making their picks for the following year. They play it very conservatively. But there’s a very high level of parity in the NFL right now, so that’s not exactly how it works.
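The arithmetic in this exchange is easy to check; with exact thirds the experts' edge works out to 2.7 percentage points, or an 8 percent relative improvement, close to the rounder "three points, about ten percent" of the conversation:

```python
# Figures quoted in the conversation above.
random_pick = 1 / 4      # untrained: one team out of four
informed_guess = 1 / 3   # casual fan who rules out the worst team
experts = 0.36           # the pundits' measured accuracy

# Percentage-point gap and relative edge over the casual-fan baseline.
gap_points = (experts - informed_guess) * 100
edge = (experts - informed_guess) / informed_guess
print(round(gap_points, 1), round(edge * 100, 1))
```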

DUBNER: Tell me some of the pundits who whether by luck or brilliance and hard work turn out to be really, really good.

DAVENPORT: Sure. There are two guys from ESPN who are sort of far ahead of the field. One is Pat Yasinskas, and the other is John Clayton, who is pretty well known; he makes a lot of appearances on SportsCenter and he’s kind of a nebbish-y, professorial type. And they perform much better than everyone else because they’re excellent wild-card pickers. They’re the only people who have correctly predicted both wild card teams in a conference in a season. But they’re especially good because they actually play it much safer than everyone else.

DUBNER: Now you say that they are very good. Persuade me that they’re good and not lucky.

DAVENPORT: I can’t do that. There’s a luck factor involved in all of these predictions. For example, if you pick the Patriots in 2008 and Tom Brady gets injured, and they drop out of the playoffs, there’s very little you can do to predict that. So injuries will mess with prediction all the time. And there are other turnover rates in football that are sort of unpredictable. So there’s a luck factor to all of this.

DUBNER: So whether it’s football experts calling Sunday’s game or economists forecasting the economy, or political pundits looking for the next revolution, we’re talking about accuracy rates that barely beat a coin toss. But maybe all these guys deserve a break. Maybe it’s just inherently hard to predict the future of other human beings. They’re so malleable; so unpredictable! So how about a prediction where human beings are incidental to the main action?

Joe PRUSACKI: I’m Joe Prusacki and I am the Director of Statistics Division with USDA’s National Agricultural Statistics Service, or NASS for short.

DUBNER: You grew up on a farm, yeah?

PRUSACKI: Uh-huh. Yep, I grew up in–I always call it “deep southern” Illinois. I’m sitting here in Washington DC and where I grew up in Illinois is further south than where I’m sitting today. We raised…we had corn, soybeans and raised hogs.

DUBNER: You’ve heard of Anna Wintour, right? The fabled editor of Vogue magazine? Joe Prusacki is kinda like Anna Wintour for farmers. He puts out publications that are read by everyone who’s anyone in the industry — titles like “Acreage” and “Prospective Plantings” and “Crop Production.” Prusacki’s reports carry running forecasts of crop yields for cotton, soybeans, wheat and corn.

PRUSACKI: Most of the time our monthly forecasts are, I can guarantee you, within five percent, and most of the time I can say within two to three percent of the final. And someone would say that seems very good. But in the agricultural world, the users expect us to be much more precise in our forecasts.

DUBNER: So how does this work? How does the USDA forecast something as vast as the agricultural output of American farmers?

PRUSACKI: Like at the beginning of March, we will conduct a large survey of farmers and ranchers across the United States, and the sample size this year was about 85,000.

DUBNER: The farmers are asked how many acres they plan to devote to each crop. Corn, let’s say. Then, in late July, the USDA sends out a small army of “enumerators” into roughly 1,900 cornfields in 10 states. These guys mark off plots of corn, 20 feet long by two rows across.

PRUSACKI: They’re randomly placed. We have randomly selected fields, and random locations within each field. So you may get a sample that’s maybe 20 paces into the field and 40 rows over, and you may get one that’s 250 paces into the field and 100 rows over.

DUBNER: The enumerators look at every plant in that plot.

PRUSACKI: And then they’ll count what they see or anticipate to be ears based on looking at the plant.

DUBNER: A month later, they go back out again and check the cornstalks, check the ears.

PRUSACKI: Well, you could have animal loss, animal might chew the plant off, the plant may die. So all along we’re updating the number of plants, all along we’re updating the number of ears. The other thing we need, you need an estimate of ear weight or fruit weight.

DUBNER: So they go out again, cut off a bunch of ears and weigh them. But wait: still not done. After the harvest, there’s one more round of measurement.

PRUSACKI: Once the field is harvested, and the machine has gone through the field, the enumerator will go back out to the field, they’ll lay out another plot–just beyond the harvest area where we were–and they will go through and pick up off the ground any kernels that are left on the ground, pieces of ears of corn and such on the ground so we get a measure of harvest loss.

DUBNER: So this sounds pretty straightforward, right? Compared to predicting something like the political or economic future, estimating corn yield based on constant physical measurements of corn plants is pretty simple. Except for one thing. It’s called the weather. Weather remains so hard to predict in the long term that the USDA doesn’t even use forecasts; it uses historic averages instead.
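The arithmetic behind an estimate like this is straightforward once the field work is done: scale the plot-level ear counts up to a per-acre figure, multiply by average ear weight, subtract harvest loss, and convert pounds to bushels. Here is a rough Python sketch of that logic; the formula is deliberately simplified and every number in it is invented, so it should be read as an illustration, not the USDA’s actual objective-yield model:

```python
# Rough sketch of an objective-yield-style corn estimate.
# The simplified formula and all figures are illustrative, not USDA's model.

def estimate_bushels_per_acre(ears_per_plot, plot_area_sq_ft,
                              grain_weight_per_ear_lb, harvest_loss_lb_per_acre):
    """Scale plot-level ear counts up to a per-acre yield estimate."""
    SQ_FT_PER_ACRE = 43_560
    LB_PER_BUSHEL = 56          # standard weight of a bushel of shelled corn
    avg_ears = sum(ears_per_plot) / len(ears_per_plot)
    ears_per_acre = avg_ears * (SQ_FT_PER_ACRE / plot_area_sq_ft)
    gross_lb = ears_per_acre * grain_weight_per_ear_lb
    net_lb = gross_lb - harvest_loss_lb_per_acre
    return net_lb / LB_PER_BUSHEL

# Two hypothetical 20-foot, two-row plots (about 100 sq ft each at 30-inch rows):
yield_est = estimate_bushels_per_acre(
    ears_per_plot=[58, 62],
    plot_area_sq_ft=100.0,
    grain_weight_per_ear_lb=0.33,
    harvest_loss_lb_per_acre=150.0,
)
print(round(yield_est, 1))
```

Each monthly update re-runs the same arithmetic with fresher counts and weights, which is why the published forecast drifts as the season goes on.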

DUBNER: So Joe, talk to me about what happened last year with the USDA corn forecast. You must have known this was coming from me. So the Wall Street Journal’s headline was: “USDA Flubs in Predicting Corn Crops.” Explain what happened.

PRUSACKI: Well, this is the weather factor that came into play. It turned out pretty hot and pretty dry in most of the growing region. And I had asked a few folks that are out and about in Iowa what happened. They said this is just a really strange year. We just don’t know. Now, if someone says did we flub it? I don’t know. It was the forecast based on the information I had as of August 1. Now, September 1, I had a different set of information. October 1, I had a different set of information. Could we have done a better job?

DUBNER: A lot of people thought they could have. Last June, the USDA lowered its estimate of corn stockpiles; and in October, it cut its estimate of corn yield. After the first report, the price of corn spiked 9 percent. The second report? Another 6 percent. Joe Prusacki got quite a few e-mails:

PRUSACKI: OK, the first one is, this was: “Thanks a lot for collapsing the grain market today with your stupid … and the word is three letters, begins with an ‘a’ and then it has two dollar signs … USDA report.

“As bad as the stench of dead bodies in Haiti must be, it can’t even compare to the foul stench of corruption emanating from our federal government in Washington DC.”

DUBNER: It strikes me that there’s room for trouble here in that your forecasts are used by a lot of different people who engage in a lot of different markets, and your research can move markets. I’m wondering what kind of bribes maybe come your way?

PRUSACKI: It’s interesting, I have people that call; we call them ‘fishers.’ They call maybe a day or two days before, when we’re finishing our work, and it’s like I tell them, I say, “Why do you do this? We’ve had this discussion before.” There’s a couple things: one, I sign a confidentiality statement every year that says I shall not release any information before its due time or bad things happen. It’s a $100,000 fine or time in prison. The dollar fine, it’s like, OK. It’s the prison part that bothers me!

DUBNER: But there’s got to be a certain price at which–so let’s say I offered you, I came to you and I said–Joe, $10 million for a 24-hour head start on the corn forecast.

PRUSACKI: I’m not going to do it. Trust me, somebody would track me down.

DUBNER: I hear you.

PRUSACKI: Again, the prison time, it bothers me.

DUBNER: All right, so Joe Prusacki probably can’t be bought. And the USDA is generally considered to do a pretty good job with crop forecasts. But: look how hard the agency has to work, measuring corn fields row by row, going back to look for animal loss and harvest loss. And still, its projection, which is looking only a few months into the future, can get thrown totally out of whack by a little stretch of hot, dry weather. That dry spell was essentially a random event, kind of like Tom Brady’s knee getting smashed. I hate to tell you this but the future is full of random events. That’s why it’s so hard to predict. That’s why it can be scary. Do we know this? Of course we know it. Do we believe it? Mmmmm.

Some scholars say that our need for prediction is getting worse — or, more accurately, that we get more upset now when the future surprises us. After all, as the world becomes more rational and routinized, we often know what to expect. I can get a Big Mac not only in New York but in Beijing, too — and they’ll taste pretty much the same. So when you’re used to that, and when things don’t go as expected — watch out.

Our species has been trying to foretell the future forever. Oracles and goat entrails and roosters pecking the dirt. The oldest religious texts are filled with prediction. I mean, look at the afterlife! What is that if not a prediction of the future? A prediction that, as far as I can tell, can never be categorically refuted or confirmed. A prediction so compelling that it remains all these years later a concept around which billions of people organize their lives. So what do you see when you gaze into the future? A yawning chasm of random events — or do you look for a neat pattern, even if no such pattern exists?

Nassim TALEB: It’s much more costly for someone to not detect a pattern.

DUBNER: That’s Nassim Taleb, the author of “Fooled By Randomness” and “The Black Swan.”

TALEB: It’s much costlier for us, as a race, to make the mistake of not seeing a leopard than having the illusion of pattern and imagining a leopard where there is none. And that error, in other words, mistaking the random for the non-random, is what I call the “one-way bias.” Now that bias works extremely well, because what’s the big deal of getting out of trouble? It’s not costing you anything. But in the modern world, it is not quite harmless. Illusions of certainty make you think that things that haven’t exhibited risk, for example the stock market, are riskless. We have the turkey problem: the butcher feeds the turkey for a certain number of days, and then the turkey imagines this is permanent.

DUBNER: “The butcher feeds the turkey and the turkey imagines this is permanent.” So you’ve got to ask yourself: who am I? The butcher? Or the turkey? Coming up: hedgehogs and foxes — and a prediction that does work. Here’s a hint: if you like this song, [MUSIC], you’ll probably like this one too: [MUSIC].

[UNDERWRITING]

ANNOUNCER: From American Public Media and WNYC, this is Freakonomics Radio.

DUBNER: Hey, guess what, Sunshine? Al Gore didn’t win Florida. Didn’t become president either. Try walking that one back. So we are congenital predictors, but our predictions are often wrong. What then? How do you defend your bad predictions? I asked Philip Tetlock what all those political experts said when he showed them their results. He had already stashed their excuses in a neat taxonomy:

TETLOCK: So, if you thought that Gorbachev, for example, was a fluke, you might argue, well, my understanding of the Soviet political system is fundamentally right, and the Soviet Politburo, but for some quirky statistical aberration, would have gone for a more conservative candidate. Another argument might be, well, I predicted that Canada would disintegrate, that Quebec would secede from Canada, and it didn’t secede, but the secession almost did succeed because there was a fifty-point-one percent vote against secession, and that’s well within the margin of sampling error.

DUBNER: Are there others you want to name?

TETLOCK: Well another popular prediction is “off on timing.” That comes up quite frequently in the financial world as well. Many very sophisticated students of finance have commented on how hard it is, saying the market can stay irrational longer than you can stay liquid, I think is George Soros’s expression. So, “off on timing” is a fairly popular belief-system defense as well. And I predicted that Canada would be gone. And you know what? It’s not gone yet. But just hold on.

DUBNER: You answered very economically when I asked you what are the characteristics of a bad predictor; you used one word, dogmatism. What are the characteristics, then, of a good one?

TETLOCK: Capacity for constructive self-criticism.

DUBNER: How does that self-criticism come into play and actually change the course of the prediction?

TETLOCK: Well, one sign that you’re capable of constructive self-criticism is that you’re not dumbfounded by the question: What would it take to convince you you’re wrong? If you can’t answer that question you can take that as a warning sign.

DUBNER: In his study, Tetlock found that one factor was more important than any other in someone’s predictive ability: cognitive style. You know the story about the fox and the hedgehog?

TETLOCK: Isaiah Berlin tells us that the quotation comes from the Greek warrior poet Archilochus 2,500 years ago. And the rough translation was the fox knows many things but the hedgehog knows one big thing.

DUBNER: So, talk to me about what the foxes do as predictors and what the hedgehogs do as predictors.

TETLOCK: Sure. The foxes tend to have a rather eclectic, opportunistic approach to forecasting. They’re very pragmatic. A famous aphorism by Deng Xiaoping was he “didn’t care if the cat was white or black as long as it caught mice.” And I think the attitude of many foxes is they really didn’t care whether ideas came from the left or the right, they tended to deploy them rather flexibly in deriving predictions. So they often borrowed ideas across schools of thought that hedgehogs viewed as more sacrosanct. There are many subspecies of hedgehog. But what they have in common is a tendency to approach forecasting as a deductive, top-down exercise. They start off with some abstract principles, and they apply those abstract principles to messy, real-world situations, and the fit is often decidedly imperfect.

DUBNER: So foxes tend to be less dogmatic than hedgehogs, which makes them better predictors. But, if you had to guess, who do you think is more likely to show up on TV or in an op-ed column, the pragmatic, nuanced fox or the know-it-all hedgehog?

[SOUND MONTAGE]

DUBNER: You got it!

TETLOCK: Hedgehogs, I think, are more likely to offer quotable sound bites, whereas foxes are more likely to offer rather complex, caveat-laden sound bites. They’re not sound bites anymore if they’re complex and caveat-laden.

DUBNER: So, if you were to gain control of, let’s say, a really big media outlet, the New York Times, or NBC TV, and you said, you know, I want to dispense a different kind of news and analysis to the public, what would you do? How would you suggest building a mechanism to do a better job of keeping this kind of poor expert prediction off the airwaves?

TETLOCK: I’m so glad you asked that question. I have some specific ideas about that. And I don’t think they would be all that difficult to implement. I think they should try to keep score more. I think there’s remarkably little effort in tracking accuracy. If you happen to be someone like Tom Friedman or Paul Krugman, or someone who’s at the top of the pundit pecking order, there’s very little incentive for you to want to have your accuracy tested because your followers are quite convinced that you’re extremely accurate, and it’s pretty much a game you can only lose.
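One standard way of “keeping score” on probabilistic predictions, and the kind of scoring rule Tetlock’s research relies on, is the Brier score: the mean squared gap between the probabilities a forecaster assigned and what actually happened. A minimal Python sketch, with two invented pundits and invented forecasts:

```python
# Brier score: mean squared error between probabilistic forecasts and outcomes.
# Lower is better; always saying 50-50 scores exactly 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if event happened, else 0."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident pundit vs. a cautious one, scored on the same four events:
hedgehog = [0.9, 0.9, 0.1, 0.95]
fox      = [0.7, 0.6, 0.4, 0.65]
happened = [1,   0,   0,   1]

print(brier_score(hedgehog, happened))  # ~0.208: one bold miss is costly
print(brier_score(fox, happened))       # ~0.183: caveats pay off here
```

A network could publish exactly this number next to each pundit’s name; it rewards calibration rather than boldness.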

DUBNER: Can you imagine? Every time a pundit appeared on TV, the network would list his batting average, right after his name and affiliation. You think that might cut down on blowhard predictions just a little bit? Looking back at what we’ve learned so far, it makes me wonder: maybe the first step toward predicting the future should be to acknowledge our limitations. Or–at the very least–let’s start small. For instance: if I could tell you what kind of music I like, and then you could predict for me some other music I’d want to hear. That actually already exists. It’s called Pandora Radio. Here’s co-founder Tim Westergren.

Tim WESTERGREN: So, what we’ve done is, we’ve broken down recordings into their basic components for every dimension of melody, harmony, and rhythm, and form, and instrumentation, down into kind of the musical equivalent of primary colors.

DUBNER: The Pandora database includes more than a million songs, across every genre that you or I could name. Each song is broken down into as many as 480 musical attributes, almost like genetic code. Pandora’s organizing system is in fact called the “Music Genome Project.” You tell the Pandora website a song you like, and it rummages through that massive genetic database to make an educated guess about what you want to hear next. If you like that song, you press the thumbs-up button, and Pandora takes note.
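Conceptually, this kind of matching treats each song as a vector of attribute scores and recommends the nearest neighbors of the seed song. A toy Python sketch of the idea; the song attribute values here are invented, since the real Music Genome Project scores and weightings are proprietary:

```python
import math

# Toy version of attribute-based song matching. Real Music Genome vectors
# hold up to 480 hand-scored attributes; these five-attribute vectors are
# invented purely for illustration.

SONGS = {
    "Train in Vain":      [0.9, 0.8, 0.2, 0.7, 0.6],
    "Town Called Malice": [0.8, 0.9, 0.3, 0.7, 0.5],
    "Smooth Ballad":      [0.1, 0.2, 0.9, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(seed, songs):
    """Rank every other song by attribute similarity to the seed song."""
    return sorted((s for s in songs if s != seed),
                  key=lambda s: cosine(songs[seed], songs[s]),
                  reverse=True)

print(recommend("Train in Vain", SONGS))
```

A thumbs-up or thumbs-down would then re-weight which attributes matter for that listener, a feedback step this sketch omits.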

WESTERGREN: I wouldn’t make the claim that Pandora can map your emotional persona. And I also don’t think frankly that Pandora can predict a hit because I think it is very hard, it’s a bit of a magic, that’s what makes music so fantastic. So, I think that we know our limitations, but within those limitations I think that we make it much, much more likely that you’re going to find that song that just really touches you.

DUBNER: So Tim, you were good enough to set up a station for me here. It’s called “Train in Vain Radio.” So the song we gave you was “Train in Vain.” So let me open up my radio station here and I’ll hit play and see what you got for me.

[MUSIC PLAYS]

DUBNER: Oh yeah. Yeah, I like them, that’s The Jam, and I like “Town Called Malice,” so I’m going to give it a thumbs-up on my little window here. I think there are a couple more songs in my station here.

[MUSIC PLAYS]

DUBNER: “Television” by Tom Verlaine, he was always too cool for me. I can see why you would think that I would like them, and I appreciate your effort, Mr. Pandora. How about you, were you a “Television” fan?

WESTERGREN: Yeah, yeah. And you know, one thing of course is that the songs are all rooted in guitar riffs.

DUBNER: Yep.

WESTERGREN: There’s a repetitive motif played on the guitar. They have a similar sound, and they’ve got a little twang, and they’re played kind of rambly, a little bit rough; there’s a sort of punk element in there. The vocals have over twenty attributes just for the voice. In this case these are pretty unpolished vocal deliveries.

DUBNER: I got to tell you that even though when this song came up, and I’ve heard this song a few times, and I told you I didn’t like Television very much, this song, I’m kind of digging it now.

WESTERGREN: See, there you go, that’s exactly what we’re trying to do.

DUBNER: So, it’s a really great thing to do, but it’s not really predicting the future the way most people think of it as predicting the future, is it?

WESTERGREN: Well, I certainly wouldn’t have put our mission in the same category as predicting the economy, or, you know, geopolitical futures. But you know, the average American listens to 17 hours of music a week. So, they spend a lot of time doing it, and I think that if we can make that a more enjoyable experience and more personalized, I think maybe we’ll make some kind of meaningful contribution to culture.

DUBNER: So Pandora does a pretty good job of predicting the music you might want to hear, based on what you already know you like. But again, look how much effort that takes — 480 musical attributes! And it’s not really predicting the future, is it? All Pandora does is break down the confirmed musical preferences of one person today and come up with some more music that’ll fulfill that same person’s preferences tomorrow. If we really want to know the future, we probably need to get much more ambitious. We probably need a whole new model. Like, how about prediction markets?

Robin HANSON: A prediction market is basically like a betting market or a speculative market, like orange juice futures or stock markets, things like that. The mechanics are that there’s an asset of some sort that pays off if something’s true, like whether a person wins the presidency or a team wins a sporting contest. And people trade that asset, and the price of that asset becomes then a forecast of whether that claim is likely to be true.

DUBNER: That’s Robin Hanson, an economics professor at George Mason University and an admitted advocate of prediction markets. As Hanson sees it, a prediction market is far more reliable than other forecasting methods because it addresses the pesky incentive problems of the old-time prediction industry.

HANSON: So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, it gives people an incentive to shut up when they don’t know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them vaguely related questions, they’ll typically try to answer them, just because they want to be heard. But in a prediction market most people don’t speak up. Every one of your listeners today had the right to go speak up on orange juice futures yesterday. Every one of you could have gone and said, orange juice futures forecasts are too low or too high, and almost no one did. Why? Because most of you don’t think you know. And that’s just the way we want it. So in most of these prediction markets what we want is the few people who know the best to speak up and everybody else to shut up.
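One concrete way to run such a market is the logarithmic market scoring rule that Hanson himself proposed, in which an automated market maker quotes prices that can be read directly as probabilities. A minimal Python sketch of a yes/no market; the liquidity parameter and the trade below are invented for illustration:

```python
import math

# Minimal logarithmic market scoring rule (LMSR) for a binary question.
# The liquidity parameter b and the example trade are made up.

class BinaryMarket:
    def __init__(self, b=100.0):
        self.b = b                # liquidity: higher b = prices move less per trade
        self.q_yes = 0.0          # outstanding YES shares
        self.q_no = 0.0           # outstanding NO shares

    def cost(self, q_yes, q_no):
        """LMSR cost function; traders pay the change in this quantity."""
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current YES price, readable as the market's implied probability."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Buy YES shares; returns what the trade costs the trader."""
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before

m = BinaryMarket(b=100.0)
print(m.price_yes())              # starts at 0.5: the market knows nothing yet
m.buy_yes(50)                     # an informed trader bets on YES
print(round(m.price_yes(), 3))    # price rises toward that trader's belief
```

Because a trader pays for every price move and collects only if the event happens, being right earns money and being wrong costs it, which is exactly the accountability Hanson describes.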

DUBNER: Prediction markets are flourishing. Some of them are private — a multinational firm might set up an internal market to try to forecast when a big project will be done. And there are for-profit prediction markets like InTrade, based in Dublin, where you can place a bet on, say, whether any country that currently uses the Euro will drop the Euro by the end of the year. (As I speak, that bet has a 15% chance on InTrade.) Here’s another InTrade bet: whether there’ll be a successful WMD terrorist attack anywhere in the world by the end of 2013. (That’s got a 28% chance.) Now that’s starting to sound a little edgy, no? Betting on terrorism? Robin Hanson himself has a little experience in this area, on a U.S. government project he worked on.

HANSON: All right, so back in 2000, DARPA, the Defense Advanced Research Projects Agency, had heard about prediction markets, and they decided to fund a research project. And they basically said, listen, we’ve heard this is useful for other things, we’d like you to show us that this can be useful for the kind of topics we are interested in. Our project was going to be forecasting geopolitical trends in the Middle East. We were going to show that prediction markets could tell you about economic growth, about riots, about perhaps wars, about changes of heads of state … and how these things would interact with each other.

DUBNER: In 2003, just as the project was about to go live, the press heard about it.

HANSON: On Monday morning two senators had a press conference where they declared that DARPA and the military were going to have a betting market on terrorism. And so, there was a sudden burst of media coverage, and by the very next morning the head of the military basically declared before the Senate that this project was dead, and there was nothing more to worry about.

DUBNER: What do you think you — we collectively, you, in particular — would know now about that part of the world, let’s say, if this market had been allowed to take root?

HANSON: Well, I think we would have gotten much earlier warning about the revolutions we just had, if we would have had participants from the Middle East trading in those markets. Not only would we get advanced warning about which things might happen, but also how our actions could affect those. So, for example, the United States just came in on the side of the Libyan rebels, to support them against the Qaddafi regime. What’s the chance that will actually help the situation, as opposed to make it worse?

DUBNER: But give me an example of what you consider among the hardest problems that a prediction market could potentially help solve?

HANSON: Who should — not only who should we elect for president but whether we should go to war here or whether we should begin this initiative? Or should we approve this reform bill for medicine, etc.

DUBNER: So that sounds very logical, very appealing. How realistic is it?

HANSON: Well, it depends on there being a set of customers who want this product. So, you know, if prediction markets have an Achilles heel, it’s certainly the possibility that people don’t really want accurate forecasts.

DUBNER: Prediction markets put a price on accountability. If you’re wrong, you pay, simple as that. Just like the proposed law against the witches in Romania. Maybe that’s what we need more of. Here’s Steve Levitt again:

LEVITT: When there are big rewards to people who make predictions and get them right, and there are zero punishments for people who make bad predictions because they’re immediately forgotten, then economists would predict that’s a recipe for getting people to make predictions all the time.

DUBNER: Because the incentives are all encouraging you to make predictions.

LEVITT: Absolutely.

DUBNER: If you get it right there’s an upside, and if you get it wrong there’s almost no downside.

LEVITT: Right, if the flipside were that if I make a false prediction I’m immediately sent to prison for a one-year term, there would be almost no prediction.

DUBNER: And all those football pundits and political pundits and financial pundits wouldn’t be able to wriggle out of their bad calls — saying “My idea was right, but my timing was wrong.” Maybe that’s how everybody does it. That big storm the weatherman called but never showed up? “Oh, it happened all right,” he says, “but two states over.” Or how about those predictions for the End of the World — the Apocalypse, the Rapture, all that? “Well,” they say, “we prayed so hard that God decided to spare us.”

Remember back in May, when an 89-year-old preacher named Harold Camping declared that the Earth would be destroyed at 5:59 p.m. on a Saturday, and only the true believers would survive? I remember it very well because my 10-year-old son was petrified. I tried telling him that Camping was a kook — that anybody can say pretty much anything they want about the future. It didn’t help; he couldn’t get to sleep at night.

And then the 21st came and went and he was psyched. “I knew it all along, Dad,” he said.

Then I asked him what he thought should happen to Harold Camping, the false Doomsday prophet. “Oh, that’s easy,” he said. “Off with his head!”

My son is not a bloodthirsty type. But he’s not a turkey either.

Should Bad Predictions Be Punished? (Freakonomics.com)

SUZIE LECHTENBERG

08/09/2011 | 8:33 pm

Government corn predictions are based on the work of people like Phil Friedrichs, gathering data in a corn field in Hiawatha, Kansas. (Photo: Stephen Koranda)

What do Wall Street forecasters and Romanian witches have in common? They usually get away, scot-free, with making bad predictions. Our world is awash in poor prediction — but for some reason, we can’t stop, even though accuracy rates often barely beat a coin toss.

But then there’s the U.S. Department of Agriculture’s crop forecasting. Predictions covering a big crop like corn (U.S. farmers have planted the second largest crop since WWII this year) usually fall within five percent of the actual yield. So how do they do it? Every year, the U.S.D.A. sends thousands of enumerators into cornfields across the country where they inspect the plants, the conditions, and even “animal loss.”

This week on Marketplace, Stephen J. Dubner and Kai Ryssdal talk about the supply and demand of predictions. You’ll hear from Joseph Prusacki, the head of the U.S.D.A.’s Statistics Division, who’s gearing up for his first major crop report of 2011 (the street is already “sweating” it); Phil Friedrichs, who collects cornfield data for the USDA; and our trusted economist and Freakonomics co-author Steven Levitt.

We’ll also hear from journalist Vlad Mixich in Bucharest, who tells us why those Romanian witches might not be getting away with bad fortune telling for much longer.