Tag archive: Uncertainty

FORGOTTEN STORIES OF BUENOS AIRES: A MAN CLAIMED TO HAVE INVENTED THE RAIN MACHINE

It happened on January 2, 1939, when an engineer named Juan Baigorri assured the director of Meteorology that he would make it rain over the city. And it rained.

Héctor Gambini. FROM THE CLARÍN NEWSROOM.

Monday, June 17, 2002

“As a response to the censure of my procedure, I am giving away, through Crítica, a rainfall over Buenos Aires on January 2, 1939.” The statement ran in the paper at the end of 1938 and was a public challenge to the director of the National Meteorology office, for whom its author was nothing more than a fraud: a provocateur of an engineer who claimed to have invented a machine for making rain.

When January 1 arrived, the challenge was so present in porteños’ minds that they clinked glasses in the small hours with their eyes fixed on the clear sky. The day was so hot and humid that even sitting under the grape arbor watching the scrawny clouds drift over Buenos Aires was a tiring form of entertainment. But night came, and nothing.

On the morning of the 2nd, the city went back to work. And nothing. Not a trace of rain. But there was not even enough wind to stir a rose petal. And the sickly little white clouds of the previous afternoon were taking on body and color. First leaden gray. Then shading toward black. Darker and darker. Until a whisper of a breeze appeared out of nowhere, carrying a breath of suspended humidity. Droplets too weightless even to reach the ground. Then finer droplets behind them that were already touching the asphalt. And then drops fat as gnocchi, drawing patterns in the forming puddles. Right away, an electrical storm and a violent downpour. A cataract pouring from the sky while Crítica stopped its presses to come out at noon with the lead headline of its fifth edition, in catastrophe-sized type: “As Baigorri predicted, it rained today,” beneath a kicker reporting what had just happened in Buenos Aires: “Baigorri got three million people to turn their eyes to the sky.”

This Baigorri had been born in Entre Ríos at the end of the previous century. The son of a military officer who was a friend of General Roca, he came to Buenos Aires to attend secondary school at the Colegio Nacional. After graduating he traveled to Italy to study geophysics and earned his engineering degree at the University of Milan.

In those years, the early 1930s, he began traveling the world under contract to various oil companies. He worked in several countries in Europe, Asia and Africa, and also in the United States, from where he returned under contract to YPF.

He settled in Caballito with his wife and son. Along with the family’s luggage, he had a device with extendable antennas brought from the airport, which he guarded jealously in a closet. “I am more or less adapted to Buenos Aires, but there is a lot of humidity,” he complained.

One morning he made up his mind. He took some instruments and used them to measure the humidity across the city’s neighborhoods. He stopped in front of a house at Araujo and Falcón, in Villa Luro. The needles told him it was the highest spot of all the ground he had covered. He bought that house, which had an attic perfect for a laboratory.

There the workings of the strange machine were “developed”: a device that, according to Baigorri, made the sky break into rain whenever he switched it on. In his telling, it worked through an electromagnetic mechanism that concentrated clouds in the device’s area of influence.

It was 1938, and the newspapers were full of the recent suicides of Leopoldo Lugones and Alfonsina Storni, and of the fraud in the parliamentary elections that had President Roberto Ortiz on the verge of resigning. River Plate was inaugurating the Monumental stadium.

Baigorri wanted to prove he could control the rain, and he sought the sponsorship of the Ferrocarril Central Argentino. The railway’s English manager heard the proposal and smiled, maliciously. “And you could do this anywhere?” he asked, stumbling over his Spanish. Baigorri said yes, and the Englishman issued a sarcastic challenge: “Fine, make it rain in Santiago del Estero.”

The engineer set off for the province with his strange machine and an agronomist, who traveled along to check on him. A few days later they returned, and the expert certified that, on a ranch near a town called Estación Pinto, Baigorri had set to work and that eight hours later it rained.

His fame began to grow and traveled with him, by train, back to Buenos Aires. Two journalists from The Times of London even came to interview him. In the opposite corner, the engineer Calmarini, director of Meteorology, declared that it was all a shameless hoax or, at most, the work of chance.

Taking advantage of the controversy, with the subject on everyone’s lips, Crítica went to interview Baigorri. Out of that came the challenge for January 2. When Meteorology kept silent, the engineer raised the stakes: he sent the national official an umbrella as a gift, with a card attached: “So you can use it on January 2.” That was the day porteños stayed up to watch the sky, waiting for the rain.

Baigorri began traveling through the interior, “making it rain” with his machine in various towns, with mixed success.

In 1951 he served as an unpaid adviser to the Ministry of Technical Affairs. The following year he dusted off his old invention and traveled to La Pampa. He arrived, switched on the battery, and it began to rain, though by then people doubted his merits: “It was going to rain anyway,” they said.

Baigorri withdrew into a long silence. By then a widower, he spent hours in the Villa Luro attic. Leonor, the woman who lives in that house today, told Clarín: “Every time it rained, people would gather around the house and stare up at the attic.” It was there that Baigorri refused to receive an emissary who claimed to come on behalf of an American businessman wanting to buy the formula. “My invention is Argentine and it will be for the exclusive benefit of Argentines,” he answered.

Old and alone, he sold the house and moved in with a French friend, who lent him a room in an apartment. He died in the autumn of 1972, exactly 30 years ago. He was 81 and had arrived at the hospital alone, with bronchial trouble.

No one ever heard of the strange machine with the antennas again. Nor whether Baigorri left a secret successor to switch it on as a tribute during his own funeral: as he was being buried, in the Chacarita cemetery, it started to pour.

MIT Predicts That World Economy Will Collapse By 2030 (POPSCI)

By Rebecca Boyle – Posted 04.05.2012 at 4:30 pm

Image: Crowds and haze in Shanghai (Jeremy Vandel via Flickr)

Forty years after its initial publication, a study called The Limits to Growth is looking depressingly prescient. Commissioned by an international think tank called the Club of Rome, the 1972 report found that if civilization continued on its path toward increasing consumption, the global economy would collapse by 2030. Population losses would ensue, and things would generally fall apart.

The study was — and remains — nothing if not controversial, with economists doubting its predictions and decrying the notion of imposing limits on economic growth. Australian researcher Graham Turner has examined its assumptions in great detail during the past several years, and apparently his latest research falls in line with the report’s predictions, according to Smithsonian Magazine. The world is on track for disaster, the magazine says.

The study, initially completed at MIT, relied on several computer models of economic trends and estimated that if things didn’t change much, and humans continued to consume natural resources apace, the world would run short at some point. Oil would peak (some argue it already has) before sliding down the other side of the bell curve, while demand for food and services would only continue to rise. Turner says real-world data from 1970 to 2000 tracks with the study’s draconian predictions: “There is a very clear warning bell being rung here. We are not on a sustainable trajectory,” he tells Smithsonian.
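The actual World3 model behind the report tracks population, capital, food, pollution, and non-renewable resources through coupled feedback loops, none of which appear in this article. Purely as an illustrative sketch of the overshoot-and-decline dynamic the study warns about, here is a toy model with invented parameters: output grows exponentially while it draws down a finite resource stock, then contracts once the stock runs out.

```python
def toy_limits(resource=50.0, output=1.0, growth=0.03, per_unit=0.1,
               decline=0.1, years=120):
    """Toy overshoot model: output grows ~3%/yr while a finite resource
    stock lasts, then contracts once the stock is exhausted."""
    trajectory = []
    for _ in range(years):
        demand = per_unit * output       # this year's resource draw
        if resource >= demand:
            resource -= demand           # stock still covers the draw
            output *= 1 + growth
        else:
            resource = 0.0               # stock exhausted: output declines
            output *= 1 - decline
        trajectory.append(output)
    return trajectory

traj = toy_limits()
peak_year = traj.index(max(traj))
print(peak_year)                         # growth ends mid-run, well before year 120
print(traj[-1] < max(traj) / 2)          # prints True: output collapses after the peak
```

All the numbers here are invented; the point is only that smooth exponential growth against a fixed stock produces a peak followed by decline, not a plateau.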

Is this impossible to fix? No, according to both Turner and the original study. If governments enact stricter policies and technologies can be improved to reduce our environmental footprint, economic growth doesn’t have to become a market white dwarf, marching toward inevitable implosion. But just how to do that is another thing entirely.

[Smithsonian]

The ‘perfect chaos’ of π (The Guardian)

One of the most important numbers is irrational

GRRLSCIENTIST, by The Guardian

π has fascinated mathematicians, engineers and other people for centuries. It is a mathematical constant, the ratio of a circle’s circumference (C) to its diameter (d): π = C/d.

This also explains why and how the number got its name: the lowercase Greek letter π was first adopted as its abbreviation in 1706 because it is the first letter of the Greek word for “perimeter”, specifically of a circle. The symbol is convenient shorthand because π is an irrational number: it cannot be expressed as a ratio a/b where a and b are integers, which means its decimal expansion never terminates and never settles into an infinitely repeating sequence.

Even though we know that π is approximately 3.14159, we do not know all of its digits: as of October 2011, more than 10 trillion digits of π had been computed, and the occurrence of those digits appears to be very close to statistically random. It is also widely believed, though not yet proven, that any digit sequence of finite length occurs somewhere in π, which is the premise of this fun little π search engine. For example, my 8-digit university student ID number pops up after 3.24 million decimal places. My mobile number pops up after 9.69 million decimal places, although it does not show up within the first 200 million digits of π when I add the country and area codes. Where do your digits pop up in π?
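The digits themselves are easy to generate and search. As a sketch (this is not how the linked search engine works, just one classic way to compute the digits), here is Machin’s 1706 arctangent formula, pi = 16*arctan(1/5) - 4*arctan(1/239), evaluated in fixed-point integer arithmetic:

```python
def machin_pi_digits(ndigits):
    """First `ndigits` digits of pi as a string ("3141..."), via Machin's formula."""
    guard = 10                       # extra working digits to absorb truncation error
    one = 10 ** (ndigits + guard)    # fixed-point scale factor

    def arctan_inv(x):
        # Taylor series for arctan(1/x), scaled by `one`, in pure integer arithmetic.
        power = one // x
        total = power
        x_squared = x * x
        n, sign = 3, -1
        while power:
            power //= x_squared
            total += sign * (power // n)
            sign = -sign
            n += 2
        return total

    # Machin (1706): pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi_scaled // 10 ** guard)[:ndigits]

digits = machin_pi_digits(1000)
print(digits[:10])               # prints 3141592653
print(digits.find("999999"))     # position of the famous run of six 9s (the "Feynman point")
```

Searching a longer expansion for your own phone number works the same way: compute enough digits, then call `digits.find(...)` on the string.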

Many formulae in mathematics, science, and engineering involve π, which makes it one of the most important mathematical constants. But who first rigorously calculated the value for this irrational number and how was it done? This interesting video explores those questions in more detail:

Those of you who enjoy music probably already know that there’s a song about π by the amazing British singer and songwriter, Kate Bush, where she sings its digits.

New Understanding to Past Global Warming Events: Hyperthermal Events May Be Triggered by Warming (Science Daily)

These geological deposits make the Bighorn Basin area of Wyoming ideal for studying the PETM. (Credit: Aaron Diefendorf)

ScienceDaily (Apr. 2, 2012) — A series of global warming events called hyperthermals that occurred more than 50 million years ago had a similar origin to a much larger hyperthermal of the same period, the Paleocene-Eocene Thermal Maximum (PETM), new research has found. The findings, published in Nature Geoscience online on April 1, 2012, represent a breakthrough in understanding the major “burp” of carbon, equivalent to burning the entire reservoir of fossil fuels on Earth, that occurred during the PETM.

“As geologists, it unnerves us that we don’t know where this huge amount of carbon released in the PETM comes from,” says Will Clyde, associate professor of Earth sciences at the University of New Hampshire and a co-author on the paper. “This is the first breakthrough we’ve had in a long time. It gives us a new understanding of the PETM.” The work confirms that the PETM was not a unique event – the result, perhaps, of a meteorite strike – but a natural part of Earth’s carbon cycle.

Working in the Bighorn Basin region of Wyoming, a 100-mile-wide area with a semi-arid climate and stratified rocks that make it ideal for studying the PETM, Clyde and lead author Hemmo Abels of Utrecht University in the Netherlands found the first evidence of the smaller hyperthermal events on land. Previously, the only evidence of such events came from marine records.

“By finding these smaller hyperthermal events in continental records, it secures their status as global events, not just an ocean process. It means they are atmospheric events,” Clyde says.

Their findings confirm that the carbon release of the PETM had the same origin as that of the era’s smaller hyperthermals. In addition, the ratio of warming to carbon release is similar across the PETM and the other hyperthermals, which the authors interpret as an indication that the same mechanism of carbon release was at work in all of them.

“It points toward the fact that we’re dealing with the same source of carbon,” Clyde says.

Working in two areas of the Bighorn Basin just east of Yellowstone National Park – Gilmore Hill and Upper Deer Creek – Clyde and Abels sampled rock and soil to measure carbon isotope records. They then compared these continental records of carbon release to the equivalent marine records already in existence.

During the PETM, temperatures rose between five and seven degrees Celsius in approximately 10,000 years — “a geological instant,” Clyde calls it. This rise in temperature coincided exactly with a massive global change in mammals, as land bridges opened up connecting the continents. Prior to the PETM, North America had no primates, ancient horses, or split-hoofed mammals like deer or cows.

Scientists look to the PETM for clues about the current warming of Earth, although Clyde cautions that “Earth 50 million years ago was very different than it is today, so it’s not a perfect analog.” While scientists still don’t fully understand the causes of these hyperthermal events, “they seem to be triggered by warming,” Clyde says. It’s possible, he says, that less dramatic warming events destabilized these large amounts of carbon, releasing them into the atmosphere where they, in turn, warmed the Earth even more.

“This work indicates that there is some part of the carbon cycle that we don’t understand, and it could accentuate global warming,” Clyde says.

The Social Sciences’ ‘Physics Envy’ (N.Y.Times)

OPINION – GRAY MATTER

Jessica Hagy

By KEVIN A. CLARKE AND DAVID M. PRIMO

Published: April 01, 2012

HOW scientific are the social sciences?

Economists, political scientists and sociologists have long suffered from an academic inferiority complex: physics envy. They often feel that their disciplines should be on a par with the “real” sciences and self-consciously model their work on them, using language (“theory,” “experiment,” “law”) evocative of physics and chemistry.

This might seem like a worthy aspiration. Many social scientists contend that science has a method, and if you want to be scientific, you should adopt it. The method requires you to devise a theoretical model, deduce a testable hypothesis from the model and then test the hypothesis against the world. If the hypothesis is confirmed, the theoretical model holds; if the hypothesis is not confirmed, the theoretical model does not hold. If your discipline does not operate by this method – known as hypothetico-deductivism – then in the minds of many, it’s not scientific.

Such reasoning dominates the social sciences today. Over the last decade, the National Science Foundation has spent many millions of dollars supporting an initiative called Empirical Implications of Theoretical Models, which espouses the importance of hypothetico-deductivism in political science research. For a time, The American Journal of Political Science explicitly refused to review theoretical models that weren’t tested. In some of our own published work, we have invoked the language of model testing, yielding to the pressure of this way of thinking.

But we believe that this way of thinking is badly mistaken and detrimental to social research. For the sake of everyone who stands to gain from a better knowledge of politics, economics and society, the social sciences need to overcome their inferiority complex, reject hypothetico-deductivism and embrace the fact that they are mature disciplines with no need to emulate other sciences.

The ideal of hypothetico-deductivism is flawed for many reasons. For one thing, it’s not even a good description of how the “hard” sciences work. It’s a high school textbook version of science, with everything messy and chaotic about scientific inquiry safely ignored.

A more important criticism is that theoretical models can be of great value even if they are never supported by empirical testing. In the 1950s, for instance, the economist Anthony Downs offered an elegant explanation for why rival political parties might adopt identical platforms during an election campaign. His model relied on the same strategic logic that explains why two competing gas stations or fast-food restaurants locate across the street from each other – if you don’t move to a central location but your opponent does, your opponent will nab those voters (customers). The best move is for competitors to mimic each other.

This framework has proven useful to generations of political scientists even though Mr. Downs did not empirically test it and despite the fact that its main prediction, that candidates will take identical positions in elections, is clearly false. The model offered insight into why candidates move toward the center in competitive elections, and it proved easily adaptable to studying other aspects of candidate strategies. But Mr. Downs would have had a hard time publishing this model today.
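Downs’s strategic logic is essentially Hotelling’s location game. As a small sketch (the uniform electorate on [0, 1] and the discrete grid of platforms are both invented for illustration, not part of the article), iterated best responses drive two candidates from opposite extremes to the median voter:

```python
def share(a, b):
    """Vote share for a candidate at platform a against one at b,
    with voters uniform on [0, 1] voting for the nearer platform."""
    if a == b:
        return 0.5
    cut = (a + b) / 2            # the indifferent voter sits at the midpoint
    return cut if a < b else 1 - cut

def best_response(b, grid):
    """Platform on the grid that maximizes vote share against an opponent at b."""
    return max(grid, key=lambda a: share(a, b))

grid = [i / 100 for i in range(101)]   # platforms 0.00, 0.01, ..., 1.00
a, b = 0.1, 0.9                        # candidates start far apart
for _ in range(100):
    a = best_response(b, grid)
    b = best_response(a, grid)
print(a, b)   # prints 0.5 0.5: both converge on the median voter
```

The same payoff structure explains the gas stations: the best response to an opponent at 0.3 is to stand at 0.31, just on the larger side of the market, and chasing that advantage pulls both players to the center.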

Or consider the famous “impossibility theorem,” developed by the economist Kenneth Arrow, which shows that no single voting system can simultaneously satisfy several important principles of fairness. There is no need to test this model with data – in fact, there is no way to test it – and yet the result offers policy makers a powerful lesson: there are unavoidable trade-offs in the design of voting systems.

To borrow a metaphor from the philosopher of science Ronald Giere, theories are like maps: the test of a map lies not in arbitrarily checking random points but in whether people find it useful to get somewhere.

Likewise, the analysis of empirical data can be valuable even in the absence of a grand theoretical model. Did the welfare reform championed by Bill Clinton in the 1990s reduce poverty? Are teenage employees adversely affected by increases in the minimum wage? Do voter identification laws disproportionately reduce turnout among the poor and minorities? Answering such questions about the effects of public policies does not require sweeping theoretical claims, just careful attention to the data.

Unfortunately, the belief that every theory must have its empirical support (and vice versa) now constrains the kinds of social science projects that are undertaken, alters the trajectory of academic careers and drives graduate training. Rather than attempt to imitate the hard sciences, social scientists would be better off doing what they do best: thinking deeply about what prompts human beings to behave the way they do.

Kevin A. Clarke and David M. Primo, associate professors of political science at the University of Rochester, are the authors of “A Model Discipline: Political Science and the Logic of Representations.”

Conservatives’ Trust in Science at All-Time Low (Slate/L.A.Times)

A new study suggests a growing partisan divide as science plays an increasing role in policy debates. Posted Thursday, March 29, 2012, at 1:29 PM ET

A new report suggests the number of conservatives who trust science is at an all-time low. Photo by Aude Guerrucci-Pool/Getty Images.

This may explain some of the rhetoric we’ve been hearing in GOP stump speeches of late: The number of conservatives who say they have a “great deal” of trust in science has fallen to 35 percent, down 28 points from the mid-1970s, according to a new academic paper.

The study, which was published Thursday in the American Sociological Review, found that liberal and moderate attitudes toward the topic have remained mostly unchanged since national pollsters first began posing the question in 1974, back when roughly half of all liberals and conservatives expressed significant trust in science.

The peer-reviewed research paper explains: “These results are quite profound because they imply that conservative discontent with science was not attributable to the uneducated but to rising distrust among educated conservatives.”

The man behind the study, UNC Chapel Hill’s Gordon Gauchat, says the change comes as conservatives have rebelled against the so-called “elite.”

“It kind of began with the loss of Barry Goldwater and the construction of Fox News and all these [conservative] think tanks. The perception among conservatives is that they’re at a disadvantage, a minority,” Gauchat explained in an interview with U.S. News. “It’s not surprising that the conservative subculture would challenge what’s viewed as the dominant knowledge production groups in society—science and the media.”

The sociologist suggested that the shift is also likely tied to science’s changing role in the national dialogue. In the middle of the 20th century, science was tied closely with NASA and the Department of Defense, but now it more frequently comes up when the conversation shifts to the environment and government regulations.

“Science has become autonomous from the government—it develops knowledge that helps regulate policy, and in the case of the EPA, it develops policy,” he said. “Science is charged with what religion used to be charged with—answering questions about who we are and what we came from, what the world is about. We’re using it in American society to weigh in on political debates, and people are coming down on a specific side.”

You can read more of the interview at U.S. News, a more detailed recap of the study at the Los Angeles Times, or check out the full paper here.

Conservatives’ trust in science has declined sharply

Since 1974, when conservatives had the highest trust in science, their confidence has dropped precipitously, an American Sociological Review study concludes.

By John Hoeffel – Los Angeles Times, March 29, 2012

As the Republican presidential race has shown, the conservatives who dominate the primaries are deeply skeptical of science — making Newt Gingrich, for one, regret he ever settled onto a couch with Nancy Pelosi to chat about global warming.

A study released Thursday in the American Sociological Review concludes that trust in science among conservatives and frequent churchgoers has declined precipitously since 1974, when a national survey first asked people how much confidence they had in the scientific community. At that time, conservatives had the highest level of trust in scientists.

Confidence in scientists has declined the most among the most educated conservatives, the peer-reviewed research paper found, concluding: “These results are quite profound because they imply that conservative discontent with science was not attributable to the uneducated but to rising distrust among educated conservatives.”

“That’s a surprising finding,” said the report’s author, Gordon Gauchat, in an interview. He has a doctorate in sociology and is a postdoctoral fellow at the University of North Carolina at Chapel Hill.

To highlight the dramatic impact conservative views of science have had on public opinion, Gauchat pointed to results from Gallup, which found in 2010 that just 30% of conservatives believed the Earth was warming as a result of greenhouse gases, versus 50% two years earlier. In contrast, the poll showed almost no change in the opinion of liberals, with 74% believing in global warming in 2010 versus 72% in 2008.

Gauchat suggested that the most educated conservatives are most acquainted with views that question the credibility of scientists and their conclusions. “I think those people are most fluent with the conservative ideology,” he said. “They have stronger ideological dispositions than people who are less educated.”

Chris Mooney, who wrote “The Republican War on Science,” which Gauchat cites, agreed. “If you think of the reasons behind this as nature versus nurture, all this would be nurture, that it was the product of the conservative movement,” he said. “I think being educated is a proxy for people paying attention to politics, and when they do, they tune in to Fox News and blogs.”

Gauchat also noted the conservative movement had expanded substantially in power and influence, particularly during the presidencies of Ronald Reagan and George W. Bush, creating an extensive apparatus of think tanks and media outlets. “There’s a whole enterprise,” he said.

Science has also increasingly come under fire, Gauchat said, because its cultural authority and its impact on government have grown. For years, he said, the role science played was mostly behind the scenes, creating better military equipment and sending rockets into space.

But with the emergence of the Environmental Protection Agency, for example, scientists began to play a crucial and visible role in developing regulations.

Jim DiPeso, policy director of Republicans for Environmental Protection, has been trying to move his party to the center on issues such as climate change, but he said many Republicans were wary of science because they believed it was “serving the agenda of the regulatory state.”

“There has been more and more resistance to accepting scientific conclusions,” he said. “There is concern about what those conclusions could lead to in terms of bigger government and more onerous regulation.”

The study also found that Americans with moderate political views have long been the most distrustful of scientists, but that conservatives now are likely to outstrip them.

Moderates are typically less educated than either liberals or conservatives, Gauchat said. “These folks are just generally alienated from science,” he said, describing them as the “least engaged and least knowledgeable about basic scientific facts.”

The study was based on results from the General Social Survey, administered between 1974 and 2010 by the National Opinion Research Center at the University of Chicago.

Gauchat, who has been studying public attitudes toward science for about eight years, has applied for a National Science Foundation grant to investigate why trust in science has waned. He plans to ask a battery of questions, including some focused on scientific controversies, such as those over vaccines and genetically modified foods, to try to understand what makes conservatives and moderates so distrustful.

“It’s not one simple thing,” he said.

john.hoeffel@latimes.com

Neela Banerjee in the Washington bureau contributed to this report.

Why The Future Is Better Than You Think (Reason.com)

Sharif Christopher Matar | March 15, 2012

Can a Masai Warrior in Africa today communicate better than Ronald Reagan could? If he’s on a cell phone, Peter Diamandis says he can.

Peter Diamandis is the founder and chairman of the X Prize Foundation, which offers big cash prizes “to bring about radical breakthroughs for the benefit of humanity.” Reason’s Tim Cavanaugh sat down with Peter to talk about his new book Abundance and why he thinks we live in an “incredible time” that almost no one recognizes as such. Peter argues that powerful human forces, combined with technological advances, are transforming the world for the better.

“The challenge is that the rate of innovation is so fast…” Peter says, “the government can’t keep up with it.” If the government tries to play catch-up with regulations and policy, the technology will just go overseas. Meanwhile, innovation means that “food, water, housing, health, education is getting better and better.” Peter “hopes we are not going to be in a situation where entrenched interests are preventing the consumer from having better health care.”

Filmed by Sharif Matar and Tracy Oppenheimer. Edited by Sharif Matar

Americans Listening to Politicians, Not Climate Scientists (Ars Technica/Wired)

By Scott K. Johnson, Ars Technica
February 27, 2012

US public opinion about climate change has been riding a roller coaster over the past decade. After signs of growing acceptance and emphasis around 2006 and 2007, a precipitous decline brought us back to where we started, with fully a quarter of the public not even thinking that the planet has warmed up. It’s not shocking that concerns about climate change would take a back seat to the economic recession, but that doesn’t explain why some are skeptical that global warming is even real.

Since economic turmoil does not extend to past temperature measurements, it seems clear that public acceptance of the data depends at least partly on something other than the data itself. So the natural question is — what’s driving public opinion? Why the big shifts? The answer to that question may hold the key to the US’ response to the changing climate.

A recent study published in Climatic Change evaluates the impact of several potential opinion drivers: extreme weather events, public access to scientific information, media coverage, advocacy efforts, and the influence of political leaders. These are compared to a compilation of 74 surveys performed by six different organizations. The polls took place between 2002 and 2010, and provide a total of 84,000 responses. The researchers used all the questions that asked respondents to rate their concern about climate change to calculate a “climate change threat index” that could be tracked through time.

For extreme weather events, the researchers used NOAA’s Climate Extremes Index, which includes things like unusually high temperatures and precipitation events, as well as severe droughts. To evaluate public access to scientific information, they tracked the number of climate change papers published in Science, major assessments like the 2007 IPCC report, and climate change articles published in popular science magazines.

Similarly, media coverage was tracked with a simple count of stories appearing on broadcast evening news shows and in several leading periodicals. Advocacy was measured using a number of “major environmental” and “conservative magazines.” In addition, they captured the influence of Al Gore’s An Inconvenient Truth (a favorite target of climate contrarians) using the number of times it was mentioned in the New York Times.

Finally, they counted up congressional press releases, hearings, and votes on bills related to climate change. For comparison, they also looked at the influence of unemployment, GDP, oil prices, and the number of deaths associated with the wars in Iraq and Afghanistan.

The researchers compared each time series to their climate change threat index. They found no statistically significant correlation with extreme weather events, papers in Science (hardly shocking — when was the last time you found Science in the waiting room at the dentist’s?), or oil prices. There was a minor correlation with major scientific assessments.

While articles in popular science magazines and advocacy efforts (especially An Inconvenient Truth) appeared to have an effect, news media coverage mattered mainly as a channel for statements from political leaders, what the researchers refer to as “elite cues.” That’s where the meat of this story lies. Those elite cues were the most significant driver of public opinion, followed by economic factors.
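At its core, the comparison the study performs asks how strongly each driver’s time series co-moves with the threat index. As a toy sketch with entirely invented numbers (the actual paper uses time-series regression across many drivers at once, not a bare pairwise correlation), the idea looks like this:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented quarterly series, for illustration only.
threat_index = [3.1, 3.0, 3.4, 3.6, 3.9, 3.7, 3.3, 3.0]   # stand-in opinion index
elite_cues   = [10, 9, 14, 16, 19, 17, 12, 9]             # e.g. partisan statements
extreme_wx   = [0.2, 0.9, 0.1, 0.5, 0.3, 0.8, 0.4, 0.6]   # stand-in weather index

print(pearson(threat_index, elite_cues) > 0.9)       # prints True: strong co-movement
print(abs(pearson(threat_index, extreme_wx)) < 0.3)  # prints True: essentially noise
```

In this fabricated example the opinion index tracks the elite-cue series almost perfectly while the weather series is uncorrelated, which is the qualitative pattern the study reports for the real data.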

The researchers note that around the time when public acceptance of climate change reached its peak, political bipartisanship on the subject also hit a high point. Republican Senator and (then) presidential candidate John McCain was pushing for climate legislation, and current presidential candidate Newt Gingrich filmed a commercial together with an unlikely partner — Democratic Congresswoman Nancy Pelosi — urging action.

And then things changed. The economy went pear-shaped and Republican rhetoric shifted into attack mode on climate science. Gingrich’s commercial with Pelosi offers one example — opposing candidates in the presidential race have used its mere existence as a weapon against him, and Gingrich has tried to distance himself, calling it “the dumbest thing I’ve done in the last four years.”

Flipping this around, it suggests that serious action on climate change depends on a healthy economy and bipartisan agreement among politicians. If that leaves you pondering a future connection between global warming legislation and icy conditions in hell, the cooperation in 2007 indicates it isn’t totally unthinkable.

In addition, recent polling has shown that acceptance of climate change is, once again, climbing among those who identify as moderate Republicans. It’s unclear how to interpret that in terms of this study’s conclusions. Is economic optimism having an impact, have Republican presidential candidates alienated moderates in the party, or is something totally different responsible?

While it’s certainly not surprising, it’s discouraging to see how little effect scientific outreach efforts and reports have had on public opinion. Even on simple questions like “Is there solid evidence that the Earth has warmed?” — it’s politicians that are driving public opinion, not scientists or the data they produce.

Image: Hurricane Ike in 2008. (NOAA)

The Inside Story on Climate Scientists Under Siege (Wired/The Guardian)

By Suzanne Goldenberg, The Guardian
February 17, 2012

It is almost possible to dismiss Michael Mann’s account of a vast conspiracy by the fossil fuel industry to harass scientists and befuddle the public. His story of that campaign, and his own journey from naive computer geek to battle-hardened climate ninja, seems overwrought, maybe even paranoid.

But now comes the unauthorized release of documents showing how a libertarian thinktank, the Heartland Institute, which has in the past been supported by Exxon, spent millions on lavish conferences attacking scientists and concocting projects to counter science teaching for kindergarteners.

Mann’s story of what he calls the climate wars, the fight by powerful entrenched interests to undermine and twist the science meant to guide government policy, starts to seem pretty much on the money. He’s telling it in a book out on March 6, The Hockey Stick and the Climate Wars: Dispatches From the Front Lines.

“They see scientists like me who are trying to communicate the potential dangers of continued fossil fuel burning to the public as a threat. That means we are subject to attacks, some of them quite personal, some of them dishonest,” Mann said in an interview conducted in and around State College, home of Pennsylvania State University, where he is a professor.

It’s a brilliantly sunny day, and the light snowfall of the evening before is rapidly melting.

Mann, who seems fairly relaxed, has just spoken to a full-capacity, uniformly respectful and supportive crowd at the university.

It’s hard to square the surroundings with the description in the book of how an entire academic discipline has been made to feel under siege, but Mann insists that it is a given.

“It is now part of the job description if you are going to be a scientist working in a socially relevant area like human-caused climate change,” he said.

He should know. For most of his professional life, he has been at the center of those wars, thanks to a paper he published with colleagues in the late 1990s showing a sharp upward movement in global temperatures in the last half of the 20th century. The graph became known as the “hockey stick”.

If the graph was the stick, then its publication made Mann the puck. Though other prominent scientists, such as Nasa’s James Hansen and more recently Texas Tech University’s Katharine Hayhoe, have also been targeted by contrarian bloggers and thinktanks demanding their institutions turn over their email record, it’s Mann who’s been the favorite target.

He has been regularly vilified on Fox News and contrarian blogs, and by Republican members of Congress. The attorney general of Virginia has been fighting in the courts to get access to Mann’s email from his earlier work at the University of Virginia. And then there is the high volume of hate mail, the threats to him and his family.

“A day doesn’t go by when I don’t have to fend off some attack, some specious criticism or personal attack,” he said. “Literally a day doesn’t go by where I don’t have to deal with some of the nastiness that comes out of a campaign that tries to discredit me, and thereby in the view of our detractors to discredit the entire science of climate change.”

By now he and other climate scientists have been in the trenches longer than the U.S. army has been in Afghanistan.

And Mann has proved a willing combatant. He has not gone so far as Hansen, who has been arrested at the White House protesting against tar sands oil and in West Virginia protesting against coal mining. But he spends a significant part of his working life now blogging and tweeting in his efforts to engage with the public – and fending off attacks.

On the eve of his talk at Penn State, a coal industry lobby group calling itself the Common Sense Movement/Secure Energy for America put up a Facebook page demanding the university disinvite their own professor from speaking, and denouncing Mann as a “disgraced academic” pursuing a radical environmental agenda. The university refused. Common Sense appeared to have dismantled the Facebook page.

But Mann’s attackers were merely regrouping. A hostile blogger published a link to Mann’s Amazon page, and his opponents swung into action, denouncing the book as a “fairy tale” and climate change as “the greatest scam in human history.”

It was not the life Mann envisaged when he began work on his postgraduate degree at Yale. All Mann knew then was that he wanted to work on big problems that resonated outside academia. At heart, he said, he was like one of the amiable nerds on the television show The Big Bang Theory.

“At that time I wanted nothing more than just to bury my head in my computer and study data and write papers and write programs,” he said. “That is the way I was raised. That is the culture I came from.”

What happened instead was that the “hockey stick” graph, because it so clearly represented what had happened to the climate over the course of hundreds of years, itself became a proxy in the climate wars. (Mann’s reconstruction of temperatures over the last millennium itself used proxy records from tree rings and coral).

“I think because the hockey stick became an icon, it’s been subject to the fiercest of attacks really in the whole science of climate change,” he said.

The U.N.’s Intergovernmental Panel on Climate Change produced a poster-sized version of the graph for the launch of its climate change report in 2001.

Those opposed to climate science began accusing Mann of overlooking important data or even manipulating the records. None of the allegations were ever found to have substance. The hockey stick would eventually be confirmed by more than 10 other studies.

Mann, like other scientists, was just not equipped to deal with the media barrage. “It took the scientific community some time I think to realize that the scientific community is in a street fight with climate change deniers and they are not playing by the rules of engagement of science. The scientific community needed some time to wake up to that.”

By 2005, when Hurricane Katrina drew Americans’ attention to the connection between climate change and coastal flooding, scientists were getting better at making their case to the public. George Bush, whose White House in 2003 deleted Mann’s hockey stick graph from an environmental report, began talking about the need for biofuels. Then Barack Obama was elected on a promise to save a planet in peril.

But as Mann lays out in the book, the campaign to discredit climate change continued to operate, largely below the radar until November 2009 when a huge cache of email from the University of East Anglia’s Climatic Research Unit was released online without authorization.

Right-wing media and bloggers used the emails to discredit an entire body of climate science. They got an extra boost when an embarrassing error about melting of Himalayan glaciers appeared in the U.N.’s IPCC report.

Mann now admits the climate community took far too long to realize the extent of the public relations debacle. Aside from the glacier error, the science remained sound. But Mann said now: “There may have been an undue amount of complacency among many in the scientific community.”

Mann, who had been at the center of so many debates in America, was at the heart of the East Anglia emails battle too.

Though he has been cleared of any wrongdoing, Mann does not always come off well in those highly selective exchanges of email released by the hackers. In some of the correspondence with fellow scientists, he is abrupt, dismissive of some critics. In our time at State College, he mentions more than once how climate scientists are a “cantankerous” bunch. He has zero patience, for example, for the polite label “climate skeptic” for the network of bloggers and talking heads who try to discredit climate change.

“When it comes to climate change, true skepticism is two-sided. One-sided skepticism is no skepticism at all,” he said. “I will call people who deny the science deniers … I guess I won’t be deterred by the fact that they don’t like the use of that term and no doubt that just endears me to them further.”

“It’s frustrating of course because a lot of us would like to get past this nonsensical debate and on to the real debate to be had about what to do,” he said.

But he said there are compensations in the support he gets from the public. He moves over to his computer to show off a web page: I ❤ climate scientists. He’s one of three featured scientists. “It only takes one thoughtful email of support to offset a thousand thoughtless attacks,” Mann said.

And although there are bad days, he still seems to believe he is on the winning side.

Across America, this is the third successive year of weird weather. The U.S. department of agriculture has just revised its plant hardiness map, reflecting warming trends. That is going to reinforce scientists’ efforts to cut through the disinformation campaign, Mann said.

“I think increasingly the campaign to deny the reality of climate change is going to come up against that brick wall of the evidence being so plain to people whether they are hunters, fishermen, gardeners,” he said.

And if that doesn’t work then Mann is going to fight to convince them.

“Whether I like it or not I am out there on the battlefield,” he said. But he believes the experiences of the last decade have made him, and other scientists, far better fighters.

“Those of us who have had to go through this are battle-hardened and hopefully the better for it,” he said. “I think you are now going to see the scientific community almost uniformly fighting back against this assault on science. I don’t know what’s going to happen in the future, but I do know that my fellow scientists and I are very ready to engage in this battle.”

Video: James West, The Climate Desk

Original story at The Guardian.

Newly Discovered Space Rock Is Headed Toward Earth, Estimated Time of Arrival 2040 (POPSCI.com)

The UN is figuring out how to ward off a potential collision

By Clay Dillow
Posted 02.27.2012 at 1:34 pm

Image: Earth, and the near-Earth objects that threaten it. (ESA/P. Carril)

All eyes are on the asteroid Apophis, but a new threat, just 460 feet wide, dominated the conversation at a recent meeting of the UN Action Team on near-Earth objects (NEOs). Known as 2011 AG5, the asteroid could well be on a collision course with Earth in 2040, and some are already calling on scientists to figure out how to deflect it.

Discovered early last year, 2011 AG5 is still somewhat of a mystery to astronomers: they have a pretty good idea how big it is, but have only been able to observe it for roughly half an orbit. That makes it difficult to project the object’s path over time, and to verify whether it may be a threat in 2040. Ideally, researchers would like to observe at least two full orbits before making projections about an NEO’s path, but that hasn’t stopped several in the astronomy community from fixing odds on an impact in 2040.

Specifically, those odds currently stand at 1 in 625 for an impact on Feb. 5, 2040. But like most odds, these are fluid. From 2013 to 2016, the asteroid will be observable from the ground, and that will give NEO watchers a better idea of its orbit and future trajectory. If those observations don’t vastly diminish the odds of an impact, there should still be time to do something about it before its 2023 keyhole pass.

Like Apophis, which may or may not impact Earth in 2036, 2011 AG5 has a keyhole: a region in space near Earth through which it would have to travel if it is indeed going to hit us on its next pass. It will make its keyhole pass on its approach near Earth in February 2023, when it comes within just 0.02 astronomical units of Earth (roughly 1.86 million miles). NASA’s Jet Propulsion Laboratory estimates 2011 AG5’s keyhole is about 62 miles wide: not big at all by astronomical standards, but bigger than Apophis’s.
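The figures quoted above are easy to check (a quick sketch; the distances and odds are the article's, the unit conversion is the standard astronomical-unit definition):

```python
AU_MILES = 92_955_807.3  # one astronomical unit in miles (derived from the IAU value)

# 2011 AG5's closest approach during its February 2023 keyhole pass, per the article
keyhole_pass_au = 0.02
keyhole_pass_miles = keyhole_pass_au * AU_MILES
print(f"0.02 AU = {keyhole_pass_miles / 1e6:.2f} million miles")  # about 1.86 million miles

# The quoted odds of 1 in 625 for an impact on Feb. 5, 2040, as a probability
impact_odds = 1 / 625
print(f"Impact probability = {impact_odds:.4f} ({impact_odds * 100:.2f}%)")
```

In other words, the "1 in 625" headline figure corresponds to a 0.16% chance of impact, and the 0.02 AU approach distance works out to the article's 1.86 million miles, roughly eight times the distance to the Moon.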

If 2011 AG5 does look like it is going to pass through that keyhole after the 2013-2016 observations, scientists will have a few years to figure out how to alter its orbit and push it outside of the keyhole in 2023, thus averting disaster 17 years later. Such a deflection mission could be good practice. Apophis will make a run at its keyhole in 2029.


The Sick Planet [O planeta doente] (culturaebarbarie.org)

by Guy Debord

“Pollution” is in fashion today, in exactly the same way as revolution: it takes hold of the whole life of society and is represented illusorily in the spectacle. It is tedious chatter in a plethora of erroneous and mystifying writings and speeches, and, in fact, it grabs everyone by the throat. It displays itself everywhere as ideology and gains ground everywhere as a real process. These two antagonistic movements, the supreme stage of commodity production and the project of its total negation, equally rich in contradictions within themselves, grow together. They are the two sides through which a single historical moment manifests itself, long awaited and often foreseen under inadequate partial figures: the impossibility of the continued functioning of capitalism.

The epoch that possesses all the technical means to alter the conditions of life on Earth is also the epoch that, through the same separate technical and scientific development, possesses all the means of control and of mathematically indubitable forecasting to measure in advance, with exactness, where the automatic growth of the alienated productive forces of class society is leading, and by what date: that is, to measure the rapid degradation of the very conditions of survival, in the most general and most trivial sense of the term.

While backward-looking imbeciles still hold forth on, and against, an aesthetic critique of all this, and believe themselves lucid and modern, wedded to their century, when they proclaim that the motorway or Sarcelles has a beauty one should prefer to the discomfort of the “picturesque” old quarters, or gravely observe that the population as a whole eats better despite nostalgia for good cooking, the problem of the degradation of the total natural and human environment has already ceased entirely to pose itself on the plane of so-called old quality, aesthetic or otherwise, and has become radically the very problem of the material possibility of existence of a world that pursues such a movement. This impossibility is in fact already perfectly demonstrated by all of separate scientific knowledge, which now debates only its due date, and the palliatives that, if firmly applied, could regulate it superficially. Such a science can only accompany toward destruction the world that produced it and sustains it; but it is obliged to do so with its eyes open. It thus shows, at a caricatural level, the uselessness of knowledge without use.

One measures and extrapolates with excellent precision the rapid increase of the chemical pollution of the breathable atmosphere and of the water of rivers, lakes, and even oceans; the irreversible increase of radioactivity accumulated by the peaceful development of nuclear energy; the effects of noise; the invasion of space by plastic products that may demand an eternity of universal dumping; the mad birth rate; the senseless adulteration of food; the urbanistic leprosy that spreads ever wider over what were once town and countryside; as well as mental illness, including the neurotic phobias and hallucinations that will soon inevitably multiply around the theme of pollution itself, whose alarming image is displayed everywhere, and suicide, whose rates of expansion already exactly intersect with those of the construction of such an environment (to say nothing of the effects of atomic or bacteriological war, whose means hang in place like the sword of Damocles, but obviously remain avoidable).

Thus, while the scale and even the reality of the “terrors of the Year 1000” remain a matter of controversy among historians, the terror of the Year 2000 is as patent as it is well founded; it is, from this moment, a scientific certainty. Yet what is happening is in itself nothing new: it is only the necessary end of the old process. A society ever more sick, but ever more powerful, has everywhere concretely recreated the world as the environment and setting of its sickness: a sick planet. A society that has not yet become homogeneous and that is no longer determined by itself, but ever more by a part of itself that stands above it, has developed a movement of domination of nature that has not, however, dominated itself. Capitalism has finally furnished the proof, by its own movement, that it can no longer develop the productive forces; and this not quantitatively, as many believed they understood, but qualitatively.

For bourgeois thought, however, methodologically, only the quantitative is serious, measurable, effective, while the qualitative is merely the uncertain subjective or artistic decoration of the true real, appraised at its true weight. For dialectical thought, on the contrary, and therefore for history and for the proletariat, the qualitative is the most decisive dimension of real development. This is what capitalism, and we, have ended up demonstrating.

The masters of society are now obliged to speak of pollution, both in order to combat it (for they live, after all, on the same planet as we do; this is the only sense in which one can admit that the development of capitalism has effectively realized a certain fusion of classes) and in order to conceal it, for the simple truth of the present harms and risks is enough to constitute an immense factor of revolt, a materialist demand of the exploited, as entirely vital as was the struggle of the nineteenth-century proletarians for the possibility of eating. After the fundamental failure of all the reformisms of the past, all of which aspired to the definitive solution of the problem of classes, a new reformism is taking shape, obeying the same necessities as its predecessors: to grease the machine and to open new opportunities for profit to the leading firms. The most modern sector of industry throws itself into the various palliatives of pollution as into a new market niche, all the more profitable in that a good part of the capital monopolized by the State is there to be employed and maneuvered. But if this new reformism is guaranteed failure in advance, for exactly the same reasons as the reformisms of the past, it differs radically from them in that it no longer has time on its side.

The development of production has up to now verified itself entirely as the realization of political economy: the development of poverty, which has invaded and spoiled the very milieu of life. The society in which the producers kill themselves with work, and may only contemplate its result, lets them plainly see, and breathe, the general result of alienated labor as a result of death. In the society of the overdeveloped economy, everything has entered the sphere of economic goods, even spring water and the air of the cities; which is to say that everything has become the economic evil, the “complete negation of man” that now reaches its perfect material conclusion. The conflict between the modern productive forces and the relations of production, bourgeois or bureaucratic, of capitalist society has entered its final phase. The production of non-life has pursued its linear and cumulative process ever further; having now crossed a last threshold in its progress, it directly produces death.

The last, avowed, essential function of the developed economy today, throughout a world in which commodity-labor reigns and assures all power to its masters, is the production of jobs. We are far from the “progressive” ideas of the previous century [the nineteenth] about the possible reduction of human labor through the scientific and technical multiplication of productivity, which was supposed to assure, ever more easily, the satisfaction of needs previously acknowledged by all as real, without any fundamental alteration in the quality of the goods that would be available. It is at present in order to produce jobs, even in countrysides emptied of peasants, that is, in order to use human labor as alienated labor, as wage labor, that everything else is done; and thus that the foundations of the life of the species, now more fragile still than the thought of a Kennedy or a Brezhnev, are stupidly threatened.

The old ocean is itself indifferent to pollution; but history is not. History can only be saved by the abolition of commodity-labor. And never has historical consciousness had such need to dominate its world so urgently, for the enemy at its gates is no longer illusion but its death.

When the poor masters of the society whose deplorable conclusion we now witness, far worse than all the condemnations the most radical utopians once hurled at it, must at present acknowledge that our environment has become social, that the management of everything has become a directly political matter, down to the grass of the fields and the possibility of drinking water, down to the possibility of sleeping without too many sleeping pills or of bathing without suffering allergies, at such a moment one must also see that the old specialized politics is forced to acknowledge that it is completely finished.

It is finished in the supreme form of its voluntarism: the totalitarian bureaucratic power of the so-called socialist regimes, for the bureaucrats in power have not shown themselves capable of managing even the preceding stage of the capitalist economy. If they pollute much less (the United States alone produces 50% of the world’s pollution), it is because they are much poorer. They can only, as China does for example, by massing a disproportionate share of their budget of poverty, buy the poor powers’ share of prestige pollution: a few discoveries and refinements in the techniques of thermonuclear war, or more exactly, of the threatening spectacle. So much poverty, material and mental, sustained by so much terrorism, condemns the bureaucracies in power. And what condemns the most modernized bourgeois power is the unbearable result of so much effectively tainted wealth. The so-called democratic management of capitalism, in whatever country, offers only its elections-dismissals which, as has always been seen, never changed anything in the whole, and very little even in the details, in a class society that imagined it could last indefinitely. They change nothing more at the moment when management itself goes mad and pretends to desire, in order to cut short certain secondary though urgent problems, a few vague directives from an alienated and cretinized electorate (U.S.A., Italy, England, France). Specialized observers have always noted, without troubling to explain it, the fact that the voter never changes his “opinion”: this is precisely because he is a voter, one who assumes, for a brief instant, the abstract role designed precisely to prevent him from being for himself and from changing (the mechanism has been demonstrated hundreds of times, both by demystified political analysis and by the explanations of revolutionary psychoanalysis).
The voter changes no more when the world changes ever more precipitately around him, and, as a voter, he would not change even on the eve of the end of the world. Every representative system is essentially conservative, even though the conditions of existence of capitalist society have never been able to be conserved: they modify themselves without interruption, and ever faster, but the decision, which in the end is always the decision to let the process of capitalist production run its course, is left entirely to the specialists of publicity, whether they stand alone in the contest or compete with those who will do the same thing, and indeed announce it openly. Yet the man who votes “freely” for the Gaullists or for the P.C.F., just like the man who votes, under constraint and duress, for a Gomulka, is capable of showing what he truly is the following week, by taking part in a wildcat strike or an insurrection.

The self-proclaimed “struggle against pollution”, in its statist and legalistic aspect, will first create new specializations, ministerial services, posts, bureaucratic promotion. And its efficacy will be entirely in keeping with such means. It can only become a real will by transforming the present productive system at its very roots. And it can only be firmly applied from the moment all its decisions, taken democratically and with full knowledge of the facts by the producers, are at every instant controlled and executed by the producers themselves (ships will infallibly dump their oil into the sea, for example, as long as they are not placed under the authority of real sailors’ soviets). To decide and execute all this, the producers must become adults: they must all seize power.

The scientific optimism of the nineteenth century collapsed on three essential points. First, the claim to guarantee revolution as the happy resolution of existing conflicts (this was the Hegelian-leftist and Marxist illusion; the least noticed in the bourgeois intelligentsia, but the richest and, in the end, the least illusory). Second, the coherent vision of the universe, or even simply of matter. Third, the euphoric and linear feeling for the development of the productive forces. If we master the first point, we will have resolved the third; and we will later know how to make the second our occupation and our game. It is not the symptoms that must be treated, but the disease itself. Today fear is everywhere, and we will only escape it by trusting in our own strength, in our capacity to destroy every existing alienation and every image of the power that has escaped us: by handing everything, except ourselves, over to the sole power of the Workers’ Councils, possessing and reconstructing at every instant the totality of the world; that is, to true rationality, to a new legitimacy.

In the matter of the “natural” and built environment, of the birth rate, of biology, of production, of “madness”, etc., the choice will not be between the festival and unhappiness but, consciously and at every crossroads, between, on one side, a thousand happy or disastrous possibilities, relatively correctable, and, on the other, nothingness. The terrible choices of the near future leave only this alternative: total democracy or total bureaucracy. Those who doubt total democracy must strive to prove it to themselves, giving it the chance to prove itself in motion; or else nothing remains for them but to buy their tomb on the installment plan, for “authority has been seen at work, and its works condemn it” (Jacques Déjacque).

“Revolution or death”: this slogan is no longer the lyrical expression of revolted consciousness; it is the last word of the scientific thought of our century [the twentieth]. It applies to the perils of the species as much as to the impossibility of adherence for individuals. In this society, where suicide advances as everyone knows, the specialists had to acknowledge, with a certain chagrin, that it fell to almost nothing in May 1968. That spring thus obtained, without precisely taking it by storm, a fine sky, because a few cars were burned and all the others lacked the fuel to pollute. When it rains, when there are clouds over Paris, never forget that it is the government’s fault. Alienated industrial production makes the rain. Revolution makes the fine weather.

Written in 1971 by Guy Debord to appear in issue no. 13 of the journal Internationale Situationniste, this article remained unpublished until recently, when it appeared, together with two other texts by the same author, in La Planète malade (Paris, Gallimard, 2004, pp. 77-94). The Portuguese translation of “O planeta doente” reproduced here first appeared at http://juralibertaire.over-blog.com/article-13908597.html. Translated by Emiliano Aquino (http://emilianoaquino.blogspot.com/).

Source: http://culturaebarbarie.org/sopro/arquivo/planetadoente.html

19 Climate Games that Could Change the Future (Climate Interactive Blog)

March 9, 2012 – 10:13 a.m.

The prevalence of games in our culture provides an opportunity to increase understanding of our global challenges. In 2008 the Pew Research Center estimated that over half of American adults played video games, and 80% of young Americans play video games. The vast majority of these games serve purely to entertain. There is, however, a growing number of games that aim to make a difference. These range from games that show players the complexity of creating adequate aid packages and delivering them to places in need, to games that require people to get out and work to improve their communities in order to do well in the game.

Looking at the climate change challenge, there are a number of games and interactive tools that broaden our understanding of the dynamics involved. Climate Interactive, for one, has led the development of the role-playing game World Climate, which simulates the UN climate change negotiations and is being adopted from middle school all the way up to executive management-level classrooms. Many are recognizing the power of games, and everyone from government agencies to NGOs to a group of teenagers is trying to launch a game to help address climate change. Below are some of the climate and sustainability-related games we’ve found. Let us know if you’ve found others.

Computer Games:

Climate Challenge

1. Climate Challenge: The player acts as a European leader who must make decisions for their nation to reduce CO2 emissions, but must also keep in mind public and international approval, energy, food, and financial needs.

2. Fate of the World: A PC game that challenges players to solve the crises facing the Earth from natural disasters and climate change to political uprisings and international relations.

3. CEO2: A game that puts players at the head of a company in one of four industries. The player must then make decisions to reduce the company’s CO2 emissions while maintaining (and increasing) its value.

4. VGas: Users build a house and select the best furnishing and lifestyle choices to have the lowest carbon footprint.

5. CO2FX: A multi-player educational game, designed for students in high school, which explores the relationship of climate change to economic, political, and science policy decisions.

6. “Operation: Climate Control” Game: A multi-player computer game where the player’s role is to decide on local environmental policy for Europe through the 21st century.

My2050

7. My2050: An interactive game to determine a scenario for the UK to lower its CO2 emissions 80% below 1990 levels by 2050. The user can select from adjustments in sectors from energy to transit.

8. Plan it Green: Gamers act as city planners, revitalizing their town into a greener one through energy retrofits, clean energy jobs, and green building.

9. Logicity: A game that challenges players to reduce their carbon footprints by making decisions in a virtual city.

10. Electrocity: A game designed for school children in New Zealand to plan a city that balances the needs of energy, development, and the environment.

11. Climate Culture: A virtual social networking game based on players’ actual carbon footprints and lifestyle choices. Players compete to earn badges and awards for their decisions.

12. World Without Oil: An alternate reality game played out on blogs and other social media platforms for 32 weeks in 2007 by thousands of players, simulating what might happen if there were an oil crisis and oil became inaccessible. Participants wrote blogs and made videos about their experience as if it were real.

13. SimCity 5 (coming 2013): With over 20 years of history and millions of players, the SimCity series has captured imaginations by putting players in control of developing cities. Recently announced, SimCity 5 will add, among other things, the need to face sustainability challenges like climate change, limited natural resources, and urban walkability.

Role-playing Games:

14. World Climate Exercise: A role-playing game for groups that simulates the UN climate change negotiations by dividing the group into regional and national negotiating teams, which must negotiate a treaty that limits warming to 2 degrees or less.

15. “Stabilization Wedge” Game: A game to show participants the different ways to cut carbon emissions, through the concept of wedges.

Board Games:

16. Climate Catan: Building on the widely popular board game Settlers of Catan, this version adds oil as a resource that spurs development, but using too much of it triggers a climate-related disaster that can ruin that development.

17. Climate-Poker: A card game in which the aim is to assemble the largest climate conference in order to address climate change.

18. Keep Cool (Gambling with the Climate): Players take on the roles of national political leaders trying to address climate change; they must make decisions about the type of growth they pursue while balancing the desires of lobby groups and the challenges of natural disasters.

19. Polar Eclipse Game: A game where players navigate different decisions in order to chart a path to a future that avoids the worst temperature rise.

Lessons from Gaming for Climate Wonks and Leaders — Video

Games can help us ensure that climate and energy analysis gets used to make a difference. Last week at the Climate Prediction Applications Science Workshop in Miami, Climate Interactive co-director Drew Jones gave a keynote presentation to an audience of climate analysts, many of whom are working to communicate the massive amount of climate data to the public.

In Drew’s speech below, he draws out the key things we are learning from games like Angry Birds, Farmville, and World of Warcraft, and from existing efforts to integrate climate change into games. Also included in this presentation, but left out of the video, was a condensed version of the World Climate Exercise, a game that Climate Interactive has developed to help people explore the complex dynamics encountered at the international climate change negotiations.

You can’t do the math without the words (University of Miami Press Release)

University of Miami anthropological linguist studies the anumeric language of an Amazonian tribe; the findings add new perspective to the way people acquire knowledge, perception and reasoning

Marie Guma Diaz
University of Miami


CORAL GABLES, FL (February 20, 2012)–Most people learn to count when they are children. Yet surprisingly, not all languages have words for numbers. A recent study published in the journal Cognitive Science shows that a few tongues lack number words and that, as a result, people in these cultures have a difficult time performing common quantitative tasks. The findings add new insight into the way people acquire knowledge, perception and reasoning.

The Piraha people of the Amazon are a group of about 700 semi-nomadic people living in small villages of about 10-15 adults along the Maici River, a tributary of the Amazon. According to University of Miami (UM) anthropological linguist Caleb Everett, the Piraha are surprisingly unable to represent exact amounts. Their language contains just three imprecise words for quantities: Hòi means “small size or amount,” hoì means “somewhat larger amount,” and baàgiso means “to cause to come together,” or “many.” Linguists refer to languages that do not have number-specific words as anumeric.

“The Piraha is a really fascinating group because they are really only one or two groups in the world that are totally anumeric,” says Everett, assistant professor in the Department of Anthropology at the UM College of Arts and Sciences. “This is maybe one of the most extreme cases of language actually restricting how people think.”

His study, “Quantity Recognition Among Speakers of an Anumeric Language,” demonstrates that number words are essential tools of thought required to solve even the simplest quantitative problems, such as one-to-one correspondence.

“I’m interested in how the language you speak affects the way that you think,” says Everett. “The question here is what tools like number words really allows us to do and how they change the way we think about the world.”

The work was motivated by contradictory results on the numerical performance of the Piraha. An earlier article reported the people incapable of performing simple numeric tasks with quantities greater than three, while another showed they were capable of accomplishing such tasks.

Everett repeated all the field experiments of the two previous studies. The results indicated that the Piraha could not consistently perform simple mathematical tasks. For example, one test involved 14 adults in one village who were presented with lines of spools of thread and asked to create a matching line of empty rubber balloons. The people were not able to do the one-to-one correspondence when the quantities were greater than two or three.

The study provides a simple explanation for the controversy. Unbeknownst to other researchers, the villagers who participated in one of the previous studies had received basic numerical training from Keren Madora, an American missionary who has worked with the indigenous people of the Amazon for 33 years and is a co-author of this study. “Her knowledge of what had happened in that village was crucial. I understood then why they got the results that they did,” Everett says.

Madora used the Piraha language to create number words. For instance, she used the words “all the sons of the hand” to indicate the number four. The introduction of number words into the village provides a reasonable explanation for the disagreement between the previous studies.

The findings support the idea that language is a key component in processes of the mind. “When they’ve been introduced to those words, their performance improved, so it’s clearly a linguistic effect, rather than a generally cultural factor,” Everett says. The study highlights the unique insight we gain about people and society by studying mother languages.

“Preservation of mother tongues is important because languages can tell us about aspects of human history, human cognition, and human culture that we would not have access to if the languages are gone,” he says. “From a scientific perspective I think it’s important, but it’s most important from the perspective of the people, because they lose a lot of their cultural heritage when their languages die.”

Will one researcher’s discovery deep in the Amazon destroy the foundation of modern linguistics? (The Chronicle of Higher Education)

The Chronicle Review

By Tom Bartlett

March 20, 2012

Angry Words


A Christian missionary sets out to convert a remote Amazonian tribe. He lives with them for years in primitive conditions, learns their extremely difficult language, risks his life battling malaria, giant anacondas, and sometimes the tribe itself. In a plot twist, instead of converting them he loses his faith, morphing from an evangelist trying to translate the Bible into an academic determined to understand the people he’s come to respect and love.

Along the way, the former missionary discovers that the language these people speak doesn’t follow one of the fundamental tenets of linguistics, a finding that would seem to turn the field on its head, undermine basic assumptions about how children learn to communicate, and dethrone the discipline’s long-reigning king, who also happens to be among the most well-known and influential intellectuals of the 20th century.

It feels like a movie, and it may in fact turn into one—there’s a script and producers on board. It’s already a documentary that will air in May on the Smithsonian Channel. A play is in the works in London. And the man who lived the story, Daniel Everett, has written two books about it. His 2008 memoir, Don’t Sleep, There Are Snakes, is filled with Joseph Conrad-esque drama. The new book, Language: The Cultural Tool, which is lighter on jungle anecdotes, instead takes square aim at Noam Chomsky, who has remained the pre-eminent figure in linguistics since the 1960s, thanks to the brilliance of his ideas and the force of his personality.

But before any Hollywood premiere, it’s worth asking whether Everett actually has it right. Answering that question is not straightforward, in part because it hinges on a bit of grammar that no one except linguists ever thinks about. It’s also made tricky by the fact that Everett is the foremost expert on this language, called Pirahã, and one of only a handful of outsiders who can speak it, making it tough for others to weigh in and leading his critics to wonder aloud if he has somehow rigged the results.

More than any of that, though, his claim is difficult to verify because linguistics is populated by a deeply factionalized group of scholars who can’t agree on what they’re arguing about and who tend to dismiss their opponents as morons or frauds or both. Such divisions exist, to varying degrees, in all disciplines, but linguists seem uncommonly hostile. The word “brutal” comes up again and again, as do “spiteful,” “ridiculous,” and “childish.”

With that in mind, why should anyone care about the answer? Because it might hold the key to understanding what separates us from the rest of the animals.

Imagine a linguist from Mars lands on Earth to survey the planet’s languages (presumably after obtaining the necessary interplanetary funding). The alien would reasonably conclude that the languages of the world are mostly similar with interesting but relatively minor variations.

As science-fiction premises go it’s rather dull, but it roughly illustrates Chomsky’s view of linguistics, known as Universal Grammar, which has dominated the field for a half-century. Chomsky is fond of this hypothetical and has used it repeatedly for decades, including in a 1971 discussion with Michel Foucault, during which he added that “this Martian would, if he were rational, conclude that the structure of the knowledge that is acquired in the case of language is basically internal to the human mind.”

In his new book, Everett, now dean of arts and sciences at Bentley University, writes about hearing Chomsky bring up the Martian in a lecture he gave in the early 1990s. Everett noticed a group of graduate students in the back row laughing and exchanging money. After the talk, Everett asked them what was so funny, and they told him they had taken bets on precisely when Chomsky would once again cite the opinion of the linguist from Mars.

The somewhat unkind implication is that the distinguished scholar had become so predictable that his audiences had to search for ways to amuse themselves. Another Chomsky nugget is the way he responds when asked to give a definition of Universal Grammar. He will sometimes say that Universal Grammar is whatever made it possible for his granddaughter to learn to talk but left the world’s supply of kittens and rocks speechless—a less-than-precise answer. Say “kittens and rocks” to a cluster of linguists and eyes are likely to roll.

Chomsky’s detractors have said that Universal Grammar is whatever he needs it to be at that moment. By keeping it mysterious, they contend, he is able to dodge criticism and avoid those who are gunning for him. It’s hard to murder a phantom.

Everett’s book is an attempt to deliver, if not a fatal blow, then at least a solid right cross to Universal Grammar. He believes that the structure of language doesn’t spring from the mind but is instead largely formed by culture, and he points to the Amazonian tribe he studied for 30 years as evidence. It’s not that Everett thinks our brains don’t play a role—they obviously do. But he argues that just because we are capable of language does not mean it is necessarily prewired. As he writes in his book: “The discovery that humans are better at building human houses than porpoises tells us nothing about whether the architecture of human houses is innate.”

The language Everett has focused on, Pirahã, is spoken by just a few hundred members of a hunter-gatherer tribe in a remote part of Brazil. Everett got to know the Pirahã in the late 1970s as an American missionary. With his wife and kids, he lived among them for months at a time, learning their language from scratch. He would point to objects and ask their names. He would transcribe words that sounded identical to his ears but had completely different meanings. His progress was maddeningly slow, and he had to deal with the many challenges of jungle living. His story of taking his family, by boat, to get treatment for severe malaria is an epic in itself.

His initial goal was to translate the Bible. He got his Ph.D. in linguistics along the way and, in 1984, spent a year studying at the Massachusetts Institute of Technology in an office near Chomsky’s. He was a true-blue Chomskyan then, so much so that his kids grew up thinking Chomsky was more saint than professor. “All they ever heard about was how great Chomsky was,” he says. He was a linguist with a dual focus: studying the Pirahã language and trying to save the Pirahã from hell. The second part, he found, was tough because the Pirahã are rooted in the present. They don’t discuss the future or the distant past. They don’t have a belief in gods or an afterlife. And they have a strong cultural resistance to the influence of outsiders, dubbing all non-Pirahã “crooked heads.” They responded to Everett’s evangelism with indifference or ridicule.

As he puts it now, the Pirahã weren’t lost, and therefore they had no interest in being saved. They are a happy people. Living in the present has been an excellent strategy, and their lack of faith in the divine has not hindered them. Everett came to convert them, but over many years found that his own belief in God had melted away.

So did his belief in Chomsky, albeit for different reasons. The Pirahã language is remarkable in many respects. Entire conversations can be whistled, making it easier to communicate in the jungle while hunting. Also, the Pirahã don’t use numbers. They have words for amounts, like a lot or a little, but nothing for five or one hundred. Most significantly, for Everett’s argument, he says their language lacks what linguists call “recursion”—that is, the Pirahã don’t embed phrases in other phrases. They instead speak only in short, simple sentences.

In a recursive language, additional phrases and clauses can be inserted in a sentence, complicating the meaning, in theory indefinitely. For most of us, the lack of recursion in a little-known Brazilian language may not seem terribly interesting. But when Everett published a paper with that finding in 2005, the news created a stir. There were magazine articles and TV appearances. Fellow linguists weighed in, if only in some cases to scoff. Everett had put himself and the Pirahã on the map.
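Recursion in this sense, a phrase of some type nested inside another phrase of the same type, can be sketched in a few lines of code. This is purely an illustrative toy (the sentence frames are invented, not drawn from Pirahã or any real grammar), but it shows why recursion makes a language open-ended: each added layer of embedding yields a new, longer grammatical sentence.

```python
def embed(clause, depth):
    """Wrap `clause` in `depth` layers of subordinate clauses, the way a
    recursive grammar lets one sentence contain another.
    The frames below are invented for illustration only.
    """
    frames = ["John said that", "Mary thinks that", "Bill claims that"]
    # Apply the innermost frame last so the output reads left to right.
    for i in range(depth - 1, -1, -1):
        clause = frames[i % len(frames)] + " " + clause
    return clause

print(embed("it rained", 0))  # it rained
print(embed("it rained", 2))  # John said that Mary thinks that it rained
```

On Everett’s account, a non-recursive language would allow only the depth-zero sentences; everything else would have to be expressed as a sequence of short, simple sentences.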

His paper might have received a shrug if Chomsky had not recently co-written a paper, published in 2002, that said (or seemed to say) that recursion was the single most important feature of human language. “In particular, animal communication systems lack the rich expressive and open-ended power of human language (based on humans’ capacity for recursion),” the authors wrote. Elsewhere in the paper, the authors wrote that the faculty of human language “at minimum” contains recursion. They also deemed it the “only uniquely human component of the faculty of language.”

In other words, Chomsky had finally issued what seemed like a concrete, definitive statement about what made human language unique, exposing a possible vulnerability. Before Everett’s paper was published, there had already been back and forth between Chomsky and the authors of a response to the 2002 paper, Ray Jackendoff and Steven Pinker. In the wake of that public disagreement, Everett’s paper had extra punch.

It’s been said that if you want to make a name for yourself in modern linguistics, you have to either align yourself with Chomsky or seek to destroy him. Either you are desirous of his approval or his downfall. With his 2005 paper, Everett opted for the latter course.

Because the pace of academic debate is just this side of glacial, it wasn’t until June 2009 that the next major chapter in the saga was written. Three scholars who are generally allies of Chomsky published a lengthy paper in the journal Language dissecting Everett’s claims one by one. What he considered unique features of Pirahã weren’t unique. What he considered “gaps” in the language weren’t gaps. They argued this in part by comparing Everett’s recent paper to work he published in the 1980s, calling it, slightly snidely, his earlier “rich material.” Everett wasn’t arguing with Chomsky, they claimed; he was arguing with himself. Young Everett thought Pirahã had recursion. Old Everett did not.

Everett’s defense was, in so many words, to agree. Yes, his earlier work was contradictory, but that’s because he was still under Chomsky’s sway when he wrote it. It’s natural, he argued, even when doing basic field work, cataloging the words of a language and the stories of a people, to be biased by your theoretical assumptions. Everett was a Chomskyan through and through, so much so that he had written the MSN Encarta encyclopedia entry on him. But now, after more years with the Pirahã, the scales had fallen from his eyes, and he saw the language on its own terms rather than those he was trying to impose on it.

David Pesetsky, a linguistics professor at MIT and one of the authors of the critical Language paper, thinks Everett was trying to gin up a “Star Wars-level battle between himself and the forces of Universal Grammar,” presumably with Everett as Luke Skywalker and Chomsky as Darth Vader.

Contradicting Everett meant getting into the weeds of the Pirahã language, a language that Everett knew intimately and his critics did not. “Most people took the attitude that this wasn’t worth taking on,” Pesetsky says. “There’s a junior-high-school corridor, two kids are having a fight, and everyone else stands back.” Everett wrote a lengthy reply that Pesetsky and his co-authors found unsatisfying and evasive. “The response could have been ‘Yeah, we need to do this more carefully,'” says Pesetsky. “But he’s had seven years to do it more carefully and he hasn’t.”

Critics haven’t just accused Everett of inaccurate analysis. He’s the sole authority on a language that he says changes everything. If he wanted to, they suggest, he could lie about his findings without getting caught. Some were willing to declare him essentially a fraud. That’s what one of the authors of the 2009 paper, Andrew Nevins, now at University College London, seems to believe. When I requested an interview with Nevins, his reply read, “I may be being glib, but it seems you’ve already analyzed this kind of case!” Below his message was a link to an article I had written about a Dutch social psychologist who had admitted to fabricating results, including creating data from studies that were never conducted. In another e-mail, after declining to expand on his apparent accusation, Nevins wrote that the “world does not need another article about Dan Everett.”

In 2007, Everett heard reports of a letter accusing him of racism, signed by Cilene Rodrigues, who is Brazilian and who co-wrote the paper with Pesetsky and Nevins. According to Everett, he got a call from a source informing him that Rodrigues, an honorary research fellow at University College London, had sent a letter to the organization in Brazil that grants permission for researchers to visit indigenous groups like the Pirahã. He then discovered that the organization, called FUNAI, the National Indian Foundation, would no longer grant him permission to visit the Pirahã, whom he had known for most of his adult life and who remain the focus of his research.

He still hasn’t been able to return. Rodrigues would not respond directly to questions about whether she had signed such a letter, nor would Nevins. Rodrigues forwarded an e-mail from another linguist who has worked in Brazil, which speculates that Everett was denied access to the Pirahã because he did not obtain the proper permits and flouted the law, accusations Everett calls “completely false” and “amazingly nasty lies.”

Whatever the reason for his being blocked, the question remains: Is Everett’s work racist? The accusation runs as follows: because Everett says that the Pirahã do not have recursion, and all human languages supposedly have recursion, he is asserting that the Pirahã are less than human. Part of this claim is based on an online summary, written by a former graduate student of Everett’s, that quotes traders in Brazil saying the Pirahã “talk like chickens and act like monkeys,” something Everett himself never said and condemns. The issue is sensitive because the Pirahã, who eschew the trappings of modern civilization and live the way their forebears lived for thousands of years, are regularly denigrated by their neighbors in the region as less than human. The fact that Everett is American, not Brazilian, lends the charge added symbolic weight.

When you read Everett’s two books about the Pirahã, it is nearly impossible to think that he believes they are inferior. In fact, he goes to great lengths not to condescend and offers defenses of practices that outsiders would probably find repugnant. In one instance he describes, a Pirahã woman died, leaving behind a baby that the rest of the tribe thought was too sick to live. Everett cared for the infant. One day, while he was away, members of the tribe killed the baby, telling him that it was in pain and wanted to die. He cried, but didn’t condemn, instead defending in the book their seemingly cruel logic.

Likewise, the Pirahã’s aversion to learning agriculture, or preserving meat, or the fact that they show no interest in producing artwork, is portrayed by Everett not as a shortcoming but as evidence of the Pirahã’s insistence on living in the present. Their nonhierarchical social system seems to Everett fair and sensible. He is critical of his own earlier attempts to convert the Pirahã to Christianity as a sort of “colonialism of the mind.” If anything, Everett is more open to a charge of romanticizing the Pirahã culture.

Other critics are more measured but equally suspicious. Mark Baker, a linguist at Rutgers University at New Brunswick, who considers himself part of Chomsky’s camp, mentions Everett’s “vested motive” in saying that the Pirahã don’t have recursion. “We always have to be a little careful when we have one person who has researched a language that isn’t accessible to other people,” Baker says. He is dubious of Everett’s claims. “I can’t believe it’s true as described,” he says.

Chomsky hasn’t exactly risen above the fray. He told a Brazilian newspaper that Everett was a “charlatan.” In the documentary about Everett, Chomsky raises the possibility, without saying he believes it, that Everett may have faked his results. Behind the scenes, he has been active as well. According to Pesetsky, Chomsky asked him to send an e-mail to David Papineau, a professor of philosophy at King’s College London, who had written a positive, or at least not negative, review of Don’t Sleep, There Are Snakes. The e-mail complained that Papineau had misunderstood recursion and was incorrectly siding with Everett. Papineau thought he had done nothing of the sort. “For people outside of linguistics, it’s rather surprising to find this kind of protection of orthodoxy,” Papineau says.

And what if the Pirahã don’t have recursion? Rather than ferreting out flaws in Everett’s work as Pesetsky did, Chomsky’s preferred response is to say that it doesn’t matter. In a lecture he gave last October at University College London, he referred to Everett’s work without mentioning his name, talking about those who believed that “exceptions to the generalizations are considered lethal.” He went on to say that a “rational reaction” to finding such exceptions “isn’t to say ‘Let’s throw out the field.'” Universal Grammar permits such exceptions. There is no problem. As Pesetsky puts it: “There’s nothing that says languages without subordinate clauses can’t exist.”

Except the 2002 paper on which Chomsky’s name appears. Pesetsky and others have backed away from that paper, arguing not that it was incorrect, but that it was “written in an unfortunate way” and that the authors were “trying to make certain things comprehensible about linguistics to a larger public, but they didn’t make it clear that they were simplifying.” Some say that Chomsky signed his name to the paper but that it was actually written by Marc Hauser, the former professor of psychology at Harvard University, who resigned after Harvard officials found him guilty of eight counts of research misconduct. (For the record, no one has suggested the alleged misconduct affected his work with Chomsky.)

Chomsky declined to grant me an interview. Those close to him say he sees Everett as seizing on a few stray, perhaps underexplained, lines from that 2002 paper and distorting them for his own purposes. And the truth, Chomsky has made clear, should be apparent to any rational person.

Ted Gibson has heard that one before. When Gibson, a professor of cognitive sciences at MIT, gave a paper on the topic at a January meeting of the Linguistic Society of America, held in Portland, Ore., Pesetsky stood up at the end to ask a question. “His first comment was that Chomsky never said that. I went back and found the slide,” he says. “Whenever I talk about this question in front of these people I have to put up the literal quote from Chomsky. Then I have to put it up again.”

Geoffrey Pullum, a professor of linguistics at the University of Edinburgh, is also vexed at how Chomsky and company have, in his view, played rhetorical sleight-of-hand to make their case. “They have retreated to such an extreme degree that it says really nothing,” he says. “If it has a sentence longer than three words then they’re claiming they were right. If that’s what they claim, then they weren’t claiming anything.” Pullum calls this move “grossly dishonest and deeply silly.”

Everett has been arguing about this for seven years. He says Pirahã undermines Universal Grammar. The other side says it doesn’t. In an effort to settle the dispute, Everett asked Gibson, who holds a joint appointment in linguistics at MIT, to look at the data and reach his own conclusions. He didn’t provide Gibson with data he had collected himself because he knows his critics suspect those data have been cooked. Instead he provided him with sentences and stories collected by his missionary predecessor. That way, no one could object that it was biased.

In the documentary about Everett, handing over the data to Gibson is given tremendous narrative importance. Everett is the bearded, safari-hatted field researcher boating down a river in the middle of nowhere, talking and eating with the natives. Meanwhile, Gibson is the nerd hunched over his keyboard back in Cambridge, crunching the data, examining it with his research assistants, to determine whether Everett really has discovered something. If you watch the documentary, you get the sense that what Gibson has found confirms Everett’s theory. And that’s the story you get from Everett, too. In our first interview, he encouraged me to call Gibson. “The evidence supports what I’m saying,” he told me, noting that he and Gibson had a few minor differences of interpretation.

But that’s not what Gibson thinks. Some of what he found does support Everett. For example, he’s confirmed that Pirahã lacks possessive recursion, phrases like “my brother’s mother’s house.” Also, there appear to be no conjunctions like “and” or “or.” In other instances, though, he’s found evidence that seems to undercut Everett’s claims—specifically, when it comes to noun phrases in sentences like “His mother, Itaha, spoke.”

That is a simple sentence, but inserting the mother’s name is a hallmark of recursion. Gibson’s paper, on which Everett is a co-author, states, “We have provided suggestive evidence that Pirahã may have sentences with recursive structures.”

If that turns out to be true, it would undermine the primary thesis of both of Everett’s books about the Pirahã. Rather than the hero who spent years in the Amazon emerging with evidence that demolished the field’s predominant theory, Everett would be the descriptive linguist who came back with a couple of books full of riveting anecdotes and cataloged a language that is remarkable, but hardly changes the game.

Everett only realized during the reporting of this article that Gibson disagreed with him so strongly. Until then, he had been saying that the results generally supported his theory. “I don’t know why he says that,” Gibson says. “Because it doesn’t. He wrote that our work corroborates it. A better word would be falsified. Suggestive evidence is against it right now and not for it.” Though, he points out, the verdict isn’t final. “It looks like it is recursive,” he says. “I wouldn’t bet my life on it.”

Another researcher, Ray Jackendoff, a linguist at Tufts University, was also provided the data and sees it slightly differently. “I think we decided there is some embedding but it is of limited depth,” he says. “It’s not recursive in the sense that you can have infinitely deep embedding.” Remember that in Chomsky’s paper, it was the idea that “open-ended” recursion was possible that separated human and animal communication. Whether the kind of limited recursion Gibson and Jackendoff have noted qualifies depends, like everything else in this debate, on the interpretation.

Everett thinks what Gibson has found is not recursion, but rather false starts, and he believes further research will back him up. “These are very short, extremely limited examples and they almost always are nouns clarifying other nouns,” he says. “You almost never see anything but that in these cases.” And he points out that there still doesn’t seem to be any evidence of infinite recursion. Says Everett: “There simply is no way, even if what I claim to be false starts are recursive instead, to say, ‘My mother, Susie, you know who I mean, you like her, is coming tonight.’”

The field has a history of theoretical disagreements that turn ugly. In the book The Linguistics Wars, published in 1993, Randy Allen Harris tells the story of another skirmish between Chomsky and a group of insurgent linguists called generative semanticists. Chomsky dismissed his opponents’ arguments as absurd. His opponents accused him of altering his theories when confronted and of general arrogance. “Chomsky has the impressive rhetorical talent of offering ideas which are at once tentative and fully endorsed, of appearing to take the if out of his arguments while nevertheless keeping it safely around,” writes Harris.

That rhetorical talent was on display in his lecture last October, in which he didn’t just disagree with other linguists, but treated their arguments as ridiculous and a mortal danger to the field. The style seems to be reflected in his political activism. Watch his 1969 debate on Firing Line against William F. Buckley Jr., available on YouTube, and witness Chomsky tie his famous interlocutor in knots. It is a thorough, measured evisceration. Chomsky is willing to deploy those formidable skills in linguistic arguments as well.

Everett is far from the only current Chomsky challenger. Recently there’s been a rise in so-called corpus linguistics, a data-driven method of evaluating a language, using computer software to analyze sentences and phrases. The method produces detailed information and, for scholars like Gibson, finally provides scientific rigor for a field he believes has been mired in never-ending theoretical disputes. That, along with the brain-scanning technology that linguists are increasingly making use of, may be able to help resolve questions about how much of the structure of language is innate and how much is shaped by culture.
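To make the corpus-linguistics idea concrete, here is a minimal sketch, under my own assumptions: the bracketed parse format and the function name are illustrative, not tools any of these researchers describe. It shows the kind of mechanical measurement such software performs, such as tracking how deeply phrases nest inside one another across a corpus:

```python
def max_embedding_depth(parse: str) -> int:
    """Return the maximum bracket-nesting depth of a parse string,
    e.g. "[S [NP the dog] [VP chased [NP the cat]]]" nests to depth 3."""
    depth = max_depth = 0
    for ch in parse:
        if ch == "[":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "]":
            depth -= 1
    return max_depth

# A corpus whose sentences never exceed a small fixed depth would show
# the "limited embedding" pattern Gibson and Jackendoff describe, rather
# than open-ended recursion.
print(max_embedding_depth("[S [NP the dog] [VP chased [NP the cat]]]"))
```

Running such a count over thousands of parsed sentences is what lets corpus linguists replace a theoretical dispute with a distribution of observed depths.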

But Chomsky has little use for that method. In his lecture, he deemed corpus linguistics nonscientific, comparing it to doing physics by describing the swirl of leaves on a windy day rather than performing experiments. This was “just statistical modeling,” he said, evidence of a “kind of pathology in the cognitive sciences.” Referring to brain scans, Chomsky joked that the only way to get a grant was to propose an fMRI.

As for Universal Grammar, some are already writing its obituary. Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology, has stated flatly that “Universal Grammar is dead.” Two linguists, Nicholas Evans and Stephen Levinson, published a paper in 2009 titled “The Myth of Language Universals,” arguing that the “claims of Universal Grammar … are either empirically false, unfalsifiable, or misleading in that they refer to tendencies rather than strict universals.” Pullum has a similar take: “There is no Universal Grammar now, not if you take Chomsky seriously about the things he says.”

Gibson puts it even more harshly. Just as Chomsky doesn’t think corpus linguistics is science, Gibson doesn’t think Universal Grammar is worthwhile. “The question is, ‘What is it?’ How much is built-in and what does it do? There are no details,” he says. “It’s crazy to say it’s dead. It was never alive.”

Such proclamations have been made before and Chomsky, now 83, has a history of outmaneuvering and outlasting his adversaries. Whether Everett will be yet another in a long line of would-be debunkers who turn into footnotes remains to be seen. “I probably do, despite my best intentions, hope that I turn out to be right,” he says. “I know that it is not scientific. But I would be a hypocrite if I didn’t admit it.”

New report reveals how corporations undermine science with fake bloggers and bribes (io9)

BY ANNALEE NEWITZ

MAR 9, 2012 2:22 PM

You’ve probably heard about how the tobacco industry tried to suppress scientific evidence that smoking causes cancer by publishing shady research, bribing politicians, and pressuring researchers. But you may not have realized that tobacco’s dirty tricks are just the tip of the iceberg. In a disturbing new report published by the Union of Concerned Scientists about corporate corruption of the sciences, you’ll learn about how Monsanto hired a public relations team to invent fake people who harassed a scientific journal online, how Coca-Cola offers bribes to suppress evidence that soft drinks harm kids’ teeth, and more. Here are some of the most egregious recent examples of corruption from this must-read report.

The report is a meaty assessment of corporate corruption in science that stretches back to incidents with Big Tobacco in the 1960s, up through contemporary examples. Here are just a few of those.

One way that corporations prevent negative information about their products from getting out is by harassing scientists and the journals that publish them. Here’s how Monsanto did it:

Dr. Ignacio Chapela of the University of California–Berkeley and graduate student David Quist published an article in Nature showing that DNA from genetically modified corn was contaminating native Mexican corn. The research spurred immediate backlash. Nature received a number of letters to the editor, including several comments on the Internet from “Mary Murphy” and “Andura Smetacek” accusing the scientists of bias. The backlash prompted Nature to publish an editorial agreeing that the report should not have been published. However, investigators eventually discovered that the comments from Murphy and Smetacek originated with The Bivings Group, a public relations firm that specializes in online communications and had worked for Monsanto. Mary Murphy and Andura Smetacek were found to be fictional names.

Corporations also form front organizations to hide their efforts to undermine science. That’s what happened when producers of unhealthy food got together to cast doubt on the FDA’s recommended health guidelines:

The Center for Consumer Freedom is a nonprofit that targets dietary guidelines recommended by the FDA, other government agencies, medical associations, and consumer advocacy organizations. The center has run ads and owns a website that accuses government agencies of overregulation, and has published articles claiming that warnings about high salt intake and other dietary guidelines are based on inadequate science. The center was founded with a $600,000 grant from Philip Morris, but has also received funding from Cargill, National Steak and Poultry, Monsanto, Coca-Cola, and Sutter Home Winery.

Sometimes corporations just go for it and buy off legitimate organizations, as Coca-Cola appears to have done when it paid dentists to stop saying kids shouldn’t drink Coke:

In 2003, the American Academy of Pediatric Dentistry accepted a $1 million donation from Coca-Cola. That year, the group claimed that “scientific evidence is certainly not clear on the exact role that soft drinks play in terms of children’s oral disease.” The statement directly contradicted the group’s previous stance that “consumption of sugars in any beverage can be a significant factor…that contributes to the initiation and progression of dental caries.”

Corporations can also unduly influence federal agencies, as ReGen did when they wanted their device approved for trials by the FDA, despite serious medical problems:

ReGen Biologics attempted to gain FDA approval for clinical trials of Menaflex, a device it developed to replace knee cartilage. After an FDA panel rejected the device, the company enlisted four members of Congress from its home state of New Jersey to influence the evaluation process. In December 2007, Senator Frank Lautenberg, Senator Robert Menendez, and Representative Steve Rothman wrote to FDA Commissioner Andrew von Eschenbach asking him to personally look into Menaflex. Soon thereafter, the commissioner met with ReGen executives and heeded the company’s advice to have Dr. Daniel Shultz, head of the FDA’s medical devices division, oversee a new review. The FDA fast-tracked and approved the product despite serious concerns from the scientific community.

If bribery doesn’t work, you can always censor negative results, the way pharmaceutical company Boots did:

Boots commissioned Dr. Betty Dong, a scientist at the University of California–San Francisco, to test the effects of Synthroid, a replacement for thyroid hormone. Boots hoped to reveal that despite its high price, Synthroid was more effective than similar drugs. The company closely monitored the research, and when Dong found that the drug was no more effective than its competitors, instructed her not to publish the results. When she refused to comply, Boots threatened to sue. The company relented only after several years, during which consumers continued to pay for the costly product.

You can also try “refuting” scientific results with bad evidence, the way the formaldehyde industry did:

To counter a study that found that formaldehyde caused cancer in rats, a formaldehyde company commissioned its own study. That study, which found no association between the chemical and cancer, exposed only one-third the number of rats to formaldehyde for half as long as the original study. A formaldehyde association quickly publicized the results and argued before the Consumer Product Safety Commission (CPSC) that they indicated “no chronic health effects from exposure to the level of formaldehyde normally encountered in the home.”

And then, if you’re Pfizer, you can just generate as much favorable research as you like to bolster sales of a drug, despite your discovery that the drug increases risk of suicide:

From 1998 to 2007, Pfizer discreetly facilitated the publication of 15 case studies, six case reports, and nine letters to the editor to boost off-label use of Neurontin, a drug prescribed to treat seizures in people who have epilepsy and nerve pain. The number of patients taking the drug rose from 430,000 to 6 million, making it one of Pfizer’s most profitable products. An investigation found that Pfizer had failed to publish negative results, selectively reported outcomes, and excluded specific patients from analysis. [Most importantly] Pfizer failed to note that the drug increased the risk of suicide.

Read the full report here, which includes sources for these stories, as well as an extensive section devoted to reforming scientific practices. There are ways we can avoid this kind of corruption, and they involve everything from federal reforms to corporate transparency.

[via Union of Concerned Scientists]

Science, Journalism, and the Hype Cycle: My piece in tomorrow’s Wall Street Journal (Discover Magazine)

I think one of the biggest struggles a science writer faces is how to accurately describe the promise of new research. If we start promising that a preliminary experiment is going to lead to a cure for cancer, we are treating our readers cruelly, especially the readers who have cancer. On the other hand, scoffing at everything is not a sensible alternative, because sometimes preliminary experiments really do lead to great advances. In the 1950s, scientists discovered that bacteria can slice up virus DNA to avoid getting sick. That discovery led, some 30 years later, to biotechnology, an industry that enabled, among other things, bacteria to produce human insulin.

This challenge was very much on my mind as I recently read two books, which I review in tomorrow’s Wall Street Journal. One is on gene therapy, a treatment that inspired wild expectations in the 1990s, then crashed, and is now coming back. The other is on epigenetics, which seems to me to be in the early stages of the hype cycle. You can read the essay in full here. [see post below]

March 9th, 2012 5:33 PM by Carl Zimmer

Hope, Hype and Genetic Breakthroughs (Wall Street Journal)

By CARL ZIMMER

I talk to scientists for a living, and one of my most memorable conversations took place a couple of years ago with an engineer who put electrodes in bird brains. The electrodes were implanted into the song-generating region of the brain, and he could control them with a wireless remote. When he pressed a button, a bird singing in a cage across the lab would fall silent. Press again, and it would resume its song.

I could instantly see a future in which this technology brought happiness to millions of people. Imagine a girl blind from birth. You could implant a future version of these wireless electrodes in the back of her brain and then feed it images from a video camera.

As a journalist, I tried to get the engineer to explore what seemed to me to be the inevitable benefits of his research. To his great credit, he wouldn’t. He wasn’t even sure his design would ever see the inside of a human skull. There were just too many ways for it to go wrong. He wanted to be very sure that I understood that and that I wouldn’t claim otherwise. “False hope,” he warned me, “is a sinful thing.”


Stephen Voss. Gene therapy allowed this once-blind dog to see again.

Over the past two centuries, medical research has yielded some awesome treatments: smallpox wiped out with vaccines, deadly bacteria thwarted by antibiotics, face transplants. But when we look back across history, we forget the many years of failure and struggle behind each of these advances.

This foreshortened view distorts our expectations for research taking place today. We want to believe that every successful experiment means that another grand victory is weeks away. Big stories appear in the press about the next big thing. And then, as the years pass, the next big thing often fails to materialize. We are left with false hope, and the next big thing gets a reputation as the next big lie.

In 1995, a business analyst named Jackie Fenn captured this intellectual whiplash in a simple graph. Again and again, she had seen new advances burst on the scene and generate ridiculous excitement. Eventually they would reach what she dubbed the Peak of Inflated Expectations. Unable to satisfy their promise fast enough, many of them plunged into the Trough of Disillusionment. Their fall didn’t necessarily mean that these technologies were failures. The successful ones slowly emerged again and climbed the Slope of Enlightenment.

When Ms. Fenn drew the Hype Cycle, she had in mind dot-com-bubble technologies like cellphones and broadband. Yet it’s a good model for medical advances too. I could point to many examples of the medical hype cycle, but it’s hard to think of a better one than the subject of Ricki Lewis’s well-researched new book, “The Forever Fix”: gene therapy.

The concept of gene therapy is beguilingly simple. Many devastating disorders are the result of mutant genes. The disease phenylketonuria, for example, is caused by a mutation to a gene involved in breaking down a molecule called phenylalanine. The phenylalanine builds up in the bloodstream, causing brain damage. One solution is to eat a low-phenylalanine diet for your entire life. A much more appealing alternative would be to somehow fix the broken gene, restoring a person’s metabolism to normal.

In “The Forever Fix,” Ms. Lewis chronicles gene therapy’s climb toward the Peak of Inflated Expectations over the course of the 1990s. A geneticist and the author of a widely used textbook, she demonstrates a mastery of the history, even if her narrative sometimes meanders and becomes burdened by clichés. She explains how scientists learned how to identify the particular genes behind genetic disorders. They figured out how to load genes into viruses and then to use those viruses to insert the genes into human cells.


Stephen Voss. Alisha Bacoccini is tested on her ability to read letters, at UPenn Hospital, in Philadelphia, PA on Monday, June 23, 2008. Bacoccini is undergoing an experimental gene therapy trial to improve her sight.

By 1999, scientists had enjoyed some promising successes treating people—removing white blood cells from leukemia patients, for example, inserting working genes, and then returning the cells to their bodies. Gene therapy seemed as if it was on the verge of becoming standard medical practice. “Within the next decade, there will be an exponential increase in the use of gene therapy,” Helen M. Blau, the then-director of the gene-therapy technology program at Stanford University, told Business Week.

Within a few weeks of Ms. Blau’s promise, however, gene therapy started falling straight into the Trough. An 18-year-old man named Jesse Gelsinger who suffered from a metabolic disorder had enrolled in a gene-therapy trial. University of Pennsylvania scientists loaded a virus with a working version of an enzyme he needed and injected it into his body. The virus triggered an overwhelming reaction from his immune system and within four days Gelsinger was dead.

Gene therapy nearly came to a halt after his death. An investigation revealed errors and oversights in the design of Gelsinger’s trial. The breathless articles disappeared. Fortunately, research did not stop altogether. Scientists developed new ways of delivering genes without triggering fatal side effects. And they directed their efforts at one part of the body in particular: the eye. The eye is so delicate that inflammation could destroy it. As a result, it has evolved physical barriers that keep the body’s regular immune cells out, as well as a separate battalion of immune cells that are more cautious in their handling of infection.

It occurred to a number of gene-therapy researchers that they could try to treat genetic vision disorders with a very low risk of triggering horrendous side effects of the sort that had claimed Gelsinger’s life. If they injected genes into the eye, they would be unlikely to produce a devastating immune reaction, and any harmful effects would not be able to spread to the rest of the body.

Their hunch paid off. In 2009 scientists reported their first success with gene therapy for a congenital disorder. They treated a rare form of blindness known as Leber’s congenital amaurosis. Children who were once blind can now see.

As “The Forever Fix” shows, gene therapy is now starting its climb up the Slope of Enlightenment. Hundreds of clinical trials are under way to see if gene therapy can treat other diseases, both in and beyond the eye. It still costs a million dollars a patient, but that cost is likely to fall. It’s not yet clear how many other diseases gene therapy will help or how much it will help them, but it is clearly not a false hope.

Gene therapy produced so much excitement because it appealed to the popular idea that genes are software for our bodies. The metaphor only goes so far, though. DNA does not float in isolation. It is intricately wound around spool-like proteins called histones. It is studded with caps made of carbon and hydrogen atoms, known as methyl groups. This coiling and capping of DNA allows individual genes to be turned on and off during our lifetimes.

The study of this extra layer of control on our genes is known as epigenetics. In “The Epigenetics Revolution,” molecular biologist Nessa Carey offers an enlightening introduction to what scientists have learned in the past decade about those caps and coils. While she delves into a fair amount of biological detail, she writes clearly and compellingly. As Ms. Carey explains, we depend for our very existence as functioning humans on epigenetics. We begin life as blobs of undifferentiated cells, but epigenetic changes allow some cells to become neurons, others muscle cells and so on.

Epigenetics also plays an important role in many diseases. In cancer cells, genes that are normally only active in embryos can reawaken after decades of slumber. A number of brain disorders, such as autism and schizophrenia, appear to involve the faulty epigenetic programming of genes in neurons.

Scientists got their first inklings about epigenetics decades ago, but in the past few years the field has become hot. In 2008 the National Institutes of Health pledged $190 million to map the epigenetic “marks” on the human genome. New biotech start-ups are trying to carry epigenetic discoveries into the doctor’s office. The FDA has approved cancer drugs that alter the pattern of caps on tumor-cell DNA. Some studies on mice hint that it may be possible to treat depression by taking a pill that adjusts the coils of DNA in neurons.

People seem to be getting giddy about the power of epigenetics in the same way they got giddy about gene therapy in the 1990s. No longer is our destiny written in our DNA: It can be completely overwritten with epigenetics. The excitement is moving far ahead of what the science warrants—or can ever deliver. Last June, an article on the Huffington Post eagerly seized on epigenetics, woefully mangling two biological facts: one, that experiences can alter the epigenetic patterns in the brain; and two, that sometimes epigenetic patterns can be passed down from parents to offspring. The article made a ridiculous leap to claim that we can use meditation to change our own brains and the brains of our children—and thereby alter the course of evolution: “We can jump-start evolution and leverage it on our own terms. We can literally rewire our brains toward greater compassion and cooperation.” You couldn’t ask for a better sign that epigenetics is climbing the Peak of Inflated Expectations at top speed.

The title “The Epigenetics Revolution” unfortunately adds to this unmoored excitement, but in Ms. Carey’s defense, the book itself is careful and measured. Still, epigenetics will probably be plunging soon into the Trough of Disillusionment. It will take years to see whether we can really improve our health with epigenetics or whether this hope will prove to be a false one.

The Forever Fix

By Ricki Lewis. St. Martin’s, 323 pages, $25.99

The Epigenetics Revolution

By Nessa Carey. Columbia, 339 pages, $26.95

—Mr. Zimmer’s books include “A Planet of Viruses” and “Evolution: Making Sense of Life,” co-authored with Doug Emlen, to be published in July.

Nature journal criticizes Canadian ‘muzzling’ (CBC News)

Time for Canadian government to set its scientists free, magazine says

The Canadian Press

Posted: Mar 2, 2012 7:08 AM ET

Last Updated: Mar 2, 2012 12:54 PM ET

One of the world's leading scientific journals is criticizing the Harper government for 'muzzling' federal scientists.

One of the world’s leading scientific journals is accusing the Harper government of limiting its scientists from speaking publicly about their research.

The journal, Nature, says in an editorial in this week’s issue that it’s time for the Canadian government to set its scientists free.

Nature says Canada is headed in the wrong direction in not letting its scientists speak out freely. It notes that Canada and the United States have undergone role reversals in the past six years.

It says the U.S. has adopted more open practices since the end of George W. Bush’s presidency, while Canada has gone in the opposite direction.

Nature says policy directives on government communications released through access to information requests reveal the Harper government has little understanding of the importance of the free flow of scientific knowledge.

Two weeks ago, the Canadian Science Writers’ Association, the World Federation of Science Journalists and several other groups sent an open letter to Harper, calling on him to unmuzzle federal scientists.

The letter cited a couple of high-profile examples, including one last fall when Environment Canada barred Dr. David Tarasick from speaking to journalists about his ozone layer research when it was published in Nature.

How did the KKK lose nearly one-third of its chapters in one year? (Slate)

Ku Klux Kontraction

Posted Thursday, March 8, 2012, at 4:55 PM ET


Members of the Fraternal White Knights of the Ku Klux Klan participate in the 11th Annual Nathan Bedford Forrest Birthday march July 11, 2009, in Pulaski, Tenn. Spencer Platt/Getty Images

The number of hate groups in the United States is on the rise, but the Ku Klux Klan is losing chapters, according to data released on Wednesday by the Southern Poverty Law Center. The number of KKK chapters dropped from 221 to 152 in just one year. Why is the Klan shrinking?
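The headline’s “nearly one-third” follows directly from those two counts; as a quick arithmetic sketch (the variable names are mine):

```python
# SPLC chapter counts cited above
chapters_2010, chapters_2011 = 221, 152

drop = chapters_2010 - chapters_2011      # 69 chapters lost in one year
fraction = drop / chapters_2010           # about 0.312, i.e. nearly one-third

print(f"{drop} chapters, {fraction:.1%}")  # 69 chapters, 31.2%
```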

Consolidation and defections. The Klan is not a stable organization. There’s no real national leadership, and chapters are constantly appearing, disappearing, splitting, and merging. In 2010, to take one example, the True Invisible Empire Knights of Pulaski, Tenn., merged with the Traditional American Knights from Potosi, Mo. to form the True Invisible Empire Traditionalist American Knights of the Ku Klux Klan. (Note: this link, like others in this article, leads to an extremist website.) Such mergers decrease the number of chapters without necessarily changing membership totals. Not all the Klan’s losses are just on paper, though. Jeremy Parker, who led the Ohio-based Brotherhood of Klans, left the KKK for the Aryan Nations in 2010 and likely took a significant number of members with him. The Brotherhood of Klans was the second-largest Klan association in the country, with 38 chapters.

Membership totals are hard to track, because the Klan doesn’t willingly release member lists. Over the long term, the KKK is clearly contracting, since its rolls have shrunk from millions in the 1920s to between 3,000 and 5,000 today. But no one knows how membership has changed in the last few years.

Klan-watchers, however, suspect that the nation’s oldest domestic terrorist organization is indeed struggling to keep pace with other racist hate groups. Young racists tend to think of the Klan as their grandfathers’ hate group, and of its members as rural, uneducated, and technologically unsophisticated. The Klan doesn’t seem to have used the web and social media as well as its competitors. The group’s failure to effectively deploy technology is a bit of an irony, since one of those newfangled motion pictures, The Birth of a Nation, launched the KKK’s second era in 1915.

The Klan’s history of violence is another challenge to recruitment. The organization will always be associated with the lynching of innocent African-Americans in the 20th century, which puts off more moderate racists.

The KKK is also suffering from a proliferation of competitors. People who wanted to join a white supremacist movement back in the 1920s didn’t have a lot of choices. Today, there are countless options, enabling an extremist to find a group that matches his personal brand of intolerance. The more extreme groups in the burgeoning patriot movement cater to anti-Muslim, homophobic, and xenophobic sentiment, with less animosity toward African-Americans and Jews. Aryan Nations offers a heavy focus on Christian identity. Some groups preach more violence, while others offer a veneer of intellectualism. American Renaissance, for example, caters to “suit-and-tie” racists, offering pseudo-scientific papers on white supremacy. The group even holds conferences at a hotel near Dulles airport in Virginia.

Many young racist activists aren’t bothering to join groups at all anymore, further hampering the Klan’s recruitment efforts. Former KKK Grand Wizard Don Black in 1995 launched the website Stormfront, which enables individuals in the white supremacist movement to share ideas and read news stories reported from a racist perspective. The community-building site, and others like it, lessens the need for racists to socialize at Klan barbecues or introduce their children to Klanta Klaus at the KKK Christmas rally.

Number of U.S. Hate Groups Is Rising, Report Says (N.Y. Times)

By KIM SEVERSON – Published: March 7, 2012

ATLANTA — Fed by antagonism toward President Obama, resentment toward changing racial demographics and the economic rift between rich and poor, the number of so-called hate groups and antigovernment organizations in the nation has continued to grow, according to a report released Wednesday by the Southern Poverty Law Center.

The center, which has kept track of such groups for 30 years, recorded 1,018 hate groups operating last year.

The number of groups whose ideology is organized against specific racial, religious, sexual or other characteristics has risen steadily since 2000, when 602 were identified, the center said. Antigay groups, for example, have risen to 27 from 17 in 2010.

The report also described a “stunning” rise in the number of groups it identifies as part of the so-called patriot and militia movements, whose ideologies include deep distrust of the federal government.

In 2011, the center tracked 1,274 of those groups, up from 824 the year before.

“They represent both a kind of right-wing populist rage and a left-wing populist rage that has gotten all mixed up in anger toward the government,” said Mark Potok of the Southern Poverty Law Center and the author of the report.

The center, based in Montgomery, Ala., records only groups that are active, meaning that the groups are registering members, passing out fliers, protesting or showing other signs of activity beyond maintaining a Web site.

The Occupy movement is not on the list because its participants as a collective do not meet the center’s criteria for an extremist group, Mr. Potok said.

One of the groups that was moved from the “patriot” list to the hate group list this year is the Georgia Militia, some of whose members were indicted last year in a failed plot to blow up government buildings and spread poison along Atlanta freeways. They were reclassified because their speech includes anti-Semitism.

The far-right patriot movement gained steam in 1994 after the government used violence to shut down groups at Ruby Ridge, Idaho, and Waco, Tex. It peaked after the 1995 Oklahoma City bombing and began to fade. Its rise began anew in 2008, after the election of Mr. Obama and the beginning of the recession.

There have been declines in some hate groups, including native extremist groups like the Militiamen, which focused on illegal immigration. Chapters of the Ku Klux Klan fell to 152, from 221.

Among the states with the most active hate groups were California, Florida, Georgia, New Jersey and New York. The federal government does not focus on groups that engage in hate-based speech, but rather monitors paramilitary groups and others that have shown some indication of violence, said Daryl Johnson, a former senior domestic terrorism analyst for the Department of Homeland Security.

The Justice Department does not comment on the center’s annual report, but a spokeswoman said the agency had increased prosecution of hate crimes by 35 percent during the first three years of Mr. Obama’s presidency.

A version of this article appeared in print on March 8, 2012, on page A17 of the New York edition with the headline: Number of U.S. Hate Groups Is Rising, Report Says.

Could Many Universities Follow Borders Bookstores Into Oblivion? (The Chronicle of Higher Education)

March 7, 2012, 7:44 pm
By Marc Parry

Atlanta — Higher education’s spin on the Silicon Valley garage. That was the vision laid out in September, when the Georgia Institute of Technology announced a new lab for disruptive ideas, the Center for 21st Century Universities. During a visit to Atlanta last week, I checked in to see how things were going, sitting down with Richard A. DeMillo, the center’s director and Georgia Tech’s former dean of computing, and Paul M.A. Baker, the center’s associate director. We talked about challenges and opportunities facing colleges at a time of economic pain and technological change—among them the chance that many universities might follow Borders Bookstores into oblivion.

Q. You recently wrote that universities are “bystanders” at the revolution happening around them, even as they think they’re at the center of it. How so?

Mr. DeMillo: It’s the same idea as the news industry. Local newspapers survived most of the last century on profits from classified ads. And what happened? Craigslist drove profits out of classified ads for local newspapers. If you think that it’s all revolving around you, and you’re going to be able to impose your value system on this train that’s leaving the station, that’s going to lead you to one set of decisions. Think of Carnegie Mellon, with its “Four Courses, Millions of Users” idea [which became the Open Learning Initiative], or Yale with the humanities courses, thinking that what the market really wants is universal access to these four courses at the highest quality. And really what the market is doing is something completely different. The higher-education market is reinventing what a university is, what a course is, what a student is, what the value is. I don’t know why anyone would think that the online revolution is about reproducing the classroom experience.

Q. So what is the revolution about?

Mr. DeMillo: You don’t know where events are going to take higher education. But if you want to be an important institution 20 years from now, you have to position yourself so that you can adapt to whatever those technology changes are. Whenever you have this kind of technological change, where there’s a large incumbency, the incumbents are inherently at a disadvantage. And we’re the incumbents.

Q. What are some of the most important changes happening now?

Mr. DeMillo: What you’re seeing, for example, is technology enabling a single master teacher to reach students on an individualized basis on a scale that is unprecedented. So when Sebastian Thrun offers his Intro to Robotics course and gets 150,000 students—that’s a big deal.

Why is it a big deal? Well, because people who want to learn robotics want to learn from the master. And there’s something about the medium that he uses that makes that connection intimate. It’s not the same kind of connection that you get by pointing a camera at the front of the room and letting someone write on a whiteboard. These guys have figured out how to design a way of explaining the material that connects with people at scale. So Stanford all of a sudden becomes a place with a network of stakeholders that’s several orders of magnitude larger than it was 10 years ago. Every one of those students in India who wants to connect to Stanford—connect to a mentor—now has a way to connect by bypassing their local institutions. Every institution that can’t offer a robotics course now has a way of offering a robotics course.

I think what you see happening now with the massive open courses is going to fundamentally change the business models. It’s going to put the notion of value front and center. Why would I want a credential from this university? Why would I want to pay tuition to this university? It really ups the stakes.

Mr. Baker: There used to be something called Borders, you may remember. Think of Borders, the bookstore, “X, Y, Z University,” the bookstore. If you’ve got Amazon as an analogue for these massively open courses, there is still a model where people actually go into bookstores because sometimes they want to touch, or they like hanging out, or there’s other value offered by that. What it means is that the university needs to rethink what it’s doing, how it’s doing it.

And how it innovates in a way of surviving in the face of this. If I can do the Amazon equivalent of this open course, why should I come here? Well, maybe you shouldn’t. And that’s a client that is lost.

Mr. DeMillo: All you have to do is add up the amount of money spent on courses. Just take an introduction to computer science. Add up the amount of money that’s spent nationwide on introductory programming courses. It’s a big number, I’ll bet. What is the value received for that spend? If, in fact, there’s a large student population that can be served by a higher-quality course, what’s the argument for spending all that money on 6,000 introduction to programming courses?

Q. You really think that many universities could go the way of Borders?

Mr. DeMillo: Yeah. Well, you can see it already. We lost, in this university system, four institutions this year.

Mr. Baker: The University System of Georgia merged four institutions into other ones that were geographically within 50 miles. The programs essentially were replicated. And in an environment in which you’ve got reduced resources, you can’t afford to have essentially identical programs 50 miles apart.

Q. So what sort of learning landscape do you think might emerge?

Mr. DeMillo: One thing that you might see is highly tuned curricula, students being able to select from a range of things that they want to learn and a range of mentors that they want to interact with, whether you think of it as hacking degrees or pulling assessments from a menu of different universities. What does that mean for the individual university? It means that a university has to figure out where its true value sits in that landscape.

Mr. Baker: Another thing we’re looking at is development of a value index to try to calculate, to be vulgar, the return on investment. Our idea is to try to figure out ways of determining what constitutes value for a student, based on four or five personas. So for, let’s say, a mom returning at 50 who wants an education—she’s going to value certain things differently than a 17-year-old rocket scientist coming to Tech who wants to get through in three years and knows exactly what she wants to do.

Mr. DeMillo: Jeff Selingo wrote a column about this, having one place to go to figure out the economic value of a degree from a university. It’s a great idea, but why focus only on the paycheck as an economic value? There are lots of indicators of value. Do students from this university go on to graduate school in disproportionately large numbers? Do they get fellowships? Are they people who stay in their profession for a long period of time? You start to build up a picture of what students tell you, of what alumni tell you, was the value of that education. Can we pull these metrics together and then say something interesting about our institution and by extension others?

Q. What other projects is your center working on right now?

Mr. DeMillo: The Khan Academy—small bursts of knowledge that may or may not be included in a curriculum—was a really interesting idea.

Can students generate this kind of material in a way that’s useful for other students? That’s the genesis of our TechBurst competition [in which students create short videos that explain a single topic].

It turns out there’s a lot of interest on the part of the students at Georgia Tech in teaching what they know to their peers. The interesting part of the project is the unexpected things that you get. We had a discussion yesterday about mistakes. This is student-generated stuff, so is it right? Not all the time. Which causes great angst on the part of traditionalists, because now we have a Georgia Tech TechBurst video that has errors in it. If these were instructional videos that we were marketing, that would be a very big deal. But they’re not. They’re the start of a thread of conversation among students. There’s one on gerrymandering. So it’s a political-science video, it’s cutely produced, but in some sense it’s not exactly right. And so what you would expect is now other students will come along and annotate that video, and say, well, that’s not exactly what gerrymandering is. And you’ll start to see this students-teaching-students peer-tutoring process taking place in real time.

Q. What about the massive open online course Georgia Tech will run in the fall?

Mr. DeMillo: The idea of a massive open course is something that people normally apply to introductory courses. What happens when you look at a massive open advanced seminar? A seminar room with 10,000 students, 50,000 students—what does that even mean? We’ve got some people here that have been blogging for quite a while about advanced topics. In fact, one of the blogs—Gödel’s Lost Letter, by Professor Dick Lipton of Georgia Tech, and Ken Regan of the University at Buffalo—is about advanced computer theory, so it’s a very mathematical blog. It’s in the top 0.1 percent of WordPress blogs. A typical day is 5,000 to 10,000 page views. A hot day is 100,000. The question is: can we take this blogging format and turn it into an online seminar?

Q. How would that work?

Mr. DeMillo: The blog is essentially an expression of a master teacher’s understanding of a field to people that want to learn about it. We think that there are some very simple layers that can be built under the existing blogging format that can essentially turn it into a massive open online seminar. It’s also a way of conducting scientific research. When you think about what happens in this blog, it celebrates the process of scientific discovery. I’ll just give you one example. Last year about this time some industrial scientist claimed that he had solved one of the outstanding problems in this area. In the normal course of events, the scientist would have written up the paper, would have sent it to a conference. It would have been refereed. Nine months later the paper would have been presented at the conference. People would have talked about it. It would have been written up to submit to a journal. Refereeing would have taken a couple of years for that. Well, the paper got submitted to Lipton’s blog. It just caused a flurry of activity. So thousands and thousands of scientists flocked to this paper, and essentially speeded up the refereeing of the paper, shortening the time from five years to a couple of weeks. It turns out that people came to believe that the claim was not valid, and the paper was incorrect. But what an education for future research students. You get to see the process of scientific discovery in action.

This is an interesting bookend to the idea of a massive open course. Because the people that are thinking about the massive open online courses for introductory material have a set of considerations. Students are at different levels of achievement. Assessment is very important. The credentialing process is dictated by whether or not you want credit. If you go to the other end of the curriculum, and say, well, what happens when we try to do these advanced courses at scale, credentialing is completely different. Assessment is completely different. You can’t rely on the same automation that you could in the introductory courses. Social networks become extremely important if you’re going to do this stuff at scale, because one professor can’t deal with 100,000 readers. He has to have a network of trusted people who would be able to answer questions. The anticipation is that a whole new set of problems would come up with these kinds of courses.

This conversation has been edited and shortened.

The Importance Of Mistakes (NPR)

February 28, 2012
by ADAM FRANK

It takes a lot of cabling to make the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) run at the Gran Sasso National Laboratory (LNGS) in Italy. (Alberto Pizzoli/AFP/Getty Images)

How do people handle the discovery of their own mistake? Some folks might shrug it off. Some folks might minimize its effect. Some folks might even step in with a lie. Most people, we hope, would admit the mistake. But how often do we expect them to announce it to the world from a hilltop? How often do we expect them to tell us — in the clearest language possible — that they screwed up, providing every detail possible about the nature of the mistake?

That’s exactly what’s required in science. As embarrassing as it might seem to most people, admitting a mistake is really the essence of scientific heroism.

Which brings us, first, to faster-than-light neutrinos and then to climate science.

Last week rumors began to circulate that the (potential) discovery of neutrinos traveling faster than the speed of light may get swept into the dustbin of scientific history. The news (rumors really) first circulated via Science Insider.

“According to sources familiar with the experiment, the 60 nanoseconds discrepancy appears to come from a bad connection between a fiber optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight and an electronic card in a computer.”

Oops.

The story goes on to say that once the cable was tightened the Einstein-busting result disappeared. While “sources familiar with the experiment” might not seem enough to start singing funeral dirges (who was the source, Deep Neutrino?), CERN released its own statement that points in a similar direction. No one can say for sure yet, but it appears that the faster-than-light hoopla is likely to go away.

So what are we to make of this? A loose cable seems pretty lame on the face of it. “Dude, everybody with a cable box and a 32-inch flat screen knows you got to check the cable!”

There is no doubt that, as mistakes go, researchers running the neutrino experiments would rather have something a bit sexier to offer if their result were disproven. (How about tiny corrections due to seismic effects?) Still, I’m betting the OPERA experiment had a heck of a lot more cables than your TV, so perhaps we should be more understanding.

More importantly, no matter how it happens, making mistakes is exactly what scientists are supposed to do. “Our whole problem is to make mistakes as fast as possible,” John Wheeler once said.

What makes science so powerful is not just the admission of mistakes but also the detailing of mistakes. While the OPERA group might now wish they had waited a bit longer to make their announcement, there is no shame in the mistake in and of itself. If they step into the spotlight and tell the world what happened, then they deserve to be counted as heroes just as much as if they’d broken Einstein’s theory.

And that is where we can see the connection to climate, evolution and all the other fronts in the ever-expanding war on science. Last week at the AAAS meeting in Vancouver, Nina Fedoroff, a distinguished agricultural scientist and president of that body, made a bold and frightening statement (especially for someone in such a position of authority). Fedoroff told her audience, as The Guardian reported:

“‘We are sliding back into a dark era,’ she said. ‘And there seems little we can do about it. I am profoundly depressed at just how difficult it has become merely to get a realistic conversation started on issues such as climate change or genetically modified organisms.'”

See video: http://bcove.me/ajmi39pd

The spectacle of watching politicians fall over each other to distance themselves from research validated by armies of scientists is more than depressing. Our current understanding of climate, for example, represents the work of thousands of human beings all working to make mistakes as fast as possible, all working to root out error as fast as possible. There is no difference between what happens in climate science or evolutionary biology and any other branch of science.

Honest people asking the best of themselves push forward in their own fields. They watch their work and that of their colleagues closely, always looking for mistakes, cracks in reasoning, subtle flaws in logic. When flaws are found, the process is set in motion: critique, defend, critique, root out. When science deniers trot out the same tired talking points, talking points with no scientific validity, they ignore (or fail to understand) their argument’s lack of credibility.

Eventually, science always finds its mistakes. Eventually we find some kind of truth, unless, of course, mistakes are forced on us from outside of science. That, however, is an error of another kind entirely.