Tag archive: Mediação tecnológica (Technological mediation)

Robots Take Over Economy: Sudden Rise of Global Ecology of Interacting Robots Trade at Speeds Too Fast for Humans (Science Daily)

Sep. 11, 2013 — Recently, the global financial market experienced a series of computer glitches that abruptly brought operations to a halt. One reason for these “flash freezes” may be the sudden emergence of mobs of ultrafast robots, which trade on the global markets and operate at speeds beyond human capability, thus overwhelming the system. The appearance of this “ultrafast machine ecology” is documented in a new study published on September 11 in Nature Scientific Reports.

Typical ultrafast extreme events caused by mobs of computer algorithms operating faster than humans can react. (Credit: Neil Johnson, University of Miami)

The findings suggest that for time scales less than one second, the financial world makes a sudden transition into a cyber jungle inhabited by packs of aggressive trading algorithms. “These algorithms can operate so fast that humans are unable to participate in real time, and instead, an ultrafast ecology of robots rises up to take control,” explains Neil Johnson, professor of physics in the College of Arts and Sciences at the University of Miami (UM), and corresponding author of the study.

“Our findings show that, in this new world of ultrafast robot algorithms, the behavior of the market undergoes a fundamental and abrupt transition to another world where conventional market theories no longer apply,” Johnson says.

Society’s push for faster systems that outpace competitors has led to the development of algorithms capable of operating faster than the response time for humans. For instance, the quickest a person can react to potential danger is approximately one second. Even a chess grandmaster takes around 650 milliseconds to realize that he is in trouble — yet microchips for trading can operate in a fraction of a millisecond (1 millisecond is 0.001 second).

In the study, the researchers assembled and analyzed a high-throughput millisecond-resolution price stream of multiple stocks and exchanges. From January 2006 through February 2011, they found 18,520 extreme events lasting less than 1.5 seconds, including both crashes and spikes.

The team realized that as the duration of these ultrafast extreme events fell below human response times, the number of crashes and spikes increased dramatically. They created a model to understand the behavior and concluded that the events were the product of ultrafast computer trading and not attributable to other factors, such as regulations or mistaken trades. Johnson, who is head of the inter-disciplinary research group on complexity at UM, compares the situation to an ecological environment.
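As an illustration of the kind of analysis described above, the sketch below flags sub-second extreme events in a millisecond-resolution price stream. It is not the authors' code, and the thresholds (a run of at least ten consecutive same-direction ticks, completed within 1,500 milliseconds, that moves the price by at least 0.8 percent) are assumptions chosen for illustration; it is written in Python.

def find_extreme_events(ticks, max_duration_ms=1500, min_move=0.008, min_ticks=10):
    # ticks: a time-ordered list of (timestamp_ms, price) pairs for one stock
    events = []
    i = 0
    while i < len(ticks) - 1:
        direction = 1 if ticks[i + 1][1] > ticks[i][1] else -1
        j = i + 1
        # extend the run while each successive tick moves the price the same way
        while j + 1 < len(ticks) and (ticks[j + 1][1] - ticks[j][1]) * direction > 0:
            j += 1
        duration_ms = ticks[j][0] - ticks[i][0]
        move = abs(ticks[j][1] - ticks[i][1]) / ticks[i][1]
        if j - i + 1 >= min_ticks and duration_ms <= max_duration_ms and move >= min_move:
            events.append(("spike" if direction > 0 else "crash", ticks[i][0], ticks[j][0], move))
        i = j
    return events

Counting how many flagged events fall below a given duration, stock by stock, is the kind of tally that would show the sharp increase in crashes and spikes at sub-second time scales that the team reports.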

“As long as you have the normal combination of prey and predators, everything is in balance, but if you introduce predators that are too fast, they create extreme events,” Johnson says. “What we see with the new ultrafast computer algorithms is predatory trading. In this case, the predator acts before the prey even knows it’s there.”

Johnson explains that in order to regulate these ultrafast computer algorithms, we need to understand their collective behavior. This is a daunting task, but is made easier by the fact that the algorithms that operate below human response times are relatively simple, because simplicity allows faster processing.

“There are relatively few things that an ultrafast algorithm will do,” Johnson says. “This means that they are more likely to start adopting the same behavior, and hence form a cyber crowd or cyber mob which attacks a certain part of the market. This is what gives rise to the extreme events that we observe,” he says. “Our math model is able to capture this collective behavior by modeling how these cyber mobs behave.”

In fact, Johnson believes this new understanding of cyber-mobs may have other important applications outside of finance, such as dealing with cyber-attacks and cyber-warfare.

Journal Reference:

  1. Neil Johnson, Guannan Zhao, Eric Hunsader, Hong Qi, Nicholas Johnson, Jing Meng, Brian Tivnan. Abrupt rise of new machine ecology beyond human response time. Scientific Reports, 2013; 3. DOI: 10.1038/srep02627

Anthropology and the Anthropocene (Anthropology News)

Anthropology and Environment Society

September 2013

Amelia Moore

“The Anthropocene” is a label that is gaining popularity in the natural sciences.  It refers to the pervasive influence of human activities on planetary systems and biogeochemical processes.   Devised by Earth scientists, the term is poised to formally end the Holocene Epoch as the geological categorization for Earth’s recent past, present, and indefinite future.  The term is also poised to become the informal slogan of a revitalized environmental movement that has been plagued by popular indifference in recent years.

Climate change is the most well known manifestation of anthropogenic global change, but it is only one example of an Anthropocene event.  Other examples listed by the Earth sciences include biodiversity loss, changes in planetary nutrient cycling, deforestation, the hole in the ozone layer, fisheries decline, and the spread of invasive species.  This change is said to stem from the growth of the human population and the spread of resource intensive economies since the Industrial Revolution (though the initial boundary marker is in dispute with some scientists arguing for the Post-WWII era and others for the advent of agriculture as the critical tipping point).  Whatever the boundary, the Anthropocene signifies multiple anthropological opportunities.

What stance should we, as anthropologists, take towards the Anthropocene? I argue that there are two (and likely more) equally valid approaches to the Anthropocene: anthropology in the Anthropocene and anthropology of the Anthropocene. Anthropology in the Anthropocene already exists in the form of climate ethnography and work that documents the lived experience of global environmental change. Arguably, ethnographies of protected areas and transnational conservation strategies exemplify this field as well. Anthropology in the Anthropocene is characterized by an active concern for the detrimental effects of anthropogenesis on populations and communities that have been marginalized to bear the brunt of global change impacts or that have been haphazardly caught up in global change solution strategies. This work is engaged with environmental justice and oriented towards political action.

Anthropology of the Anthropocene is much smaller and less well known than anthropology in the Anthropocene, but it will be no less crucial. Existing work in this vein includes that of scholars who take a critical stance towards climate science and politics as social processes with social consequences. Beyond deconstruction, these critical scholars investigate what forms scientific and political assemblages create and how they participate in remaking the world anew. Other existing research in this mode interrogates the idea of biodiversity and the historical and cultural context for the notion of anthropogenesis itself. In the near future, we will see more work that enquires into both the sociocultural and socioecological implications and manifestations of Anthropocene discourse, practice and logic.

I have offered only cursory sketches of anthropology in the Anthropocene and anthropology of the Anthropocene here. However, these modes are not at all mutually exclusive, and they should inspire many possibilities for future work. The centrality of anthropos, the idea of the human, within the logics of the Anthropocene is an invitation for anthropology to renew its engagements with the natural sciences, especially the ecological and Earth sciences, both in research collaborations and by taking those sciences as objects of research.

For starters, we should consider the implications of the Anthropocene idea for our understandings of history and collectivity. If the natural world is finally gaining recognition within the authoritative sciences as intimately interconnected with human life, such that these two worlds cease to be separate arenas of thought and action or take on a different salience, then both the humanities and the natural sciences need to devise more appropriate modes of analysis that can speak to emergent socioecologies. This has begun in anthropology with recent work in environmental health studies, political ecology, and multispecies ethnography, but it is still in its infancy.

In terms of opportunities for legal and political engagement, the Anthropocene signifies possibilities for reconceptualizing environmentalism, conservation and development. Anthropologists should be cognizant of new design paradigms and models for organizing socioecological collectives, from the urban to the small island to the riparian. We should also be on the lookout for new political collaborations and publics that create conversations through multiple avenues of communication, in the academic realm and beyond. Emergent asymmetries in local and transnational markets and the formation of new multi-sited assemblages of governance should be of special importance.

In terms of science, the Anthropocene signals new horizons for studying and participating in global change science.  The rise of interdisciplinary socioecology, the biosciences of coupled natural and human complexity, geoengineering and the biotech interest in de-extinction are just a sampling of important transformations in research practices, research objects, and the shifting boundaries between the lab and the field.  Ongoing scientific reorientation will continue to yield new arguments about emergent forms of life that will participate in the creation of future assemblages, publics, and movements.

I would also like to caution against potentially unhelpful uses of the Anthropocene idea.  The term should not become a brand signifying a specific style of anthropological research.  It should not gloss over rigid solidifications of time, space, the human, or life.  We should not celebrate creativity in the Anthropocene while ignoring instances of stark social differentiation and capital accumulation, just as we should not focus on Anthropocene assemblages as only hegemonic in the oppressive sense.   Further, we should be cautious with our utilization of the crisis rhetoric surrounding events in the Anthropocene, recognizing that crisis for some can be turned into multiple forms of opportunity for others.  Finally, we must admit the possibility that the Anthropocene may not succeed in gaining lasting traction through formal designation or popularization, and we should not overstate its significance by assuming its universal acceptance.

In the next year, the Section News Column of the Anthropology and Environment Society will explore news, events, projects, and arguments from colleagues and students experimenting with various framings of the Anthropocene in addition to its regular content.  If you would like to contribute to this column, please contact Amelia Moore at a.moore4@miami.edu.

Source: http://www.anthropology-news.org/index.php/2013/09/09/anthropology-and-the-anthropocene/

The Myth of ‘Environmental Catastrophism’ (Monthly Review)

Between October 2010 and April 2012, over 250,000 people, including 133,000 children under five, died of hunger caused by drought in Somalia. Millions more survived only because they received food aid. Scientists at the UK Met Office have shown that human-induced climate change made this catastrophe much worse than it would otherwise have been.1

This is only the beginning: the United Nations’ 2013 Human Development Report says that without coordinated global action to avert environmental disasters, especially global warming, the number of people living in extreme poverty could increase by up to 3 billion by 2050.2 Untold numbers of children will die, killed by climate change.

If a runaway train is bearing down on children, simple human solidarity dictates that anyone who sees it should shout a warning, that anyone who can should try to stop it. It is difficult to imagine how anyone could disagree with that elementary moral imperative.

And yet some do. Increasingly, activists who warn that the world faces unprecedented environmental danger are accused of catastrophism—of raising alarms that do more harm than good. That accusation, a standard feature of right-wing attacks on the environmental movement, has recently been advanced by some left-wing critics as well. While they are undoubtedly sincere, their critique of so-called environmental catastrophism does not stand up to scrutiny.

From the Right…

The word “catastrophism” originated in nineteenth-century geology, in the debate between those who believed all geological change had been gradual and those who believed there had been episodes of rapid change. Today, the word is most often used by right-wing climate change deniers for whom it is a synonym for “alarmism.”

  • The Heartland Institute: “Climate Catastrophism Picking Up Again in the U.S. and Across the World.”3
  • A right-wing German blog: “The Climate Catastrophism Cult.”4
  • The Australian journal Quadrant: “The Chilling Costs of Climate Catastrophism.”5

Examples could be multiplied. As environmental historian Franz Mauelshagen writes, “In climate denialist circles, the word ‘climate catastrophe’ has become synonymous with ‘climate lie,’ taking the anthropogenic green house effect for a scam.”6

Those who hold such views like to call themselves “climate change skeptics,” but a more accurate term is “climate science deniers.” While there are uncertainties about the speed of change and its exact effects, there is no question that global warming is driven by greenhouse-gas emissions caused by human activity, and that if business as usual continues, temperatures will reach levels higher than any seen since before human beings evolved. Those who disagree are not skeptical, they are denying the best scientific evidence and analysis available.

The right labels the scientific consensus “catastrophism” to belittle environmentalism, and to stifle consideration of measures to delay or prevent the crisis. The real problem, they imply, is not the onrushing train, but the people who are yelling “get off the track!” Leaving the track would disrupt business as usual, and that is to be avoided at all costs.

…And From the Left

Until very recently, “catastrophism” as a political expression was pretty much the exclusive property of conservatives. When it did occur in left-wing writing, it referred to economic debates, not ecology. But in 2007 two quite different left-wing voices almost simultaneously adopted “catastrophism” as a pejorative term for radical ideas about climate change they disagreed with.

The most prominent was the late Alexander Cockburn, who in 2007 was writing regularly for The Nation and coediting the newsletter CounterPunch. To the shock of many of his admirers, he declared that “There is still zero empirical evidence that anthropogenic production of CO2 is making any measurable contribution to the world’s present warming trend,” and that “the human carbon footprint is of zero consequence.”7 Concern about climate change was, he wrote, the result of a conspiracy “between the Greenhouser fearmongers and the nuclear industry, now largely owned by oil companies.”8

Like critics on the right, Cockburn charged that the left was using climate change to sneak through reforms it could not otherwise win: “The left has bought into environmental catastrophism because it thinks that if it can persuade the world that there is indeed a catastrophe, then somehow the emergency response will lead to positive developments in terms of social and environmental justice.”9

While Cockburn’s assault on “environmental catastrophism” was shocking, his arguments did not add anything new to the climate debate. They were the same criticisms we had long heard from right-wing deniers, albeit with leftish vocabulary.

That was not the case with Leo Panitch and Colin Leys. These distinguished Marxist scholars are by no means deniers. They began their preface to the 2007 Socialist Register by noting that “environmental problems might be so severe as to potentially threaten the continuation of anything that might be considered tolerable human life,” and insisting that “the speed of development of globalized capitalism, epitomized by the dramatic acceleration of climate change, makes it imperative for socialists to deal seriously with these issues now.”

But then they wrote: “Nonetheless, it is important to try to avoid an anxiety-driven ecological catastrophism, parallel to the kind of crisis-driven economic catastrophism that announces the inevitable demise of capitalism.”10 They went on to argue that capitalism’s “dynamism and innovativeness” might enable it to use “green commerce” to escape environmental traps.

The problem with the Panitch–Leys formulation is that the threat of ecological catastrophe is not “parallel” to the view that capitalism will destroy itself. The desire to avoid the kind of mechanical determinism that has often characterized Marxist politics, where every crisis was proclaimed to be the final battle, led these thoughtful writers to confuse two very different kinds of catastrophe.

The idea that capitalism will inevitably face an insurmountable economic crisis and collapse is based on a misunderstanding of Marxist economic theory. While economic crises are endemic to capitalism, the system can always continue—only class struggle, only a social revolution, can overthrow capitalism and end the crisis cycle.

Large-scale environmental damage is caused by our destructive economic system, but its effect is the potentially irreversible disruption of essential natural systems. The most dramatic example is global warming: recent research shows that the earth is now warmer than at any time in the past 6,000 years, and temperatures are rising much faster than at any time since the last Ice Age. Arctic ice and the Greenland ice sheet are disappearing faster than predicted, raising the specter of flooding in coastal areas where more than a billion people live. Extreme weather events, such as giant storms, heat waves, and droughts, are becoming ever more frequent. So many species are going extinct that many scientists call it a mass extinction event, comparable to the time 66 million years ago when 75 percent of all species, including the dinosaurs, were wiped out.

As the editors of Monthly Review wrote in reply to Socialist Register, if these trends continue, “we will be faced with a different world—one in which life on the planet will be massively degraded on a scale not seen for tens of millions of years.”11 To call this “anxiety-driven ecological catastrophism, parallel to…economic catastrophism” is to equate an abstract error in economic theory with some of the strongest conclusions of modern science.

A New ‘Catastrophism’ Critique

Now a new essay, provocatively titled “The Politics of Failure Have Failed,” offers a different and more sweeping left-wing critique of “environmental catastrophism.” Author Eddie Yuen is associated with the Pacifica radio program Against the Grain, and is on the editorial board of the journal Capitalism Nature Socialism.

His paper is part of a broader effort to define and critique a body of political thought called Catastrophism, in a book by that title.12 In the book’s introduction, Sasha Lilley offers this definition:

Catastrophism presumes that society is headed for a collapse, whether economic, ecological, social, or spiritual. This collapse is frequently, but not always, regarded as a great cleansing, out of which a new society will be born. Catastrophists tend to believe that an ever-intensified rhetoric of disaster will awaken the masses from their long slumber—if the mechanical failure of the system does not make such struggles superfluous. On the left, catastrophism veers between the expectation that the worse things become, the better they will be for radical fortunes, and the prediction that capitalism will collapse under its own weight. For parts of the right, worsening conditions are welcomed, with the hope they will trigger divine intervention or allow the settling of scores for any modicum of social advance over the last century.

A political category that includes both the right and the left—and that encompasses people whose concerns might be economic, ecological, social, or spiritual—is, to say the least, unreasonably broad. It is difficult to see any analytical value in a definition that lumps together anarchists, fascists, Christian fundamentalists, right-wing conspiracy nuts, pre–1914 socialists, peak-oil theorists, obscure Trotskyist groups, and even Mao Zedong.

The definition of catastrophism becomes even more problematic in Yuen’s essay.

One Of These Things Is Not Like The Others…

Years ago, the children’s television program Sesame Street would display four items—three circles and a square, three horses and a chair, and so on—while someone sang, “One of these things is not like the others, One of these things doesn’t belong.”

I thought of that when I read Yuen’s essay.

While the book’s scope is broad, most of it focuses, as Yuen writes, on “instrumental, spurious, and sometimes maniacal versions of catastrophism—including rightwing racial paranoia, religious millenarianism, liberal panics over fascism, leftist fetishization of capitalist collapse, capitalist invocation of the ‘shock doctrine’ and pop culture cliché.”

But as Yuen admits in his first paragraph, environmentalism is a very different matter, because we are in “what is unquestionably a genuine catastrophic moment in human and planetary history…. Of all of the forms of catastrophic discourse on offer, the collapse of ecological systems is unique in that it is definitively verified by a consensus within the scientific community…. It is absolutely urgent to address this by effectively and rapidly changing the direction of human society.”

If the science is clear, if widespread ecological collapse unquestionably faces us unless action is taken, why is this topic included in a book devoted to criticizing false ideas? Does it make sense to use the same term for people who believe in an imaginary train crash and for people who are trying to stop a real crash from happening?

The answer, although he does not say so, is that Yuen is using a different definition than the one Lilley gave in her introduction. Her version used the word for the belief that some form of catastrophe will have positive results—that capitalism will collapse from internal contradictions, that God will punish all sinners, that peak oil or industrial collapse will save the planet. Yuen uses the same word for the idea that environmentalists should alert people to the threat of catastrophic environmental change and try to mobilize them to prevent or minimize it.

Thus, when he refers to “a shrill note of catastrophism” in the work of James Hansen, perhaps the world’s leading climate scientist, he is not challenging the accuracy of Hansen’s analysis, but only the “narrative strategy” of clearly stating the probable results of continuing business as usual.

Yuen insists that “the veracity of apocalyptic claims about ecological collapse are separate from their effects on social, political, and economic life.” Although “the best evidence points to cascading environmental disaster,” in his view it is self-defeating to tell people that. He makes two arguments, which we can label “practical” and “principled.”

His practical argument is that by talking about “apocalyptic scenarios” environmentalists have made people more apathetic, less likely to fight for progressive change. His principled argument is that exposing and campaigning to stop tendencies towards environmental collapse has “damaging and rightward-leaning effects”—it undermines the left, promotes reactionary policies and strengthens the ruling class.

In my opinion, he is wrong on both counts.

The Truth Shall Make You Apathetic?

In Yuen’s view, the most important question facing people who are concerned about environmental destruction is: “what narrative strategies are most likely to generate effective and radical social movements?”

He is vague about what “narrative strategies” might work, but he is very firm about what does not. He argues that environmentalists have focused on explaining the environmental crisis and warning of its consequences in the belief that this will lead people to rise up and demand change, but this is a fallacy. In reality, “once convinced of apocalyptic scenarios, many Americans become more apathetic.”

Given such a sweeping assertion, it is surprising to find that the only evidence Yuen offers is a news release describing one academic paper, based on a U.S. telephone survey conducted in 2008, that purported to show that “more informed respondents both feel less personally responsible for global warming, and also show less concern for global warming.”13

Note first that being “more informed” is not the same as being “convinced of apocalyptic scenarios” or being bombarded with “increasingly urgent appeals about fixed ecological tipping points.” On the face of it, this study does not appear to contribute to our understanding of the effects of “catastrophism.”

What’s more, reading the original paper reveals that the people described as “more informed” were self-reporting. If they said they were informed, that was accepted, and no one asked if they were listening to climate scientists or to conservative talk radio. That makes the paper’s conclusion meaningless.

Later in his essay, Yuen correctly criticizes some environmentalists and scientists who “speak of ‘everyone’ as a unified subject.” But here he accepts as credible a study that purports to show how all Americans respond to information about climate change, regardless of class, gender, race, or political leanings.

The problem with such undifferentiated claims is shown in a 2011 study that examined the impact of Americans’ political opinions on their feelings about climate change. It found that liberals and Democrats who report being well-informed are more worried about climate change, while conservatives and Republicans who report being well-informed are less worried.14 Obviously the two groups mean very different things by “well-informed.”

Even if we ignore that, the study Yuen cites is a one-time snapshot—it does not tell us what radicals really need to know, which is how things are changing. For that, a more useful survey is one that scientists at Yale University and George Mason University have conducted seven times since 2008 to show shifts in U.S. public opinion.15 Based on answers to questions about their opinions, respondents are categorized according to their attitude towards global warming. The surveys show:

  • The number of people identified as “Disengaged” or “Cautious”—those we might call apathetic or uncertain—has varied very little, accounting for between 31 percent and 35 percent of the respondents every time.
  • The categories “Dismissive” or “Doubtful”—those who lean towards denial—increased between 2008 and 2010. Since then, those groups have shrunk back almost to the 2008 level.
  • In parallel, the combined “Concerned” and “Alarmed” groups shrank between 2008 and 2010, but have since largely recovered. In September 2012—before Hurricane Sandy!—there were more than twice as many Americans in these two categories as in Dismissive/Doubtful.

Another study, published in the journal Climatic Change, used seventy-four independent surveys conducted between 2002 and 2011 to create a Climate Change Threat Index (CCTI)—a measure of public concern about climate change—and showed how it changed in response to public events. It found that public concern about climate change reached an all-time high in 2006–2007, when the Al Gore documentary An Inconvenient Truth was seen in theaters by millions of people and won an Academy Award.

The authors conclude: “Our results…show that advocacy efforts produce substantial changes in public perceptions related to climate change. Specifically, the film An Inconvenient Truth and the publicity surrounding its release produced a significant positive jump in the CCTI.”16

This directly contradicts Yuen’s view that more information about climate change causes Americans to become more apathetic. There is no evidence of a long-term increase in apathy or decrease in concern—and when scientific information about climate change reached millions of people, the result was not apathy but a substantial increase in support for action to reduce greenhouse gas emissions.

‘The Two Greatest Myths’

Yuen says environmentalists have deluged Americans with catastrophic warnings, and this strategy has produced apathy, not action. Writing of establishment politicians who make exactly the same claim, noted climate change analyst Joseph Romm says, “The two greatest myths about global warming communications are 1) constant repetition of doomsday messages has been a major, ongoing strategy and 2) that strategy doesn’t work and indeed is actually counterproductive!” Contrary to liberal mythology, the North American public has not been exposed to anything even resembling the first claim. Romm writes,

The broad American public is exposed to virtually no doomsday messages, let alone constant ones, on climate change in popular culture (TV and the movies and even online)…. The major energy companies bombard the airwaves with millions and millions of dollars of repetitious pro-fossil-fuel ads. The environmentalists spend far, far less money…. Environmentalists when they do appear in popular culture, especially TV, are routinely mocked…. It is total BS that somehow the American public has been scared and overwhelmed by repeated doomsday messaging into some sort of climate fatigue.17

The website Daily Climate, which tracks U.S. news stories about climate change, says coverage peaked in 2009, during the Copenhagen talks—but then it “fell off the map,” dropping 30 percent in 2010 and another 20 percent in 2011. In 2012, despite widespread droughts and Hurricane Sandy, news coverage fell another 2 percent. The decline in editorial interest was even more dramatic—in 2012 newspapers published fewer than half as many editorials about climate change as they did in 2009.18

It should be noted that these shifts occurred in the framework of very limited news coverage of climate issues. As a leading media analyst notes, “relative to other issues like health, medicine, business, crime and government, media attention to climate change remains a mere blip.”19 Similarly, a British study describes coverage of climate change in newspapers there as “lamentably thin”—a problem exacerbated by the fact that much of the coverage consists of “worryingly persistent climate denial stories.” The author concludes drily: “The limited coverage is unlikely to have convinced readers that climate change is a serious problem warranting immediate, decisive and potentially costly action.”20

Given Yuen’s concern that Americans do not recognize the seriousness of environmental crises, it is surprising how little he says about the massive fossil-fuel-funded disinformation campaigns that have confused and distorted media reporting. I can find just four sentences on the subject in his 9,000-word text, and not one that suggests denialist campaigns might have helped undermine efforts to build a climate change movement.

On the contrary, he downplays the influence of “the well-funded climate denial lobby,” by claiming that “far more corporate and elite energy has gone toward generating anxiety about global warming,” and that “mainstream climate science is much better funded.” He provides no evidence for either statement.

Of course, the fossil-fuel lobby is not the only force working to undermine public concern about climate change. It is also important to recognize the impact of Obama’s predictable unwillingness to confront the dominant forces in U.S. capitalism, and of the craven failure of mainstream environmentalist groups and NGOs to expose and challenge the Democrats’ anti-environmental policies.

With fossil-fuel denialists on one side, and Obama’s pale-green cheerleaders on the other, activists who want to get out the truth have barely been heard. In that context, it makes little sense to blame environmentalists for sabotaging environmentalism.

The Truth Will Help the Right?

Halfway through his essay, Yuen abruptly changes direction, leaving the practical argument behind and raising his principled concern. He now argues that what he calls catastrophism leads people to support reactionary policies and promotes “the most authoritarian solutions at the state level.” Focusing attention on what he agrees is a “cascading environmental disaster” is dangerous because it “disables the left but benefits the right and capital.” He says, “Increased awareness of environmental crisis will not likely translate into a more ecological lifestyle, let alone an activist orientation against the root causes of environmental degradation. In fact, right-wing and nationalist environmental politics have much more to gain from an embrace of catastrophism.”

Yuen says that many environmentalists, including scientists, “reflexively overlook class divisions,” and so do not realize that “some business and political elites feel that they can avoid the worst consequences of the environmental crisis, and may even be able to benefit from it.” Yuen apparently thinks those elites are right—while the insurance industry is understandably worried about big claims, he says, “the opportunities for other sectors of capitalism are colossal in scope.”

He devotes much of the rest of his essay to describing the efforts of pro-capitalist forces, conservative and liberal, to use concern about potential environmental disasters to promote their own interests, ranging from emissions trading schemes to military expansion to Malthusian attacks on the world’s poorest people. “The solution offered by global elites to the catastrophe is a further program of austerity, belt-tightening, and sacrifice, the brunt of which will be borne by the world’s poor.”

Some of this is overstated. His claim that “Malthusianism is at the core of most environmental discourse” reflects either a very limited view of environmentalism or an excessively broad definition of Malthusianism. And he seems to endorse David Noble’s bizarre theory that public concern about global warming has been engineered by a corporate conspiracy to promote carbon trading schemes.21 Nevertheless, he is correct that the ruling class will do its best to profit from concern about climate change, while simultaneously offloading the costs onto the world’s poorest people.

The question is, who is he arguing with? This book says it aims to “spur debate among radicals,” but none of this is new or controversial for radicals. The insight that the interests of the ruling class are usually opposed to the interests of the rest of us has been central to left-wing thought since before Marx was born. Capitalists always try to turn crises to their advantage no matter who gets hurt, and they always try to offload the costs of their crises onto the poor and oppressed.

What needs to be proved is not that pro-capitalist forces are trying to steer the environmental movement into profitable channels, and not that many sincere environmentalists have backward ideas about the social and economic causes of ecological crises. Radicals who are active in green movements know those things perfectly well. What needs to be proved is Yuen’s view that warning about environmental disasters and campaigning to prevent them has “damaging and rightward-leaning effects” that are so severe that radicals cannot overcome them.

But no proof is offered.

What is particularly disturbing about his argument is that he devotes pages to describing the efforts of reactionaries to misdirect concern about climate change—and none to the efforts of radical environmentalists to counter those forces. Earlier in his essay, he mentioned that “environmental and climate justice perspectives are steadily gaining traction in internal environmental debates,” but those thirteen words are all he has to say on the subject.

He says nothing about the historic 2010 Cochabamba Conference, where 30,000 environmental activists from 140 countries warned that if greenhouse gas emissions are not stopped, “the damages caused to our Mother Earth will be completely irreversible”—a statement Yuen would doubtless label “catastrophist.” Far from succumbing to apathy or reactionary policies, the participants explicitly rejected market solutions, identified capitalism as the cause of the crisis, and outlined a radical program to transform the global economy.

He is equally silent about the campaign against the fraudulent “green economy” plan adopted at last year’s Rio+20 conference. One of the principal organizers of that opposition is La Via Campesina, the world’s largest organization of peasants and farmers, which warns that the world’s governments are “propagating the same capitalist model that caused climate chaos and other deep social and environmental crises.”

His essay contains not a word about Idle No More, or Occupy, or the Indigenous-led fight against Canada’s tar sands, or the anti-fracking and anti-coal movements. By omitting them, Yuen leaves the false impression that the climate movement is helpless to resist reactionary forces.

Contrary to Yuen’s title, the effort to build a movement to save the planet has not failed. Indeed, Catastrophism was published just four months before the largest U.S. climate change demonstration ever!

The question before radicals is not what “narrative strategy” to adopt, but rather, how will we relate to the growing environmental movement? How will we support its goals while strengthening the forces that see the need for more radical solutions?

What Must Be Done?

Yuen opposes attempts to build a movement around rallies, marches, and other mass protests to get out the truth and to demand action against environmental destruction. He says that strategy worked in the 1960s, when Americans were well-off and naïve, but cannot be replicated in today’s “culture of atomized cynicism.”

Like many who know that decade only from history books or as distant memories, Yuen foreshortens the experience: he knows about the mass protests and dissent late in the decade, but ignores the many years of educational work and slow movement building in a deeply reactionary and racist time. It is not predetermined that the campaign against climate change will take as long as those struggles, or take similar forms, but the real experience of the 1960s should at least be a warning against premature declarations of failure.

Yuen is much less explicit about what he thinks would be an effective strategy, but he cites as positive examples the efforts of some to promote “a bottom-up and egalitarian transition” by:

ever-increasing numbers of people who are voluntarily engaging in intentional communities, sustainability projects, permaculture and urban farming, communing and militant resistance to consumerism…we must consider the alternative posed by the highly imaginative Italian left of the twentieth century. The explosively popular Slow Food movement was originally built on the premise that a good life can be had not through compulsive excess but through greater conviviality and a shared commonwealth.

Compare that to this list of essential tasks, prepared recently by Pablo Solón, a leading figure in the global climate justice movement:

To reduce greenhouse gas emissions to a level that avoids catastrophe, we need to:

* Leave more than two-thirds of the fossil fuel reserves under the soil;

* Stop the exploitation of tar sands, shale gas and coal;

* Support small, local, peasant and indigenous community farming while we dismantle big agribusiness that deforests and heats the planet;

* Promote local production and consumption of products, reducing the free trade of goods that send millions of tons of CO2 while they travel around the world;

* Stop extractive industries from further destroying nature and contaminating our atmosphere and our land;

* Increase significantly public transport to reduce the unsustainable “car way of life”;

* Reduce the emissions of warfare by promoting genuine peace and dismantling the military and war industry and infrastructure.22

The projects that Yuen describes are worthwhile, but unless the participants are also committed to building mass environmental campaigns, they will not be helping to achieve the vital objectives that Solón identifies. Posing local communes and slow food as alternatives to building a movement against global climate change is effectively a proposal to abandon the fight against capitalist ecocide in favor of creating greenish enclaves, while the world burns.

Bright-siding versus Movement Building

Whatever its merits in other contexts, it is not helpful or appropriate to use the word catastrophism as a synonym for telling the truth about the environmental dangers we face. Using the same language as right-wing climate science deniers gives the impression that the dangers are non-existent or exaggerated. Putting accurate environmental warnings in the same category as apocalyptic Christian fundamentalism and century-old misreadings of Marxist economic theory leads to underestimation of the threats we face and directs efforts away from mobilizing an effective counterforce.

Yuen’s argument against publicizing the scientific consensus on climate change echoes the myth that liberal politicians and journalists use to justify their failure to challenge the crimes of the fossil-fuel industry. People are tired of all that doom and gloom, they say. It is time for positive messages! Or, to use Yuen’s vocabulary, environmentalists need to end “apocalyptic rhetoric” and find better “narrative strategies.”

This is fundamentally an elitist position: the people cannot handle the truth, so a knowledgeable minority must sugarcoat it, to make the necessary changes palatable.

David Spratt of the Australian organization Climate Code Red calls that approach “bright-siding,” a reference to the bitterly satirical Monty Python song, “Always Look on the Bright Side of Life.”

The problem is, Spratt writes: “If you avoid including an honest assessment of climate science and impacts in your narrative, it’s pretty difficult to give people a grasp about where the climate system is heading and what needs to be done to create the conditions for living in climate safety, rather than increasing and eventually catastrophic harm.”23 Joe Romm makes the same point: “You’d think it would be pretty obvious that the public is not going to be concerned about an issue unless one explains why they should be concerned.”24

Of course, this does not mean that we only need to explain the science. We need to propose concrete goals, as Pablo Solón has done. We need to show how the scientific consensus about climate change relates to local and national concerns such as pipelines, tar sands, fracking, and extreme weather. We need to work with everyone who is willing to confront any aspect of the crisis, from people who still have illusions about capitalism to convinced revolutionaries. Activists in the wealthy countries must be unstinting in their political and practical solidarity with the primary victims of climate change, indigenous peoples, and impoverished masses everywhere.

We need to do all of that and more.

But the first step is to tell the truth—about the danger we face, about its causes, and about the measures that must be taken to turn back the threat. In a time of universal deceit, telling the truth is a revolutionary act.

Notes

  1.  Fraser C. Lott, Nikolaos Christidis, and Peter A. Stott, “Can the 2011 East African Drought Be Attributed to Human-Induced Climate Change?,” Geophysical Research Letters 40, no. 6 (March 2013): 1177–81.
  2.  UNDP, “’Rise of South’ Transforming Global Power Balance, Says 2013 Human Development Report,” March 14, 2013, http://undp.org.
  3.  Tom Harris, “Climate Catastrophism Picking Up Again in the U.S. and Across the World,” Somewhat Reasonable, October 10, 2012, http://blog.heartland.org.
  4.  Pierre Gosselin, “The Climate Catastrophism Cult,” NoTricksZone, February 12, 2011, http://notrickszone.com.
  5.  Ray Evans, “The Chilling Costs of Climate Catastrophism,” Quadrant Online, June 2008. http://quadrant.org.au.
  6.  Franz Mauelshagen, “Climate Catastrophism: The History of the Future of Climate Change,” in Andrea Janku, Gerrit Schenk, and Franz Mauelshagen, Historical Disasters in Context: Science, Religion, and Politics (New York: Routledge, 2012), 276.
  7.  Alexander Cockburn, “Is Global Warming a Sin?,” CounterPunch, April 28–30, 2007, http://counterpunch.org.
  8.  Alexander Cockburn, “Who are the Merchants of Fear?,” CounterPunch, May 12–14, 2007, http://counterpunch.org.
  9.  Alexander Cockburn, “I Am An Intellectual Blasphemer,” Spiked Review of Books, January 9, 2008, http://spiked-online.com.
  10.  Leo Panitch and Colin Leys, “Preface,” Socialist Register 2007: Coming to Terms With Nature (London: Merlin Press/Monthly Review Press, 2006), ix–x.
  11.  “Notes from the Editors,” Monthly Review 58, no. 10 (March 2007), http://monthlyreview.org.
  12.  Sasha Lilley, David McNally, Eddie Yuen, and James Davis, Catastrophism: The Apocalyptic Politics of Collapse and Rebirth (Oakland: PM Press, 2012).
  13.  Yuen’s footnote cites an article which is identical to a news release issued the previous day by Texas A&M University; see “Increased Knowledge About Global Warming Leads to Apathy, Study Shows,” Science Daily, March 28, 2008, http://eurekalert.org. The original paper, which Yuen does not cite, is: P.M. Kellstedt, S. Zahran, and A. Vedlitz, “Personal Efficacy, the Information Environment, and Attitudes Towards Global Warming and Climate Change in the United States,” Risk Analysis 28, no. 1 (2008): 113–26.
  14.  Aaron M. McCright and Riley E. Dunlap, “The Politicization of Climate Change and Polarization in the American Public’s Views of Global Warming, 2001–2010,” The Sociological Quarterly 52 (2011): 155–94.
  15.  A. Leiserowitz, et al., Global Warming’s Six Americas, September 2012 (New Haven, CT: Yale Project on Climate Change Communication, 2013), http://environment.yale.edu.
  16.  Robert J. Brulle, Jason Carmichael, and J. Craig Jenkins, “Shifting Public Opinion on Climate Change: An Empirical Assessment of Factors Influencing Concern Over Climate Change in the U.S., 2002–2010,” Climatic Change 114, no. 2 (September 2012): 169–88.
  17.  Joe Romm, “Apocalypse Not: The Oscars, The Media and the Myth of ‘Constant Repetition of Doomsday Messages’ on Climate,” Climate Progress, February 24, 2013, http://thinkprogress.org.
  18.  Douglas Fischer, “2010 in Review: The Year Climate Coverage ‘Fell off the Map,’” Daily Climate, January 3, 2011, http://dailyclimate.org; “Climate Coverage Down Again in 2011,” Daily Climate, January 3, 2012, http://dailyclimate.org; “Climate Coverage, Dominated by Weird Weather, Falls Further in 2012,” Daily Climate, January 2, 2013, http://dailyclimate.org.
  19.  Maxwell T. Boykoff, Who Speaks for the Climate?: Making Sense of Media Reporting on Climate Change (Cambridge: Cambridge University Press, 2011), 24.
  20.  Neil T. Gavin, “Addressing Climate Change: A Media Perspective,” Environmental Politics 18, no. 5 (September 2009): 765–80.
  21.  Two responses to David Noble are: Derrick O’Keefe, “Denying Time and Place in the Global Warming Debate,” Climate & Capitalism, June 7, 2007, http://climateandcapitalism.com; Justin Podur, “Global Warming Suspicions and Confusions,” ZNet, May 11, 2007, http://zcommunications.org.
  22.  Pablo Solón, “A Contribution to the Climate Space 2013: How to Overcome the Climate Crisis?,” Climate Space, March 14, 2013, http://climatespace2013.wordpress.com.
  23.  David Spratt, Always Look on the Bright Side of Life: Bright-siding Climate Advocacy and Its Consequences, April 2012, http://climatecodered.org.
  24.  Joe Romm, “Apocalypse Not.”

Ian Angus is editor of the online journal Climate & Capitalism. He is co-author of Too Many People? Population, Immigration, and the Environmental Crisis (Haymarket, 2011), and editor of The Global Fight for Climate Justice (Fernwood, 2010).
He would like to thank Simon Butler, Martin Empson, John Bellamy Foster, John Riddell, Javier Sethness, and Chris Williams for comments and suggestions.

Rising Seas (Nat Geo)

Picture of Seaside Heights, New Jersey, after Hurricane Sandy

As the planet warms, the sea rises. Coastlines flood. What will we protect? What will we abandon? How will we face the danger of rising seas?

By Tim Folger

Photographs by George Steinmetz

September 2013

By the time Hurricane Sandy veered toward the Northeast coast of the United States last October 29, it had mauled several countries in the Caribbean and left dozens dead. Faced with the largest storm ever spawned over the Atlantic, New York and other cities ordered mandatory evacuations of low-lying areas. Not everyone complied. Those who chose to ride out Sandy got a preview of the future, in which a warmer world will lead to inexorably rising seas.

Brandon d’Leo, a 43-year-old sculptor and surfer, lives on the Rockaway Peninsula, a narrow, densely populated, 11-mile-long sandy strip that juts from the western end of Long Island. Like many of his neighbors, d’Leo had remained at home through Hurricane Irene the year before. “When they told us the tidal surge from this storm would be worse, I wasn’t afraid,” he says. That would soon change.

D’Leo rents a second-floor apartment in a three-story house across the street from the beach on the peninsula’s southern shore. At about 3:30 in the afternoon he went outside. Waves were crashing against the five-and-a-half-mile-long boardwalk. “Water had already begun to breach the boardwalk,” he says. “I thought, Wow, we still have four and a half hours until high tide. In ten minutes the water probably came ten feet closer to the street.”

Back in his apartment, d’Leo and a neighbor, Davina Grincevicius, watched the sea as wind-driven rain pelted the sliding glass door of his living room. His landlord, fearing the house might flood, had shut off the electricity. As darkness fell, Grincevicius saw something alarming. “I think the boardwalk just moved,” she said. Within minutes another surge of water lifted the boardwalk again. It began to snap apart.

Three large sections of the boardwalk smashed against two pine trees in front of d’Leo’s apartment. The street had become a four-foot-deep river, as wave after wave poured water onto the peninsula. Cars began to float in the churning water, their wailing alarms adding to the cacophony of wind, rushing water, and cracking wood. A bobbing red Mini Cooper, its headlights flashing, became wedged against one of the pine trees in the front yard. To the west the sky lit up with what looked like fireworks—electrical transformers were exploding in Breezy Point, a neighborhood near the tip of the peninsula. More than one hundred homes there burned to the ground that night.

The trees in the front yard saved d’Leo’s house, and maybe the lives of everyone inside—d’Leo, Grincevicius, and two elderly women who lived in an apartment downstairs. “There was no option to get out,” d’Leo says. “I have six surfboards in my apartment, and I was thinking, if anything comes through the wall, I’ll try to get everyone on those boards and try to get up the block. But if we’d had to get in that water, it wouldn’t have been good.”

After a fitful night’s sleep d’Leo went outside shortly before sunrise. The water had receded, but thigh-deep pools still filled parts of some streets. “Everything was covered with sand,” he says. “It looked like another planet.”

A profoundly altered planet is what our fossil-fuel-driven civilization is creating, a planet where Sandy-scale flooding will become more common and more destructive for the world’s coastal cities. By releasing carbon dioxide and other heat-trapping gases into the atmosphere, we have warmed the Earth by more than a full degree Fahrenheit over the past century and raised sea level by about eight inches. Even if we stopped burning all fossil fuels tomorrow, the existing greenhouse gases would continue to warm the Earth for centuries. We have irreversibly committed future generations to a hotter world and rising seas.

In May the concentration of carbon dioxide in the atmosphere reached 400 parts per million, the highest since three million years ago. Sea levels then may have been as much as 65 feet above today’s; the Northern Hemisphere was largely ice free year-round. It would take centuries for the oceans to reach such catastrophic heights again, and much depends on whether we manage to limit future greenhouse gas emissions. In the short term scientists are still uncertain about how fast and how high seas will rise. Estimates have repeatedly been too conservative.

Global warming affects sea level in two ways. About a third of its current rise comes from thermal expansion—from the fact that water expands as it warms. The rest comes from the melting of ice on land. So far it’s been mostly mountain glaciers, but the big concern for the future is the giant ice sheets in Greenland and Antarctica. Six years ago the Intergovernmental Panel on Climate Change (IPCC) issued a report predicting a maximum of 23 inches of sea-level rise by the end of this century. But that report intentionally omitted the possibility that the ice sheets might flow more rapidly into the sea, on the grounds that the physics of that process was poorly understood.

As the IPCC prepares to issue a new report this fall, in which the sea-level forecast is expected to be slightly higher, gaps in ice-sheet science remain. But climate scientists now estimate that Greenland and Antarctica combined have lost on average about 50 cubic miles of ice each year since 1992—roughly 200 billion metric tons of ice annually. Many think sea level will be at least three feet higher than today by 2100. Even that figure might be too low.
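As a rough check on those figures, the conversion from ice volume to mass and to global sea-level rise can be worked out in a few lines of Python. The constants below are standard approximations, not taken from the article: glacial ice at about 917 kilograms per cubic meter and a global ocean surface of about 3.61 x 10^14 square meters.

MILE_M = 1609.344          # meters per mile
ICE_DENSITY = 917.0        # kg per cubic meter of glacial ice (approximate)
WATER_DENSITY = 1000.0     # kg per cubic meter of meltwater
OCEAN_AREA_M2 = 3.61e14    # approximate global ocean surface area

ice_volume_m3 = 50 * MILE_M ** 3                        # about 50 cubic miles lost per year
ice_mass_kg = ice_volume_m3 * ICE_DENSITY
gigatonnes = ice_mass_kg / 1e12                         # 1 billion metric tons = 1e12 kg
rise_mm = ice_mass_kg / (WATER_DENSITY * OCEAN_AREA_M2) * 1000

print(f"{gigatonnes:.0f} billion metric tons of ice per year")   # ~190, i.e. "roughly 200"
print(f"{rise_mm:.2f} mm of global sea-level rise per year")     # ~0.5 mm per year

Half a millimeter a year from the ice sheets is a small part of the roughly eight inches of total rise over the past century, which fits the article's point that thermal expansion and mountain glaciers have dominated so far; the worry is that the ice-sheet share is accelerating.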

“In the last several years we’ve observed accelerated melting of the ice sheets in Greenland and West Antarctica,” says Radley Horton, a research scientist at Columbia University’s Earth Institute in New York City. “The concern is that if the acceleration continues, by the time we get to the end of the 21st century, we could see sea-level rise of as much as six feet globally instead of two to three feet.” Last year an expert panel convened by the National Oceanic and Atmospheric Administration adopted 6.6 feet (two meters) as its highest of four scenarios for 2100. The U.S. Army Corps of Engineers recommends that planners consider a high scenario of five feet.

One of the biggest wild cards in all sea-level-rise scenarios is the massive Thwaites Glacier in West Antarctica. Four years ago NASA sponsored a series of flights over the region that used ice-penetrating radar to map the seafloor topography. The flights revealed that a 2,000-foot-high undersea ridge holds the Thwaites Glacier in place, slowing its slide into the sea. A rising sea could allow more water to seep between ridge and glacier and eventually unmoor it. But no one knows when or if that will happen.

“That’s one place I’m really nervous about,” says Richard Alley, a glaciologist at Penn State University and an author of the last IPCC report. “It involves the physics of ice fracture that we really don’t understand.” If the Thwaites Glacier breaks free from its rocky berth, that would liberate enough ice to raise sea level by three meters—nearly ten feet. “The odds are in our favor that it won’t put three meters in the ocean in the next century,” says Alley. “But we can’t absolutely guarantee that. There’s at least some chance that something very nasty will happen.”

Even in the absence of something very nasty, coastal cities face a twofold threat: Inexorably rising oceans will gradually inundate low-lying areas, and higher seas will extend the ruinous reach of storm surges. The threat will never go away; it will only worsen. By the end of the century a hundred-year storm surge like Sandy’s might occur every decade or less. Using a conservative prediction of a half meter (20 inches) of sea-level rise, the Organisation for Economic Co-operation and Development estimates that by 2070, 150 million people in the world’s large port cities will be at risk from coastal flooding, along with $35 trillion worth of property—an amount that will equal 9 percent of the global GDP. How will they cope?

“During the last ice age there was a mile or two of ice above us right here,” says Malcolm Bowman, as we pull into his driveway in Stony Brook, New York, on Long Island’s north shore. “When the ice retreated, it left a heap of sand, which is Long Island. All these rounded stones you see—look there,” he says, pointing to some large boulders scattered among the trees near his home. “They’re glacial boulders.”

Bowman, a physical oceanographer at the State University of New York at Stony Brook, has been trying for years to persuade anyone who will listen that New York City needs a harbor-spanning storm-surge barrier. Compared with some other leading ports, New York is essentially defenseless in the face of hurricanes and floods. London, Rotterdam, St. Petersburg, New Orleans, and Shanghai have all built levees and storm barriers in the past few decades. New York paid a high price for its vulnerability last October. Sandy left 43 dead in the city, of whom 35 drowned; it cost the city some $19 billion. And it was all unnecessary, says Bowman.

“If a system of properly designed storm-surge barriers had been built—and strengthened with sand dunes at both ends along the low-lying coastal areas—there would have been no flooding damage from Sandy,” he says.

Bowman envisions two barriers: one at Throgs Neck, to keep surges from Long Island Sound out of the East River, and a second one spanning the harbor south of the city. Gates would accommodate ships and tides, closing only during storms, much like existing structures in the Netherlands and elsewhere. The southern barrier alone, stretching five miles between Sandy Hook, New Jersey, and the Rockaway Peninsula, might cost $10 billion to $15 billion, Bowman estimates. He pictures a six-lane toll highway on top that would provide a bypass route around the city and a light-rail line connecting the Newark and John F. Kennedy Airports.

“It could be an asset to the region,” says Bowman. “Eventually the city will have to face up to this, because the problem is going to get worse. It might take five years of study and another ten years to get the political will to do it. By then there might have been another disaster. We need to start planning immediately. Otherwise we’re mortgaging the future and leaving the next generation to cope as best it can.”

Another way to safeguard New York might be to revive a bit of its past. In the 16th-floor loft of her landscape architecture firm in lower Manhattan, Kate Orff pulls out a map of New York Harbor in the 19th century. The present-day harbor shimmers outside her window, calm and unthreatening on an unseasonably mild morning three months to the day after Sandy hit.

“Here’s an archipelago that protected Red Hook,” Orff says, pointing on the map to a small cluster of islands off the Brooklyn shore. “There was another chain of shoals that connected Sandy Hook to Coney Island.”

The islands and shallows vanished long ago, demolished by harbor-dredging and landfill projects that added new real estate to a burgeoning city. Orff would re-create some of them, particularly the Sandy Hook–Coney Island chain, and connect them with sluice gates that would close during a storm, forming an eco-engineered barrier that would cross the same waters as Bowman’s more conventional one. Behind it, throughout the harbor, would be dozens of artificial reefs built from stone, rope, and wood pilings and seeded with oysters and other shellfish. The reefs would continue to grow as sea levels rose, helping to buffer storm waves—and the shellfish, being filter feeders, would also help clean the harbor. “Twenty-five percent of New York Harbor used to be oyster beds,” Orff says.

Orff estimates her “oystertecture” vision could be brought to life at relatively low cost. “It would be chump change compared with a conventional barrier. And it wouldn’t be money wasted: Even if another Sandy never happens, you’d have a cleaner, restored harbor in a more ecologically vibrant context and a healthier New York.”

In June, Mayor Michael Bloomberg outlined a $19.5 billion plan to defend New York City against rising seas. “Sandy was a temporary setback that can ultimately propel us forward,” he said. The mayor’s proposal calls for the construction of levees, local storm-surge barriers, sand dunes, oyster reefs, and more than 200 other measures. It goes far beyond anything planned by any other American city. But the mayor dismissed the idea of a harbor barrier. “A giant barrier across our harbor is neither practical nor affordable,” Bloomberg said. The plan notes that since a barrier would remain open most of the time, it would not protect the city from the inch-by-inch creep of sea-level rise.

Meanwhile, development in the city’s flood zones continues. Klaus Jacob, a geophysicist at Columbia University, says the entire New York metropolitan region urgently needs a master plan to ensure that future construction will at least not exacerbate the hazards from rising seas.

“The problem is we’re still building the city of the past,” says Jacob. “The people of the 1880s couldn’t build a city for the year 2000—of course not. And we cannot build a year-2100 city now. But we should not build a city now that we know will not function in 2100. There are opportunities to renew our infrastructure. It’s not all bad news. We just have to grasp those opportunities.”

Will New York grasp them after Bloomberg leaves office at the end of this year? And can a single storm change not just a city’s but a nation’s policy? It has happened before. The Netherlands had its own stormy reckoning 60 years ago, and it transformed the country.

The storm roared in from the North Sea on the night of January 31, 1953. Ria Geluk was six years old at the time and living where she lives today, on the island of Schouwen Duiveland in the southern province of Zeeland. She remembers a neighbor knocking on the door of her parents’ farmhouse in the middle of the night to tell them that the dike had failed. Later that day the whole family, along with several neighbors who had spent the night, climbed to the roof, where they huddled in blankets and heavy coats in the wind and rain. Geluk’s grandparents lived just across the road, but water swept into the village with such force that they were trapped in their home. They died when it collapsed.

“Our house kept standing,” says Geluk. “The next afternoon the tide came again. My father could see around us what was happening; he could see houses disappearing. You knew when a house disappeared, the people were killed. In the afternoon a fishing boat came to rescue us.”

In 1997 Geluk helped found the Watersnoodmuseum—the “flood museum”—on Schouwen Duiveland. The museum is housed in four concrete caissons that engineers used to plug dikes in 1953. The disaster killed 1,836 in all, nearly half in Zeeland, including a baby born on the night of the storm.

Afterward the Dutch launched an ambitious program of dike and barrier construction called the Delta Works, which lasted more than four decades and cost more than six billion dollars. One crucial project was the five-mile-long Oosterscheldekering, or Eastern Scheldt barrier, completed 27 years ago to defend Zeeland from the sea. Geluk points to it as we stand on a bank of the Scheldt estuary near the museum, its enormous pylons just visible on the horizon. The final component of the Delta Works, a movable barrier protecting Rotterdam Harbor and some 1.5 million people, was finished in 1997.

Like other primary sea barriers in the Netherlands, it’s built to withstand a 1-in-10,000-year storm—the strictest standard in the world. (The United States uses a 1-in-100-year standard.) The Dutch government is now considering whether to upgrade the protection levels to bring them in line with sea-level-rise projections.

Such measures are a matter of national security for a country where 26 percent of the land lies below sea level. With more than 10,000 miles of dikes, the Netherlands is fortified to such an extent that hardly anyone thinks about the threat from the sea, largely because much of the protection is so well integrated into the landscape that it’s nearly invisible.

On a bitingly cold February afternoon I spend a couple of hours walking around Rotterdam with Arnoud Molenaar, the manager of the city’s Climate Proof program, which aims to make Rotterdam resistant to the sea levels expected by 2025. About 20 minutes into our walk we climb a sloping street next to a museum designed by the architect Rem Koolhaas. The presence of a hill in this flat city should have alerted me, but I’m surprised when Molenaar tells me that we’re walking up the side of a dike. He gestures to some nearby pedestrians. “Most of the people around us don’t realize this is a dike either,” he says. The Westzeedijk shields the inner city from the Meuse River a few blocks to the south, but the broad, busy boulevard on top of it looks like any other Dutch thoroughfare, with flocks of cyclists wheeling along in dedicated lanes.

As we walk, Molenaar points out assorted subtle flood-control structures: an underground parking garage designed to hold 10,000 cubic meters—more than 2.5 million gallons—of rainwater; a street flanked by two levels of sidewalks, with the lower one designed to store water, leaving the upper walkway dry. Late in the afternoon we arrive at Rotterdam’s Floating Pavilion, a group of three connected, transparent domes on a platform in a harbor off the Meuse. The domes, about three stories tall, are made of a plastic that’s a hundred times as light as glass.

Inside we have sweeping views of Rotterdam’s skyline; hail clatters overhead as low clouds scud in from the North Sea. Though the domes are used for meetings and exhibitions, their main purpose is to demonstrate the wide potential of floating urban architecture. By 2040 the city anticipates that as many as 1,200 homes will float in the harbor. “We think these structures will be important not just for Rotterdam but for many cities around the world,” says Bart Roeffen, the architect who designed the pavilion. The homes of 2040 will not necessarily be domes; Roeffen chose that shape for its structural integrity and its futuristic appeal. “To build on water is not new, but to develop floating communities on a large scale and in a harbor with tides—that is new,” says Molenaar. “Instead of fighting against water, we want to live with it.”

While visiting the Netherlands, I heard one joke repeatedly: “God may have built the world, but the Dutch built Holland.” The country has been reclaiming land from the sea for nearly a thousand years—much of Zeeland was built that way. Sea-level rise does not yet panic the Dutch.

“We cannot retreat! Where could we go? Germany?” Jan Mulder has to shout over the wind—we’re walking along a beach called Kijkduin as volleys of sleet exfoliate our faces. Mulder is a coastal morphologist with Deltares, a private coastal management firm. This morning he and Douwe Sikkema, a project manager with the province of South Holland, have brought me to see the latest in adaptive beach protection. It’s called the zandmotor—the sand engine.

The seafloor offshore, they explain, is thick with hundreds of feet of sand deposited by rivers and retreating glaciers. North Sea waves and currents once distributed that sand along the coast. But as sea level has risen since the Ice Age, the waves no longer reach deep enough to stir up sand, and the currents have less sand to spread around. Instead the sea erodes the coast here.

The typical solution would be to dredge sand offshore and dump it directly on the eroding beaches—and then repeat the process year after year as the sand washes away. Mulder and his colleagues recommended that the provincial government try a different strategy: a single gargantuan dredging operation to create the sandy peninsula we’re walking on—a hook-shaped stretch of beach the size of 250 football fields. If the scheme works, over the next 20 years the wind, waves, and tides will spread its sand 15 miles up and down the coast. The combination of wind, waves, tides, and sand is the zandmotor.

The project started only two years ago, but it seems to be working. Mulder shows me small dunes that have started to grow on a beach where there was once open water. “It’s very flexible,” he says. “If we see that sea-level rise increases, we can increase the amount of sand.” Sikkema adds, “And it’s much easier to adjust the amount of sand than to rebuild an entire system of dikes.”

Later Mulder tells me about a memorial inscription affixed to the Eastern Scheldt barrier in Zeeland: “It says, ‘Hier gaan over het tij, de maan, de wind, en wij—Here the tide is ruled by the moon, the wind, and us.’ ” It reflects the confidence of a generation that took for granted, as we no longer can, a reasonably stable world. “We have to understand that we are not ruling the world,” says Mulder. “We need to adapt.”

With the threats of climate change and sea-level rise looming over us all, cities around the world, from New York to Ho Chi Minh City, have turned to the Netherlands for guidance. One Dutch firm, Arcadis, has prepared a conceptual design for a storm-surge barrier in the Verrazano Narrows to protect New York City. The same company helped design a $1.1 billion, two-mile-long barrier that protected New Orleans from a 13.6-foot storm surge last summer, when Hurricane Isaac hit. The Lower Ninth Ward, which suffered so greatly during Hurricane Katrina, was unscathed.

“Isaac was a tremendous victory for New Orleans,” Piet Dircke, an Arcadis executive, tells me one night over dinner in Rotterdam. “All the barriers were closed; all the levees held; all the pumps worked. You didn’t hear about it? No, because nothing happened.”

New Orleans may be safe for a few decades, but the long-term prospects for it and other low-lying cities look dire. Among the most vulnerable is Miami. “I cannot envision southeastern Florida having many people at the end of this century,” says Hal Wanless, chairman of the department of geological sciences at the University of Miami. We’re sitting in his basement office, looking at maps of Florida on his computer. At each click of the mouse, the years pass, the ocean rises, and the peninsula shrinks. Freshwater wetlands and mangrove swamps collapse—a death spiral that has already started on the southern tip of the peninsula. With seas four feet higher than they are today—a distinct possibility by 2100—about two-thirds of southeastern Florida is inundated. The Florida Keys have almost vanished. Miami is an island.

When I ask Wanless if barriers might save Miami, at least in the short term, he leaves his office for a moment. When he returns, he’s holding a foot-long cylindrical limestone core. It looks like a tube of gray, petrified Swiss cheese. “Try to plug this up,” he says. Miami and most of Florida sit atop a foundation of highly porous limestone. The limestone consists of the remains of countless marine creatures deposited more than 65 million years ago, when a warm, shallow sea covered what is now Florida—a past that may resemble the future here.

A barrier would be pointless, Wanless says, because water would just flow through the limestone beneath it. “No doubt there will be some dramatic engineering feats attempted,” he says. “But the limestone is so porous that even massive pumping systems won’t be able to keep the water out.”

Sea-level rise has already begun to threaten Florida’s freshwater supply. About a quarter of the state’s 19 million residents depend on wells sunk into the enormous Biscayne aquifer. Salt water is now seeping into it from dozens of canals that were built to drain the Everglades. For decades the state has tried to control the saltwater influx by building dams and pumping stations on the drainage canals. These “salinity-control structures” maintain a wall of fresh water behind them to block the underground intrusion of salt water. To offset the greater density of salt water, the freshwater level in the control structures is generally kept about two feet higher than the encroaching sea.

But the control structures also serve a second function: During the state’s frequent rainstorms their gates must open to discharge the flood of fresh water to the sea. “We have about 30 salinity-control structures in South Florida,” says Jayantha Obeysekera, the chief hydrological modeler at the South Florida Water Management District. “At times now the water level in the sea is higher than the freshwater level in the canal.” That both accelerates saltwater intrusion and prevents the discharge of flood waters. “The concern is that this will get worse with time as the sea-level rise accelerates,” Obeysekera says.

Using fresh water to block the salt water will eventually become impractical, because the amount of fresh water needed would submerge ever larger areas behind the control structures, in effect flooding the state from the inside. “With 50 centimeters [about 20 inches] of sea-level rise, 80 percent of the salinity-control structures in Florida will no longer be functional,” says Wanless. “We’ll either have to drown communities to keep the freshwater head above sea level or have saltwater intrusion.” When sea level rises two feet, he says, Florida’s aquifers may be poisoned beyond recovery. Even now, during unusually high tides, seawater spouts from sewers in Miami Beach, Fort Lauderdale, and other cities, flooding streets.
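
The two-foot margin reflects a simple hydrostatic balance: because seawater is slightly denser, a column of fresh water must stand a few percent taller to exert the same pressure at its base. Here is a minimal sketch of that principle, using textbook densities of 1,000 and 1,025 kilograms per cubic meter (values not taken from the article), rather than any actual engineering specification.

```python
# Hydrostatic sketch: how much extra freshwater head balances a given depth of seawater.
# Densities are generic textbook values; this illustrates the principle, not the design
# of Florida's salinity-control structures.
RHO_FRESH = 1000.0   # kg/m^3
RHO_SALT = 1025.0    # kg/m^3

def extra_fresh_head_ft(salt_depth_ft):
    """Additional height of fresh water needed to match the base pressure of salt water."""
    return salt_depth_ft * (RHO_SALT / RHO_FRESH - 1.0)

for depth in (20, 40, 80):
    print(f"{depth} ft of seawater -> about {extra_fresh_head_ft(depth):.1f} ft of extra head")
# Roughly two feet of extra freshwater head balances about 80 feet of seawater at depth,
# which is the scale of margin the article describes the structures maintaining.
```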

In a state exposed to hurricanes as well as rising seas, people like John Van Leer, an oceanographer at the University of Miami, worry that one day they will no longer be able to insure—or sell—their houses. “If buyers can’t insure it, they can’t get a mortgage on it. And if they can’t get a mortgage, you can only sell to cash buyers,” Van Leer says. “What I’m looking for is a climate-change denier with a lot of money.”

Unless we change course dramatically in the coming years, our carbon emissions will create a world utterly different in its very geography from the one in which our species evolved. “With business as usual, the concentration of carbon dioxide in the atmosphere will reach around a thousand parts per million by the end of the century,” says Gavin Foster, a geochemist at the University of Southampton in England. Such concentrations, he says, haven’t been seen on Earth since the early Eocene epoch, 50 million years ago, when the planet was completely ice free. According to the U.S. Geological Survey, sea level on an iceless Earth would be as much as 216 feet higher than it is today. It might take thousands of years and more than a thousand parts per million to create such a world—but if we burn all the fossil fuels, we will get there.

No matter how much we reduce our greenhouse gas emissions, Foster says, we’re already locked in to at least several feet of sea-level rise, and perhaps several dozen feet, as the planet slowly adjusts to the amount of carbon that’s in the atmosphere already. A recent Dutch study predicted that the Netherlands could engineer solutions at a manageable cost to a rise of as much as five meters, or 16 feet. Poorer countries will struggle to adapt to much less. At different times in different places, engineering solutions will no longer suffice. Then the retreat from the coast will begin. In some places there will be no higher ground to retreat to.

By the next century, if not sooner, large numbers of people will have to abandon coastal areas in Florida and other parts of the world. Some researchers fear a flood tide of climate-change refugees. “From the Bahamas to Bangladesh and a major amount of Florida, we’ll all have to move, and we may have to move at the same time,” says Wanless. “We’re going to see civil unrest, war. You just wonder how—or if—civilization will function. How thin are the threads that hold it all together? We can’t comprehend this. We think Miami has always been here and will always be here. How do you get people to realize that Miami—or London—will not always be there?”

What will New York look like in 200 years? Klaus Jacob, the Columbia geophysicist, sees downtown Manhattan as a kind of Venice, subject to periodic flooding, perhaps with canals and yellow water cabs. Much of the city’s population, he says, will gather on high ground in the other boroughs. “High ground will become expensive, waterfront will become cheap,” he says. But among New Yorkers, as among the rest of us, the idea that the sea is going to rise—a lot—hasn’t really sunk in yet. Of the thousands of people in New York State whose homes were badly damaged or destroyed by Sandy’s surge, only 10 to 15 percent are expected to accept the state’s offer to buy them out at their homes’ pre-storm value. The rest plan to rebuild.

Is War Really Disappearing? New Analysis Suggests Not (Science Daily)

Aug. 29, 2013 — While some researchers have claimed that war between nations is in decline, a new analysis suggests we shouldn’t be too quick to celebrate a more peaceful world.

The study finds that there is no clear trend indicating that nations are less eager to wage war, said Bear Braumoeller, author of the study and associate professor of political science at The Ohio State University.

Conflict does appear to be less common than it had been in the past, he said. But that’s due more to an inability to fight than to an unwillingness to do so.

“As empires fragment, the world has split up into countries that are smaller, weaker and farther apart, so they are less able to fight each other,” Braumoeller said.

“Once you control for their ability to fight each other, the proclivity to go to war hasn’t really changed over the last two centuries.”

Braumoeller presented his research Aug. 29 in Chicago at the annual meeting of the American Political Science Association.

Several researchers have claimed in recent years that war is in decline, most notably Steven Pinker in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.

As evidence, Pinker points to a decline in war deaths per capita. But Braumoeller said he believes that is a flawed measure.

“That accurately reflects the average citizen’s risk from death in war, but countries’ calculations in war are more complicated than that,” he said.

Moreover, since population grows exponentially, it would be hard for war deaths to keep up with the booming number of people in the world.

Because we cannot predict whether wars will be quick and easy or long and drawn-out (“Remember ‘Mission Accomplished’?” Braumoeller says), a better measure of how warlike we humans are is how often countries use force — such as missile strikes or armed border skirmishes — against other countries, he said.

“Any one of these uses of force could conceivably start a war, so their frequency is a good indication of how war prone we are at any particular time,” he said.

Braumoeller used the Correlates of War Militarized Interstate Dispute database, which scholars from around the world study to measure uses of force up to and including war.

The data shows that the uses of force held more or less constant through World War I, but then increased steadily thereafter.

This trend is consistent with the growth in the number of countries over the course of the last two centuries.

But just looking at the number of conflicts per pair of countries is misleading, he said, because countries won’t go to war if they aren’t “politically relevant” to each other.

Military power and geography play a big role in relevance; it is unlikely that a small, weak country in South America would start a war with a small, weak country in Africa.

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years.
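
The adjustment can be illustrated with a toy calculation: divide the number of uses of force not by all possible pairs of countries but by the politically relevant pairs. The figures below are invented for illustration and are not Correlates of War data.

```python
# Toy illustration of Braumoeller's normalization. All numbers are invented,
# not figures from the Correlates of War Militarized Interstate Dispute data.
def dispute_rates(disputes, n_countries, n_relevant_dyads):
    """Return (rate per possible pair of countries, rate per politically relevant pair)."""
    all_dyads = n_countries * (n_countries - 1) // 2
    return disputes / all_dyads, disputes / n_relevant_dyads

# Hypothetical earlier era: fewer, larger states, most of them rivals of one another.
early = dispute_rates(disputes=30, n_countries=40, n_relevant_dyads=200)
# Hypothetical recent era: many more, smaller states, few of them relevant to each other.
late = dispute_rates(disputes=60, n_countries=190, n_relevant_dyads=400)

print(early)  # (~0.038, 0.15): raw per-pair rate looks high
print(late)   # (~0.003, 0.15): raw rate plunges, but the relevant-dyad rate is unchanged
```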

While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war.

“With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn’t get credit for being more peaceful just because we’re not as able to fight as we once were,” he said.

“There is no indication that we actually have less proclivity to wage war.”

Hello, Hal (New Yorker)

Will we ever get a computer we can really talk to?

JUNE 23, 2008

The challenge is to marry our two greatest technologies: language and toolmaking.

Not long ago, a caller dialled the toll-free number of an energy company to inquire about his bill. He reached an interactive-voice-response system, or I.V.R.—the automated service you get whenever you dial a utility or an airline or any other big American company. I.V.R.s are the speaking tube that connects corporate America to its clients. Companies profess boundless interest in their customers, but they don’t want to pay an employee to talk to a caller if they can avoid it; the average human-to-human call costs the company at least five dollars. Once an I.V.R. has been paid for, however, a human-to-I.V.R. call costs virtually nothing.

“If you have an emergency, press one,” the utility company’s I.V.R. said. “To use our automated services or to pay by phone, press two.”

The caller punched two, and was instructed to enter his account number, which he did. An alert had been placed on the account because of a missed payment. “Please hold,” the I.V.R. said. “Your call is being transferred to a service representative.” This statement was followed by one of the most commonly heard sentences in the English language: “Your call may be monitored.”

In fact, the call was being monitored, and I listened to it some months later, in the offices of B.B.N. Technologies, a sixty-year-old company, in Cambridge, Massachusetts. Joe Alwan, a vice-president and the general manager of the division that makes B.B.N.’s “caller-experience analytics” software, which is called Avoke, was showing me how the technology can automatically create a log of events in a call, render the speech as text, and make it searchable.

Alwan, a compact man with scrunched-together features who has been at B.B.N. for two years, spoke rapidly but smoothly, with a droll delivery. He projected a graphic of the voice onto a screen at one end of the room. “Anger’s the big one,” he said. Companies can use Avoke to determine when their callers are getting angry, so that they can improve their I.V.R.s.

The agent came on the line, said his name was Eric, and asked the caller to explain his problem. Eric had a slight Indian accent and spoke in a high, clear voice. He probably worked at a call center in Bangalore for a few dollars an hour, although his pay was likely based on how efficiently he could process the calls. “The company doesn’t want to spend more money on the call, because it’s a cost,” Alwan said. The caller’s voice gave the impression that he was white (particularly the way he pronounced the “u” in “duuude”) and youthful, around thirty:

CALLER: Hey, what’s going on is, ah, I got a return-payment notice, right?
AGENT: Mhm.
CALLER: And I checked with my bank, and my bank was saying, well, it didn’t even get to you . . . they didn’t reject it. So then I was just, like, what’s the issue, and then, ah, you guys charge to pay over the phone, so that’s why it’s not done over the phone, so that’s why I do it on the Internet, so—
AGENT: O.K.
CALLER: So I don’t . . . know what’s going on.

The caller sounded relaxed, but if you listened closely you could hear his voice welling with quiet anger.

The agent quickly looked up the man’s record and discovered that he had typed in his account number incorrectly. The caller accepted the agent’s explanation but thought he shouldn’t be liable for the returned-payment charge. He said, “There’s nothing that can be done with that return fee, dude?” The agent explained that another company had levied the charge, but the caller took no notice. “I mean, I would be paying it over the phone, so you guys wanna charge people for paying over the phone, and I’ll be—”

People express anger in two different ways. There’s “cold” anger, in which words may be overarticulated but spoken softly, and “hot” anger, in which voices are louder and pitched higher. At first, the caller’s anger was cold:

AGENT: O.K., sir. I’m gonna go ahead and explain this. . . . O.K., so on the information that you put this last time it was incorrect, so I apologize that you put it incorrectly on the site.
CALLER: O.K., we got past that, bro. So tell me something I don’t know. . . .
AGENT: Let’s see . . . uh . . . um.
CALLER: Dude, I don’t care what company it is. It’s your company using that company, so you guys charge it. So you guys should be waiving that shit-over-the-phone shit, pay by phone.
AGENT: But why don’t you talk to somebody else, sir. One moment.

By now, the caller’s anger was hot. He was put on hold, but B.B.N. was still listening:

CALLER: Motherfucker, I swear. You fucking pussy, you probably don’t even have me on hold, you little fucked-up dick. You’re gonna wait a long time, bro.
You little bitch, I’ll fucking find out who you are, you little fucking ho.

After thirty seconds, we could hear bubbling noises—a bong, Alwan thought—and then coughing. Not long afterward, the caller hung up.

This spring marked the fortieth anniversary of HAL, the conversational computer that was brought to life on the screen by Stanley Kubrick and Arthur C. Clarke, in “2001: A Space Odyssey.” HAL has a calm, empathic voice—a voice that is warmer than the voices of the humans in the movie, which are oddly stilted and false. HAL says that he became operational in Urbana, Illinois, in 1992, and offers to sing a song. HAL not only speaks perfectly; he seems to understand perfectly, too. I was a nine-year-old nerd in the making when the film came out, in 1968, and I’ve been waiting for a computer to talk to ever since—a fantasy shared by many computer geeks. Bill Gates has been touting speech recognition as the next big thing in computing for at least a decade. By giving computers the ability to understand speech, humankind would marry its two greatest technologies: language and toolmaking. To believers, this union can only be a matter of time.

Forty years after “2001,” how close are we to talking to computers? Today, you can use your voice to buy airplane tickets, transfer money, and get a prescription filled. If you don’t want to type, you can use one of the current crop of dictation programs to transcribe your speech; these have been improving steadily and now work reasonably well. If you are driving a car with an onboard navigator, you can get directions in one of dozens of different voices, according to your preference. In a car equipped with Sync—a collaboration of Ford, Microsoft, and Nuance, the largest speech-technology company in the world—you can use your voice to place a phone call or to control your iPod, both of which are useful when you are in what’s known in the speech-recognition industry as “hands-busy, eyes-busy” situations. State-of-the-art I.V.R.s, such as Google’s voice-based 411 service, offer natural-language understanding—you can speak almost as you would to a human operator, as opposed to having to choose from a set menu of options. I.V.R. designers create vocal personas like Julie, the perky voice that answers Amtrak’s 800 number; these voices can be “tuned” according to a company’s branding needs. Calling Virgin Mobile gets you a sassy-voiced young woman, who sounds as if she’s got her feet up on her desk.

Still, these applications of speech technology, useful though they can be, are a far cry from HAL—a conversational computer. Computers still flunk the famous Turing Test, devised by the British mathematician Alan Turing, in which a computer tries to fool a person into thinking that it’s human. And, even within limited applications, speech recognition never seems to work as well as it should. North Americans spent forty-three billion minutes on the line with an I.V.R. in 2007; according to one study, only one caller in ten was satisfied with the experience. Some companies have decided to switch back to touch-tone menus, after finding that customers prefer pushing buttons to using their voices, especially when they are inputting private information, such as account numbers. Leopard, Apple’s new operating system for the Mac, responds to voice commands, which is wonderful for people with handicaps and disabilities but extremely annoying if you have to listen to Alex, its computer-generated voice, converse with a co-worker all day.

Roger Schank was a twenty-two-year-old graduate student when “2001” was released. He came toward the end of what today appears to have been a golden era of programmer-philosophers—men like Marvin Minsky and Seymour Papert, who, in establishing the field of artificial intelligence, inspired researchers to create machines with human intelligence. Schank has spent his career trying to make computers simulate human memory and learning. When he was young, he was certain that a conversational computer would eventually be invented. Today, he’s less sure. What changed his thinking? Two things, Schank told me: “One was realizing that a lot of human speech is just chatting.” Computers proved to be very good at tasks that humans find difficult, like calculating large sums quickly and beating grand masters at chess, but they were wretched at this, one of the simplest of human activities. The other reason, as Schank explained, was that “we just didn’t know how complicated speech was until we tried to model it.” Just as sending men to the moon yielded many fundamental insights into the nature of space, so the problem of making conversational machines has taught scientists a great deal about how we hear and speak. As the Harvard cognitive scientist Steven Pinker wrote to me, “The consensus as far as I have experienced it among A.I. researchers is that natural-language processing is extraordinarily difficult, as it could involve the entirety of a person’s knowledge, which of course is extraordinarily difficult to model on a computer.” After fifty years of research, we aren’t even close.

Speech begins with a puff of breath. The diaphragm pushes air up from the lungs, and this passes between two small membranes in the upper windpipe, known as the vocal folds, which vibrate and transform the breath into sound waves. The waves strike hard surfaces inside the head—teeth, bone, the palate. By changing the shape of the mouth and the position of the tongue, the speaker makes vowels and consonants and gives timbre, tone, and color to the sound.

That process, being mechanical, is not difficult to model, and, indeed, humans had been trying to make talking machines long before A.I. existed. In the late eighteenth century, a Hungarian inventor named Wolfgang von Kempelen built a speaking machine by modelling the human vocal tract, using a bellows for lungs, a reed from a bagpipe for the vocal folds, and a keyboard to manipulate the “mouth.” By playing the keys, an operator could form complete phrases in several different languages. In the nineteenth century, Kempelen’s machine was improved on by Sir Charles Wheatstone, and that contraption, which was exhibited in London, was seen by the young Alexander Graham Bell. It inspired him to try to create his own devices, in the hope of allowing non-hearing people (Bell’s mother and his wife were deaf) to speak normally. He didn’t succeed, but his early efforts led to the invention of the telephone.

In the twentieth century, researchers created electronic talking machines. The first, called the Voder, was engineered by Bell Labs—the famed research division of A.T. & T.—and exhibited at the 1939 World’s Fair, in New York. Instead of a mechanical system made of a reed and bellows, the Voder generated sounds with electricity; as with Kempelen’s speaking machine, a human manipulated keys to produce words. The mechanical-sounding voice became a familiar attribute of movie robots in the nineteen-fifties (and, later, similar synthetic-voice effects were a staple of nineteen-seventies progressive rock). In the early sixties, Bell Labs programmed a computer to sing “Daisy, Daisy, give me your answer do.” Arthur C. Clarke, who visited the lab, heard the machine sing, and he and Kubrick subsequently used the same song in HAL’s death scene.

Hearing is more complicated to model than talking, because it involves signal processing: converting sound from waves of air into electrical impulses. The fleshy part of the ear and the ear canal capture sound waves and direct them to the eardrum, which vibrates as it is struck. These vibrations then push on the ossicles, which form a three-boned lever—that Rube Goldbergian contraption of the middle ear—that helps amplify the sound. The impulses pass into the fluid of the cochlea, which is lined with tiny hairs called cilia. They translate the impulses into electrical signals, which then travel along neural pathways to the brain. Once signals reach the brain, they are “recognized,” either by associative memories or by a rules-based system—or, as Pinker has argued, by some combination of the two.

The human ear is exquisitely sensitive; research has shown, for example, that people can distinguish between hot and cold coffee simply by hearing it poured. The ear is particularly attentive to the human voice. We can differentiate among different voices speaking together, and we can isolate voices in the midst of traffic and loud music, and we can tell the direction from which a voice is coming—all of which are difficult for computers to do. We can hear smiles at the other end of a telephone call; the ear recognizes the sound variations caused by the spreading of the lips. That’s why call-center workers are told to smile no matter what kind of abuse they’re taking.

The first attempts at speech recognition were made in the nineteen-fifties and sixties, when the A.I. pioneers tried to simulate the way the human mind apprehends language. But where do you start? Even a simple concept like “yes” might be expressed in dozens of different ways—including “yes,” “ya,” “yup,” “yeah,” “yeayuh,” “yeppers,” “yessirree,” “aye, aye,” “mmmhmm,” “uh-huh,” “sure,” “totally,” “certainly,” “indeed,” “affirmative,” “fine,” “definitely,” “you bet,” “you betcha,” “no problemo,” and “okeydoke”—and what’s the rule in that? At Nuance, whose headquarters are outside Boston, speech engineers try to anticipate all the different ways people might say yes, but they still get surprised. For example, designers found that Southerners had more trouble using the system than Northerners did, because when instructed to answer “yes” or “no” Southerners regularly added “ma’am” or “sir,” depending on the I.V.R.’s gender, and the computer wasn’t programmed to recognize that. Also, language isn’t static; the rules change. Researchers taught machines that when the pitch of a voice rises at the end of a sentence it usually means a question, only to have their work spoiled by the emergence of what linguists call “uptalk”—that Valley Girl way of making a declarative sentence sound like a question?—which is now ubiquitous across the United States.

In the seventies and eighties, many speech researchers gradually moved away from efforts to determine the rules of language and took a probabilistic approach to speech recognition. Statistical “learning algorithms”—methods of constructing models from streams of data—were the wheel on which the back of the A.I. culture was broken. As David Nahamoo, the chief technology officer for speech at I.B.M.’s Thomas J. Watson Research Center, told me, “Brute-force computing, based on probability algorithms, won out over the rule-based approach.” A speech recognizer, by learning the relative frequency with which particular words occur, both by themselves and within the context of other words, could be “trained” to make educated guesses. Such a system wouldn’t be able to understand what words mean, but, given enough data and computing power, it might work in certain, limited vocabulary situations, like medical transcription, and it might be able to perform machine translation with a high degree of accuracy.
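
The core of that approach is easy to sketch: count how often each word follows another in a corpus, then guess the most frequent continuation. A toy bigram model in Python (the corpus here is invented) conveys the flavor, if none of the engineering.

```python
# Toy bigram "language model": guess the next word from observed frequencies.
# The corpus is invented for illustration; real recognizers train on millions of words.
from collections import Counter, defaultdict

corpus = "please pay my bill . please check my bill . please check my account".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent word seen after `word`, with its relative frequency."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(guess_next("please"))  # ('check', 0.67): "check" follows "please" most often
print(guess_next("my"))      # ('bill', 0.67)
```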

In 1969, John Pierce, a prominent member of the staff of Bell Labs, argued in an influential letter to the Journal of the Acoustical Society of America, entitled “Whither Speech Recognition,” that there was little point in making machines that had speech recognition but no speech understanding. Regardless of the sophistication of the algorithms, the machine would still be a modern version of Kempelen’s talking head—a gimmick. But the majority of researchers felt that the narrow promise of speech recognition was better than nothing.

In 1971, the Defense Department’s Advanced Research Projects Agency made a five-year commitment to funding speech recognition. Four institutions—B.B.N., I.B.M., Stanford Research Institute, and Carnegie Mellon University—were selected as contractors, and each was given the same guidelines for developing a speech recognizer with a thousand-word vocabulary. Subsequently, additional projects were funded that might be useful to the military. One was straight out of “Star Trek”: a handheld device that could automatically translate spoken words into other languages. Another was software that could read foreign news media and render them into English.

In addition to DARPA, funding for speech recognition came from telephone companies—principally at Bell Labs—and computer companies, most notably I.B.M. The phone companies wanted voice-based automated calling, and the computer companies wanted a voice-based computer interface and automated dictation, which was a “holy grail project” (a favorite phrase of the industry). But devising a speech recognizer that worked consistently and accurately in real-world situations proved to be much harder than anyone had anticipated. It wasn’t until the early nineties that companies finally began to bring products to the consumer marketplace, but these products rarely worked as advertised. The fledgling industry went through a tumultuous period. One industry leader, Lernout & Hauspie, flamed out, in a spectacular accounting scandal.

Whether its provenance is academic or corporate, speech-recognition research is heavily dependent on the size of the data sample, or “corpus”—the sheer volume of speech you work with. The larger your corpus, the more data you can feed to the learning algorithms and the better the guesses they can make. I.B.M. collects speech not only in the lab and from broadcasts but also in the field. Andy Aaron, who works at the Watson Research Center, has spent many hours recording people driving or sitting in the front seats of cars in an effort to develop accurate speech models for automotive commands. That’s because, he told me, “when people speak in cars they don’t speak the same way they do in an office.” For example, we talk more loudly in cars, because of a phenomenon known as the Lombard effect—the speaker involuntarily raises his voice to compensate for background noise. Aaron collects speech both for recognizers and for synthesizers—computer-generated voices. “Recording for the recognizer and for the synthesizer couldn’t be more different,” he said. “In the case of the recognizer, you are teaching the system to correctly identify an unknown speech sound. So you feed it lots and lots of different samples, so that it knows all the different ways Americans might say the phoneme ‘oo.’ A synthesizer is the opposite. You audition many professional speakers and carefully choose one, because you like the sound of his voice. Then you record that speaker for dozens of hours, saying sentences that contain many diverse combinations of phonemes and common words.”

B.B.N. came to speech recognition through its origins as an acoustical-engineering firm. It worked on the design of Lincoln Center’s Philharmonic Hall in the mid-sixties, and did early research in measuring noise levels at airports, which led to quieter airplane engines. In 1997, B.B.N. was bought by G.T.E., which subsequently merged with Bell Atlantic to form Verizon. In 2004, a group of B.B.N. executives and investors put together a buyout, and the company became independent again. The speech they use to train their recognizers comes from a shared bank, the Linguistic Data Consortium.

During my visit to Cambridge, I watched as a speech engine transcribed a live Al Jazeera broadcast into more or less readable English text, with only a three-minute lag time. In another demo, software captured speech from podcasts and YouTube videos and converted it into text, with impressive accuracy—a technology that promises to make video and audio as easily searchable as text. Both technologies are now available commercially, in B.B.N.’s Broadcast Monitoring System and in EveryZing, its audio-and-video search engine. I also saw B.B.N’s English-to-Iraqi Arabic translator; I had seen I.B.M.’s, known as the Multilingual Automatic Speech-to-Speech Translator, or MASTOR, the week before. Both worked amazingly well. At I.B.M., an English speaker made a comment (“We are here to provide humanitarian assistance for your town”) to an Iraqi. The machine repeated his sentence in English, to make sure it was understood. The MASTOR then translated the sentence into Arabic and said it out loud. The Iraqi answered in Arabic; the machine repeated the sentence in Arabic and then delivered it in English. The entire exchange took about five seconds, and combined state-of-the-art speech recognition, voice synthesis, and machine translation. Granted, the conversation was limited to what you might discuss at a checkpoint in Iraq. Still, for what they are, these translators are triumphs of the statistics-based approach.

What’s missing from all these programs, however, is emotional recognition. The current technology can capture neither the play of emphasis, rhythm, and intonation in spoken language (which linguists call prosody) nor the emotional experience of speaking and understanding language. Descartes favored a division between reason and emotion, and considered language to be a vehicle of the former. But speech without emotion, it turns out, isn’t really speech. Cognitively, the words should mean the same thing, regardless of their emotional content. But they don’t.

Speech recognition is a multidisciplinary field, involving linguists, psychologists, phoneticians, acousticians, computer scientists, and engineers. At speech conferences these days, emotional recognition is a hot topic. Julia Hirschberg, a professor of computer science at Columbia University, told me that at the last prosody conference she attended “it seemed like three-quarters of the presentations were on emotional recognition.” Research is focussed both on how to recognize a speaker’s emotional state and on how to make synthetic voices more emotionally expressive.

Elizabeth Shriberg, a senior researcher in the speech group at S.R.I. International (formerly Stanford Research Institute), said, “Especially when you talk about emotional speech, there is a big difference between acted speech and real speech.” Real anger, she went on, often builds over a number of utterances, and is much more variable than acted anger. For more accurate emotional recognition, Shriberg said, “we need the kind of data that you get from 911 and directory-assistance calls. But you can’t use those, for privacy reasons, and because they’re proprietary.”

At SAIL—the Speech Analysis and Interpretation Laboratory, on the campus of the University of Southern California, in Los Angeles—researchers work mostly with scripted speech, which students collect from actors in the U.S.C. film and drama programs. Shrikanth Narayanan, who runs the lab, is an electrical engineer, and the students in his emotion-research group are mainly engineers and computer scientists. One student was studying what happens when a speaker’s face and voice convey conflicting emotions. Another was researching how emotional states affect the way people move their heads when they talk. The research itself can be a grind. Students painstakingly listen to voices expressing many different kinds of emotion and tag each sample with information, such as how energetic the voice is and its “valence” (whether it is a negative or a positive emotion). Anger and elation are examples of emotions that have different valences but similar energy; humans use context, as well as facial and vocal cues, to distinguish them. Since the researchers have only the voice to work with, at least three of them are required to listen and decide what the emotion is. Students note voice quality, pacing, language, “disfluencies” (false starts, “um”s), and pitch. They make at least two different data sets, so that they can use separate ones for training the computer and for testing it.
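
A stripped-down version of that pipeline can be sketched in a few lines: compute per-utterance energy statistics from the waveform, attach the listeners’ labels, and keep the training and test utterances in separate sets. The feature choices, labels, and synthetic waveforms below are placeholders, not SAIL’s actual method or data.

```python
# Crude prosodic features for emotion tagging: per-utterance energy and its variation.
# A placeholder sketch, not the SAIL pipeline; labels and waveforms are invented.
import numpy as np

def prosodic_features(waveform, frame=400, hop=160):
    """Mean and variability of short-time energy, plus a rough voicing proxy."""
    frames = [waveform[i:i + frame] for i in range(0, len(waveform) - frame, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames])
    return np.array([rms.mean(), rms.std(), zcr.mean()])

rng = np.random.default_rng(0)
utterances = [rng.standard_normal(16000) * amp for amp in (0.1, 0.5, 1.0, 0.2)]
labels = ["neutral", "hot anger", "hot anger", "cold anger"]   # assigned by human listeners

features = np.stack([prosodic_features(u) for u in utterances])
train_X, train_y = features[:3], labels[:3]   # one set to train the computer on...
test_X, test_y = features[3:], labels[3:]     # ...and a separate set to test it
```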

Facial expressions are generally thought to be universal, but so far Narayanan’s lab hasn’t found that similarly universal vocal cues for emotions are as clearly established. “Emotions aren’t discrete,” Narayanan said. “They are a continuum, and it isn’t clear to any one perceiver where one emotion ends and another begins, so you end up studying not just the speaker but the perceiver.” The idea is that if you could train the computer to sense a speaker’s emotional state by the sound of his voice, you could also train it to respond in kind—the computer might slow down if it sensed that the speaker was confused, or assume a more soothing tone of voice if it sensed anger. One possible application of such technology would be video games, which could automatically adapt to a player’s level based on the stress in his voice. Narayanan also mentioned simulations—such as the computer-game-like training exercises that many companies now use to prepare workers for a job. “The program would sense from your voice if you are overconfident, or when you are feeling frustrated, and adjust accordingly,” he said. That reminded me of the moment in the novel “2001” when HAL, after discovering that the astronauts have doubts about him, decides to kill them. While struggling with one of the astronauts, Dave, for control of the ship, HAL says, “I can tell from your voice harmonics, Dave, that you’re badly upset. Why don’t you take a stress pill and get some rest?”

But, apart from call-center voice analytics, it’s hard to find many credible applications of emotional recognition, and it is possible that true emotional recognition is beyond the limits of the probabilistic approach. There are futuristic projects aimed at making emotionally responsive robots, and there are plans to use such robots in the care of children and the elderly. “But this is very long-range, obviously,” Narayanan said. In the meantime, we are going to be dealing with emotionless machines.

There is a small market for voice-based lie detectors, which are becoming a popular tool in police stations around the country. Many are made by Nemesysco, an Israeli company, using a technique called “layered voice analysis” to analyze some hundred and thirty parameters in the voice to establish the speaker’s psychological state. The academic world is skeptical of voice-based lie detection, because Nemesysco will not release the algorithms on which its program is based; after all, they are proprietary. Layered voice analysis has failed in two independent tests. Nemesysco’s American distributor says that’s because the tests were poorly designed. (The company played Roger Clemens’s recent congressional testimony for me through its software, so that I could see for myself the Rocket’s stress levels leaping.) Nevertheless, according to the distributor, more than a thousand copies of the software have been sold—at fourteen thousand five hundred dollars each—to law-enforcement agencies and, more recently, to insurance companies, which are using them in fraud detection.

One of the most fully realized applications of emotional recognition that I am aware of is the aggression-detection system developed by Sound Intelligence, which has been deployed in Rotterdam and Amsterdam, and other cities in the Netherlands. It has also been installed in the English city of Coventry, and is being tested in London and Manchester. One of the designers, Peter van Hengel, explained to me that the idea grew out of a project at the University of Groningen, which simulated the workings of the inner ear with computer models. “A colleague of mine applied the same inner-ear model to trying to recognize speech amid noise,” he said, “and found that it could be used to select the parts belonging to the speech and leave out the noise.” They founded Sound Intelligence in 2000, initially focussing on speech-noise separation for automatic speech recognition, with a sideline in the analysis of non-speech sounds. In 2003, the company was approached by the Dutch national railroad, which wanted to be able to detect several kinds of sound that might indicate trouble in stations and on trains (glass-breaking, graffiti-spraying, and aggressive voices). This project developed into an aggression-detection system based on the sound of people shouting: the machine detects the overstressing of the vocal cords, which occurs only in real aggression. (That’s one reason actors only approximate anger; the real thing can damage the voice.)

The city of Groningen has installed an aggression-detector at a busy intersection in an area full of pubs. Elevated microphones spaced thirty metres apart run along both sides of the street, joining an existing network of cameras. These connect to a computer at the police station in Groningen. If the system hears certain sound patterns that correspond with aggression, it sends an alert to the police station, where the police can assess the situation by examining closed-circuit monitors: if necessary, officers are dispatched to the scene. This is no HAL, either, but the system is promising, because it does not pretend to be more intelligent than it is.

I thought the problem with the technology would be false positives—too many loud noises that the machine mistook for aggression. But in Groningen, at least, the problem has been just the opposite. “Groningen is the safest city in Holland,” van Hengel said, ruefully. “There is virtually no crime. We don’t have enough aggression to train the system properly.” ♦

Crowdsourcing, for the Birds (NY Times)

NY Times, August 19, 2013

By JIM ROBBINS

HELENA, Mont. — On a warm morning not long ago on the shore of a small prairie lake outside this state capital, Bob Martinka trained his spotting scope on a towering cottonwood tree heavy with blue heron nests. He counted a dozen of the tall, graceful birds and got out his smartphone, not to make a call but to type the number of birds and the species into an app that sent the information to researchers in New York.

 

Mapping Bird Species: Heat maps show the northward migration of the chimney swift as modeled by the eBird network. Brighter colors indicate higher probabilities of finding the species.

Mr. Martinka, a retired state wildlife biologist and an avid bird-watcher, is part of the global ornithological network eBird. Several times a week he heads into the mountains to scan lakes, grasslands, even the local dump, and then reports his sightings to the Cornell Lab of Ornithology, a nonprofit organization based at Cornell University.

“I see rare gulls at the dump quite frequently,” Mr. Martinka said, scanning a giant mound of bird-covered trash.

Tens of thousands of birders are now what the lab calls “biological sensors,” turning their sightings into digital data by reporting where, when and how many of which species they see. Mr. Martinka’s sighting of a dozen herons is a tiny bit of information, but such bits, gathered in the millions, provide scientists with a very big picture: perhaps the first crowdsourced, real-time view of bird populations around the world.

A western meadowlark.

Birds are notoriously hard to count. While stationary sensors can measure things like carbon dioxide levels and highway traffic, it takes people to note the type and number of birds in an area. Until the advent of eBird, which began collecting daily global data in 2002, so-called one-day counts were the only method.

While counts like the Audubon Christmas Bird Count and the Breeding Bird Survey bring a lot of people together on one day to make bird observations across the country, and are scientifically valuable, they are different because they don’t provide year-round data.

And eBird’s daily view of bird movements has yielded a vast increase in data — and a revelation for scientists. The most informative product is what scientists call a heat map: a striking image of the bird sightings represented in various shades of orange according to their density, moving through space and time across black maps. Now, more than 300 species have a heat map of their own.
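As a rough illustration of what goes into such an image, sightings can be binned into a coarse latitude/longitude grid whose cell totals are then rendered as colors. The sketch below assumes only simple (lat, lon, count) tuples and an arbitrary grid; it is not the Cornell Lab's pipeline.

```python
# Illustrative density grid of the kind a heat map renders; grid size and
# geographic bounds are arbitrary choices for the example.
import numpy as np

def sighting_grid(observations, lat_range=(24.0, 50.0), lon_range=(-125.0, -66.0),
                  cells=(50, 100)):
    """Sum reported counts into a coarse lat/lon grid covering the lower 48."""
    grid = np.zeros(cells)
    for lat, lon, count in observations:
        row = int((lat - lat_range[0]) / (lat_range[1] - lat_range[0]) * (cells[0] - 1))
        col = int((lon - lon_range[0]) / (lon_range[1] - lon_range[0]) * (cells[1] - 1))
        if 0 <= row < cells[0] and 0 <= col < cells[1]:
            grid[row, col] += count
    return grid

# Example: a dozen herons near Helena, Mont., and a single swift near Ithaca, N.Y.
grid = sighting_grid([(46.6, -112.0, 12), (42.4, -76.5, 1)])
print(grid.sum(), grid.max())
```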

“As soon as the heat maps began to come out, everybody recognized this is a game changer in how we look at animal populations and their movement,” said John W. Fitzpatrick, director of the Cornell Lab. “Really captivating imagery teaches us more effectively.”

It was long believed, for example, that the United States had just one population of orchard orioles. Heat maps showed that the sightings were separated by a gap, meaning there are not one but two genetically distinct populations.

Moreover, the network offers a powerful way to capture data that was lost in the old days. “People for generations have been accumulating an enormous amount of information about where birds are and have been,” Dr. Fitzpatrick said. “Then it got burned when they died.”

No longer: eBird has compiled 141 million reports, or bits, and the number is increasing by 40 percent a year. In May, eBird gathered a record 5.6 million new observations from 169 countries. (Mr. Martinka’s sighting of 12 herons at once, for example, is considered one species observation, or bit.)

The system also offers incentives for birders to stay involved, with apps that enable them to keep their life lists (records of the species they have seen), compare their sightings with those of friends (and rivals), and know where to look for birds they haven’t seen before.

“When you get off the plane and turn your phone on,” Dr. Fitzpatrick said, “you can find out what has been seen near you over the last seven days and ask it to filter out the birds you haven’t seen yet, so with a quick look you can add to your life list.”

The system is not without problems. Citizen scientists may not be as precise in reporting data as experienced researchers are, like the ones in the Breeding Bird Survey. Cornell has tried to solve that problem by hiring top birders to travel around the world to train people like Mr. Martinka in methodology. And 500 volunteer experts read the submissions for accuracy, rejecting about 2 percent. Rare-bird sightings get special scrutiny.

The engine that makes eBird data usable is machine learning, or artificial intelligence — a combination of software and hardware that sorts through disparities, gaps and flaws in data collection, improving as it goes along.

“Machine learning says, ‘I know these data are sloppy, but fortunately there’s a lot of it,’ ” Dr. Fitzpatrick said. “It takes chunks of these data and sorts through to find patterns in the noise. These programs are learning as they go, testing and refining and getting better and better.”
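One small example of the kind of cleanup such a system has to do, sketched here with made-up numbers and no claim to match Cornell's models: estimate how often a species is reported each week despite uneven checklist effort, and let weeks with too little data borrow from their neighbors.

```python
# Hypothetical smoothing of gappy, uneven weekly checklist data.
import numpy as np

def weekly_reporting_rate(checklists_per_week, detections_per_week, min_checklists=10):
    """Reporting rate per week; sparsely sampled weeks are filled by interpolation."""
    checklists = np.asarray(checklists_per_week, dtype=float)
    detections = np.asarray(detections_per_week, dtype=float)
    rate = np.divide(detections, checklists, out=np.zeros_like(detections),
                     where=checklists > 0)
    sparse = checklists < min_checklists
    weeks = np.arange(len(rate))
    if sparse.any() and (~sparse).any():
        rate[sparse] = np.interp(weeks[sparse], weeks[~sparse], rate[~sparse])
    return rate

# Example: observers are active in spring and fall but barely look in midsummer.
checklists = [40, 55, 60, 3, 2, 4, 50, 65]
detections = [10, 22, 30, 1, 0, 1, 12, 5]
print(np.round(weekly_reporting_rate(checklists, detections), 2))
```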

Still, some experts question eBird’s validity. John Sauer, a wildlife biologist with the United States Geological Survey, says that bird-watchers’ reports lack scientific rigor. Rather than randomness, he said, “you get a lot of observations from where people like to go.” And he doubts that Cornell has proved the reliability of its machine learning efforts.

Still, the information has promise, he said, “and it’s played a powerful role in coordinating birders for recording observations, and encouraging bird-watching.”

And the data are being used by a wide array of researchers and conservationists.

Cagan H. Sekercioglu, a professor of ornithology at the University of Utah who has used similar bird-watching data in his native Turkey to study the effects of climate change on birds, called eBird “a phenomenal resource” and said that it was “getting young people involved in natural history, which might seem slow and old-fashioned in the age of instant online gratification.”

Data about bird populations can help scientists understand other changes in the natural world and be a marker for the health of overall biodiversity. “Birds are great indicators because they occur in all environments,” said Steve Kelling, the director of information science at the Cornell bird lab.

A decline in Eastern meadowlarks in part of New York State, for example, suggests that their habitat is shrinking — bad news for other species that depend on the same habitat. In California, eBird data is being used by some planners to decide where cities and towns should steer development.

The data is also being combined with radar and weather data by BirdCast, another Cornell bird lab project that forecasts migration patterns with the aim of protecting birds as they move through a gantlet of threats. “We can predict migration events that would be usable for the timing of wind generation facilities to be turned off at night,” Dr. Fitzpatrick said.

In California, biologists use the migration data to track waterfowl at critical times. When the birds are headed through the Central Valley, for example, they can ask rice farmers to flood their fields to create an improvised wetland habitat before the birds arrive. “The resolution is at such a level of detail they can make estimates of where species occur almost at a field-by-field level,” Mr. Kelling said.

EBird data has been used in Britain, too, combined with that of a similar program called BirdTrack, which uses radar images, weather models and even data from microphones on top of buildings to record the sounds of migrating birds at night.

And for bird-watchers, the eBird project has given their pastime a new sense of purpose. “It’s a really neat tool,” Mr. Martinka said. “If you see one bird or a thousand, it’s significant.”

Climate Panel Cites Near Certainty on Warming (NY Times)

Tim Wimborne/Reuters. A new report from the Intergovernmental Panel on Climate Change states that the authors are now 95 percent to 100 percent confident that human activity is the primary influence on planetary warming.

By 

Published: August 19, 2013

An international panel of scientists has found with near certainty that human activity is the cause of most of the temperature increases of recent decades, and warns that sea levels could conceivably rise by more than three feet by the end of the century if emissions continue at a runaway pace.

The level of carbon dioxide, the main greenhouse gas, is up 41 percent since the Industrial Revolution. Emissions from facilities like coal-fired power plants contribute.

The scientists, whose findings are reported in a draft summary of the next big United Nations climate report, largely dismiss a recent slowdown in the pace of warming, which is often cited by climate change doubters, attributing it most likely to short-term factors.

The report emphasizes that the basic facts about future climate change are more established than ever, justifying the rise in global concern. It also reiterates that the consequences of escalating emissions are likely to be profound.

“It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010,” the draft report says. “There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level and changed some climate extremes in the second half of the 20th century.”

The draft comes from the Intergovernmental Panel on Climate Change, a body of several hundred scientists that won the Nobel Peace Prize in 2007, along with Al Gore. Its summaries, published every five or six years, are considered the definitive assessment of the risks of climate change, and they influence the actions of governments around the world. Hundreds of billions of dollars are being spent on efforts to reduce greenhouse emissions, for instance, largely on the basis of the group’s findings.

The coming report will be the fifth major assessment from the group, created in 1988. Each report has found greater certainty that the planet is warming and greater likelihood that humans are the primary cause.

The 2007 report found “unequivocal” evidence of warming, but hedged a little on responsibility, saying the chances were at least 90 percent that human activities were the cause. The language in the new draft is stronger, saying the odds are at least 95 percent that humans are the principal cause.

On sea level, which is one of the biggest single worries about climate change, the new report goes well beyond the assessment published in 2007, which largely sidestepped the question of how much the ocean could rise this century.

The new report also reiterates a core difficulty that has plagued climate science for decades: While averages for such measures as temperature can be predicted with some confidence on a global scale, the coming changes still cannot be forecast reliably on a local scale. That leaves governments and businesses fumbling in the dark as they try to plan ahead.

On another closely watched issue, the scientists retreated slightly from their 2007 position.

Regarding the question of how much the planet could warm if carbon dioxide levels in the atmosphere doubled, the previous report largely ruled out any number below 3.6 degrees Fahrenheit. The new draft says the rise could be as low as 2.7 degrees, essentially restoring a scientific consensus that prevailed from 1979 to 2007.

But the draft says only that the low number is possible, not that it is likely. Many climate scientists see only a remote chance that the warming will be that low, with the published evidence suggesting that an increase above 5 degrees Fahrenheit is more likely if carbon dioxide doubles.

The level of carbon dioxide, the main greenhouse gas, is up 41 percent since the Industrial Revolution, and if present trends continue it could double in a matter of decades.

Warming the entire planet by 5 degrees Fahrenheit would add a stupendous amount of energy to the climate system. Scientists say the increase would be greater over land and might exceed 10 degrees at the poles.

They add that such an increase would lead to widespread melting of land ice, extreme heat waves, difficulty growing food and massive changes in plant and animal life, probably including a wave of extinctions.

The new document is not final and will not become so until an intensive, closed-door negotiating session among scientists and government leaders in Stockholm in late September. But if the past is any guide, most of the core findings of the document will survive that final review.

The document was leaked over the weekend after it was sent to a large group of people who had signed up to review it. It was first reported on in detail by the Reuters news agency, and The New York Times obtained a copy independently to verify its contents.

The Intergovernmental Panel on Climate Change does no original research, but instead periodically assesses and summarizes the published scientific literature on climate change.

The draft document “is likely to change in response to comments from governments received in recent weeks and will also be considered by governments and scientists at a four-day approval session at the end of September,” the panel’s spokesman, Jonathan Lynn, said in a statement Monday. “It is therefore premature and could be misleading to attempt to draw conclusions from it.”

After winning the Nobel Peace Prize six years ago, the group became a political target for climate doubters, who helped identify minor errors in the 2007 report. This time, the panel adopted rigorous procedures in the hope of preventing such mistakes.

Some climate doubters challenge the idea that the earth is warming at all; others concede that it is, but deny human responsibility; still others acknowledge a human role, but assert that the warming is likely to be limited and the impacts manageable. Every major scientific academy in the world has warned that global warming is a serious problem.

The panel shifted to a wider range for the potential warming, dropping the plausible low end to 2.7 degrees, after a wave of recent studies saying higher estimates were unlikely. But those studies are contested, and scientists at Stockholm are likely to debate whether to stick with that language.

Michael E. Mann, a climate scientist at Pennsylvania State University, said he feared the intergovernmental panel, in writing its draft, had been influenced by criticism from climate doubters, who advocate even lower numbers. “I think the I.P.C.C. on this point has once again erred on the side of understating the degree of the likely changes,” Dr. Mann said.

However, Christopher B. Field, a researcher at the Carnegie Institution for Science who serves on the panel but was not directly involved in the new draft, said the group had to reflect the full range of plausible scientific views.

“I think that the I.P.C.C. has a tradition of being very conservative,” Dr. Field said. “They really want the story to be right.”

Regarding the likely rise in sea level over the coming century, the new report lays out several possibilities. In the most optimistic, the world’s governments would prove far more successful at getting emissions under control than they have been in the recent past, helping to limit the total warming.

In that circumstance, sea level could be expected to rise as little as 10 inches by the end of the century, the report found. That is a bit more than the eight-inch increase in the 20th century, which proved manageable even though it caused severe erosion along the world’s shorelines.

At the other extreme, the report considers a chain of events in which emissions continue to increase at a swift pace. Under those conditions, sea level could be expected to rise at least 21 inches by 2100 and might increase a bit more than three feet, the draft report said.

Hundreds of millions of people live near sea level, and either figure would represent a challenge for humanity, scientists say. But a three-foot rise in particular would endanger many of the world’s great cities — among them New York; London; Shanghai; Venice; Sydney, Australia; Miami; and New Orleans.

A version of this article appears in print on August 20, 2013, on page A1 of the New York edition with the headline: Climate Panel Cites Near Certainty on Warming.

Occupying Wall Street: Places and Spaces of Political Action (Places)

PEER REVIEWED: JONATHAN MASSEY & BRETT SNYDER

The Design Observer Group

09.17.12
Occupy Wall Street activity online. [Timeline by the authors]

For nine weeks last fall crowds gathered every evening at the eastern end of Zuccotti Park, where a shallow crescent of stairs creates a modest amphitheater, to form the New York City General Assembly. A facilitator reviewed rules for prioritizing speakers and gestures by which participants could signal agreement or dissent. Over two hours or more, they worked through issues of common concern — every word repeated by the assembly, which formed a human microphone amplifying the speaker’s voice — until they reached consensus.

Such was the daily practice of Occupy Wall Street, paralleled in more than a thousand cities around the world. Participants borrowed tactics from Quaker meetings, Latin American popular assemblies, Spanish acampadas, and other traditions of protest and political organization. They also enacted something foundational to the western democratic tradition: constituting a polity as a group of speaking bodies gathered in a central public place.

At the same time, another crowd assembled in a range of online spaces. Moving between the physical and the virtual, participants navigated a hypercity built of granite and asphalt, algorithms and information, appropriating its platforms and creating new structures within it. As they posted links, updates, photos and videos on social media sites; as they deliberated in chat rooms and collaborated on crowdmaps; as they took to the streets with smartphones, occupiers tested the parameters of this multiply mediated world.

What is the layout of this place? What are its codes and protocols? Who owns it? How does its design condition opportunities for individual and collective action? Looking closely at these questions, we learn something about the possibilities for public life and political action created at the intersection of urban places and online spaces.


Top: Occupiers camp in Liberty Plaza as news vans line up across the street. Middle: Detail of #OccupyMap. Bottom: Occupy coordinators meet in the atrium of 60 Wall Street. [Photos by Jonathan Massey]

Occupying the Public Square 
Zuccotti Park — or Liberty Plaza — was the site not only of General Assembly but also of the bustling camp that materialized and sustained the occupation. As architects, we were fascinated by the intensive use of this privately owned public space. As citizens, we were inspired by the movement’s critique of the U.S. political system and its experiment with alternate forms of social organization. After the arrest of 700 protesters on the Brooklyn Bridge, Jonathan began visiting Liberty Plaza and occasionally participating in rallies. Brett tracked the movement’s use of new media to expose inequalities in wealth distribution. Jonathan enlisted friends to survey and document the encampment, while Brett developed an interactive project, Public Space 2.0, that linked Occupy to broader questions about public space. Following the eviction of occupiers in New York and other cities, we decided to collaborate on a project examining the spatial and social organization of Liberty Plaza.

In the tradition of urban demonstrations and sit-ins, the camp claimed a prominent and symbolically charged city space in order to call attention to a political cause. It provided logistical support as the first day of protest extended into a two-month occupation. It gave visitors a point of entry into the movement and its ideas. Moreover, it prefigured in microcosm the alternative polity desired by many participants, modeling and testing modes of self-organization partly autonomous from those provided by the state and the market.

As such, it embodied one of the defining tensions of Occupy Wall Street: between the aims of protest and prefiguration. [1] One reason for claiming Liberty Plaza was to command the attention of the public and the state. Indeed, the blog post that sparked the movement, by the Canadian magazine Adbusters, urged activists to create “a Tahrir moment” by insistently repeating “one simple demand” akin to the call for Egyptian president Hosni Mubarak’s resignation. [2] But some of the New York activists who planned the occupation pursued a vision of autonomous self-organization and self-government informed by anarchist principles. Occupiers refused to formulate their objectives as political demands, even though doing so might have strengthened their grip on the public imagination. Instead of a unified plea to elected representatives, broadcast from a central square, Occupy yielded a polyphony of discussions in the agoras of the hypercity.

Top: Occupiers in mid-October. Bottom: NYPD Skywatch portable surveillance tower. [Photos by Jonathan Massey]

From its founding on September 17, 2011, the occupation traced contours of regulation and control. Its location, design and construction limned the legal, juridical and police affordances of New York’s public realm, revealing the constraints placed on people assembling to form a counterpublic — a public operating according to practices distinct from those of the mainstream. [3] The declared site of the first protest, carnival, and General Assembly was Chase Manhattan Plaza, but occupiers arrived to find the corporate space closed off by barricades and patrolled by police. Prior General Assemblies had been held in New York public parks and squares, but organizers knew the city tightly controlled those spaces by requiring permits, enforcing nighttime closures and barricading areas. The use of city sidewalks was also curtailed. Bloombergville, a sidewalk encampment near City Hall, had survived for three weeks in July, but a test camp-out on Wall Street on September 1 had been broken up by police. [4] When demonstrators found Chase Plaza closed, they moved to the privately-owned Zuccotti Park, three blocks away, claiming the space with signs, megaphones, sleeping bags and blankets.

The following weeks confirmed that the state would use police control to assert its hegemony over the terms of public assembly and discourse. When protesters crossed the border of Liberty Plaza onto city streets or squares, they encountered “order maintenance policing,” a euphemistic directive that empowers New York police to intervene in public events irrespective of criminal action. Over the past 15 years, the NYPD has expanded the practice to assert control over parades, festivals and rallies, often arresting participants for “disorderly conduct” and releasing them without charge. [5] Under this vague authority, NYPD limited the range and duration of Occupy demonstrations and tightly controlled their internal dynamics through barricades, kettling and arrests.

And yet Occupy Wall Street showed that possibilities foreclosed on private and public land could be actualized in the liminal territory of the city’s privately owned public spaces (POPS) — plazas, arcades and other spaces built by real estate developers in return for density bonuses under the terms of the 1961 Zoning Resolution. [6] The occupation of Zuccotti Park was made possible by ambiguities in the POPS system, which has created places where the city government must negotiate authority with corporate owners as well as site occupants. Even so, the city intervened in the camp’s internal organization and operation: fire marshals prohibited tents and other structures in the early weeks; they removed generators as the weather grew cold in late October; and, shortly after midnight on November 15, police forcibly cleared the park.

Top: The planned site of the September 17 protest, Chase Manhattan Plaza, was barricaded at the request of its corporate owners. [Photo by David Shankbone] Bottom: Police patrol Zuccotti Park on November 15 after evicting protesters. [Photo by Jonathan Massey]

During the two-month occupation, protesters rewrote the social and spatial codes that had determined use of the block for decades. Created in the late 1960s as a POPS concession linked to the construction of One Liberty Plaza, the park was rebuilt by new owners Brookfield Properties in 2006 to a design by Cooper Robertson & Partners that serves downtown office workers by encouraging passive recreations like lunch and chess while discouraging active ones like cycling and skateboarding. In a related feature on Places, we look more closely at the Cooper Robertson design and its transformation into the Liberty Plaza encampment.

Stepping partially outside state and market systems, occupiers created their own structures for discussion and governance; for provision of daily services; for medical care and sacred space; for music, dance and art. Some aspects of this counterpublic resembled the exhilarating, liberatory “Temporary Autonomous Zones” described by anarchist writer Hakim Bey. [7] Others were pragmatic, even bureaucratic. Within days, working groups resembling urban agencies — dedicated to issues like Comfort, Medical, Kitchen, Library, Sanitation and Security — created a series of nodes or workstations that cut diagonally across the park. They appropriated design elements such as retaining walls, benches and tables to define functional zones.

In overlaying the permanent landscape of the park with new activities and installations, the occupation created what anthropologist Tim Ingold calls a “taskscape”: a topography of related activities deployed in space and changing over time. [8] Through their patterns of spatial appropriation, occupiers responded to the asymmetries of the park — its slope, the priority of Broadway relative to Trinity Place, and the more favorable sun and wind exposures available in the northeast corner — by programming the plaza along a gradient. Running from north and east to south and west, this gradient shaded from public to private, mind to body, waking to sleeping, and reason to faith. Outreach/Media/Legal claimed the location that afforded the most shelter and the best sun exposure while also being situated far from the noise and dust of the World Trade Center construction site.


Kitchen compost station and The People’s Library. [Photos by Jonathan Massey]

On the austere geometry of a tasteful corporate plaza, just under 33,000 square feet, the occupation created an entire world in which you could meditate, change your wardrobe, update your blog, cook lentils, read a book, sweep up litter, bandage a wound, bang a drum, roll a cigarette, debate how best to challenge corporate hegemony, make art, wash dishes and have sex, usually in the company of others.  The square teemed with friends and strangers, allies and antagonists; it was intensely public and interactive. Daily activities were saturated with a talky sociability in which the challenges and opportunities of every action, every decision, were open to reinterpretation and negotiation. At any moment, the call of “Mic check!” could ring out across the camp, obligating participants to drop personal conversations and become part of a communal discourse. The act of chanting in unison, as a human microphone, created a common sense of purpose, established relationships among neighbors and intensified awareness of surrounding bodies.

This new world could feel exhilarating and inspiring but also threatening and claustral. It was crowded. It was charged with strong emotions. Its core members were working hard, and they were often tired. On top of reforming global capitalism, they had to handle fights, thefts, drug use and sexual assaults, while operating under the strain of official hostility, police surveillance, constant interaction with supportive and hostile visitors, and weather. Radical openness and participatory self-government proved taxing. As the occupation stretched from days into weeks and months, participants took shelter from cold, rain and snow in tents and tarps. The plaza became more internalized and lost some of its intense sociability.

The functional zoning also reinforced sociological differences in the camp. Many of the most active members identified themselves as coordinators or occupiers. These groups were not mutually exclusive, but they gravitated toward spaces in separate ends of the park. Coordinators, who facilitated discussions and posted on blogs, often spent nights at home, while occupiers put their bodies on the line by living and sleeping in the park. A spatial gradient emerged, with occupiers’ tents clustered toward the western end. Not surprisingly, these constituencies were marked by differences in class, education level, ethnicity, sexuality and gender. The Daily Show even aired a skit about the differences, using “uptown” and “downtown” to describe the two ends of the park. [9]

Top: Sanitation workstation. [Photo by Jonathan Massey] Bottom: Liberty Plaza Site Map drawn by Occupy participant on October 10. [Map by Jake Deg]

Organizers worked hard to build the institutions needed to sustain the micro-city, but its autonomy was inherently limited; the camp was shaped by its adjacencies to the social, commercial and political networks of Lower Manhattan and the Financial District. Businesses provided restrooms. Sympathetic unions made facilities available. Organizations lent kitchen and office space. Individuals donated money, books, clothing and food. Murray Bergtraum High School opened its auditorium to meetings of the OWS Spokescouncil. A local government authority, Manhattan Community Board 1, mediated among protesters, neighborhood residents, Brookfield Properties and city officials in discussions about drum noise and other issues where order maintenance was enforced through claims about “quality of life.”

These interactions extended the spatial and social gradients of Liberty Plaza across a broader urban geography. Dozens of working groups met in the enclosed atrium at 60 Wall Street, a privately owned public space at the base of an office tower built by J.P. Morgan and currently occupied by Deutsche Bank. In that large room, designed by Roche and Dinkeloo and clad in marble and mirror and decorated with palm trees and postmodern grottoes, they shared space with chess-players and well-heeled denizens of the Financial District. From morning to night they used the tables, benches, chairs and wifi of the climate-controlled space as a purposeful, orderly extension of the eastern end of Liberty Plaza, establishing commuting patterns that figured 60 Wall as the Occupy office.

Occupying the Internet 
The Wall Street protests would not have materialized without extensive work by on-the-ground activists in New York. But it was the Adbusters blog post that gave the action a name and date. It also gave them #occupywallstreet, the first of thousands of #Occupy hashtags that enabled the spontaneous assembly of strangers on Twitter and other internet platforms. In the months leading up to the first occupation, and in the year afterward, Occupy established an online presence unmatched in the history of social action, leveraging multiple online spaces to stage protests and to generate a distinctive counterpublic and alternative polity.


Top: Occupiers connect via laptops and smartphones from Liberty Plaza. [Photo by David Shankbone] Bottom: Instagram photo sent by Occupy activist: “Riding in a bus with 50 others, in cuffs writing this.” [Photo by pulseprotest]

In the United States, the internet was largely exempt from the state control and censorship that curtailed protest activity on the street, but it was inherently open to surveillance and imposed another set of exclusions based on access to online spaces and protocols. Its various platforms afforded ties that were both broader and weaker than those at Liberty Plaza. Discussions took place in specialized forums and channels quite unlike the multisensory, multiparticipatory assemblies, meetings, marches and rallies of the physical realm. From its inception, Occupy tested the capacities of the internet’s virtual spaces to sustain organizational activity, deliberative discourse and other kinds of public-making. [10]

As with the physical occupation, many online actions had precedent in earlier movements, from the anti-globalization protests of the 1990s to the Arab Spring of 2011. For years U.S. activists have used sites like Indymedia to distribute information and mobilize protest participation. [11] After posting its call to action, Adbusters sent word to its email distribution list and created a Facebook event, mobilizing a pre-existing network of followers. As one of the largest privately owned public spaces online, Facebook became a key platform for the Occupy movement. Facebook profiles such as OccupyWallSt, Gilded.Age and OccupyTogether, created in the weeks leading up to the first protest, provided broadly accessible channels for information. When individuals “liked” or commented on items in these newsfeeds, Occupy ideas propagated through user-generated social networks. Throughout the fall, members used the site’s text, link, note, and photo and video sharing features to endorse events and activities. [12]

During the groundwork phase, organizers also used open-source web-coding tools to create dedicated Occupy websites. The most important were Occupywallst.org, a Github site launched in mid-July as a clearinghouse and contact-point for the movement; NYCGA.net, a WordPress site created a few weeks later to serve the New York City General Assembly and its working groups; and the blog Occupytogether.org. These sites combined newsfeeds and social media links with manifestos, videos, crowdmaps and other resources, and they linked together other sites to create a sprawling landscape of information.


A selection of the more than 1600 posts submitted to the 99 Percent Project in October 2011.

In parallel, organizers tapped the internet’s capacity to build what sociologists Jennifer Earl and Katrina Kimport call “e-movements”: politically effective campaigns that circulate in the media without necessarily coalescing into mass gatherings. Online tools provide immediate and inexpensive site design and back-end functionality, allowing organizations or individuals to launch awareness campaigns and other political actions that demand little money or time from participants. [13] One such tool for Occupy activists was the image-based microblogging site Tumblr. In late summer, the 99 Percent Project invited people to “get known” as part of a majority disenfranchised by the super-rich. Under the slogan “We Are the 99 Percent,” the image blog featured self-portraits of working- and middle-class Americans holding handwritten signs or letters describing the circumstances of their indebtedness. The project called attention to the rise in income inequality and helped popularize the rhetoric of “the 99 percent.” [14] After September 17, it became an online analogue to Liberty Plaza, enabling a geographically dispersed set of participants to join the occupation of Wall Street and forging a common consciousness about debt and disenfranchisement. The self-portraits were often shot at a computer desk with a webcam, and overwhelmingly they were set in domestic interiors like living rooms, dens and bedrooms. But the handwritten signs pointed to a world outside those walls, evoking the signs of the homeless explaining their misfortunes and asking for help, as well as the signs of protesters bearing expressions of solidarity and calls to action. [15]


Global crowdmap on the Ushahidi platform. [Screenshot by the authors]

Contours of the Hypercity 
In the summer of 2011, before the first protesters had set foot in Liberty Plaza, the Occupy movement was evolving toward a model of General Assembly that hybridized online and offline discourse. While street activists in New York were practicing consensus decision-making in public parks, online participants were responding to a poll Adbusters created using Facebook’s “question” function: “What is our one demand?” Answers included abolishing capitalism, demilitarizing the police, legalizing marijuana, reinstating the Glass-Steagall Act and freeing the unicorns. (The winner was “Revoke Corporate Personhood.”) Through this asynchronous online polling, Facebook supported a weak form of political discussion that prefigured the stronger and more interactive deliberations that filled Liberty Plaza.

By September 10, General Assembly minutes were being posted online at NYCGA. Over time these became more elaborate, and note-takers projected their evolving documents on a screen in Liberty Plaza so that participants could respond to the minutes-in-the-making. Assembly meetings were livestreamed so that participants across the globe could follow in real time, and some were archived online in audio and video formats. Congregants also livetweeted the assemblies under Twitter handles such as @DiceyTroop and @LibertySqGA. These accounts attracted thousands of followers, many of whom responded to live events, adding a layer of online conversation that augmented the face-to-face assemblies.

Hybrid discussions were the norm for the working groups that handled the day-to-day and week-to-week activity of Occupy Wall Street. During and after the occupation, working groups met regularly at Liberty Plaza, the 60 Wall atrium, Union Square and other locations throughout New York. A blackboard at Liberty Plaza listed some of these meetings, but more reliable information was found online at NYCGA, where nearly every working group had a page with a blog, activity wall, shared documents and event calendar, and discussion forum involving members who had never attended the face-to-face meetings. By spring 2012, the site hosted roughly 90 working groups, some with just a handful of registered users and a couple of posts, others with many hundreds of users and more than 2,000 entries.


Top: Blackboard at Liberty Plaza announces working group meetings. [Photo by Jonathan Massey] Bottom left: Livestream at Occupy Detroit. [Photo by Stephen Boyle] Bottom right: “People’s Mic: Please join the Conversation.” 24/7 internet broadcast from Occupy Wall Street. [Photo by Chris Rojas]

As the weather changed in late October, the Town Planning forum hosted extensive discussions on a topic that simultaneously preoccupied the group’s in-person meetings and the General Assembly: how to sustain the camp into the winter. One participant lit up the forum with a long post advocating event tents that would cover large expanses of the park in communal enclosures, as an alternative to individual camping tents. “Safety teams are unfortunately learning … that privacy equals risk,” wrote Sean McKeown, “because privacy allows for unseen violence, unseen sexual menace, and for drugs, alcohol, and weapons to be kept in shockingly large number if we are to guess by the number of needles found around tents lately since they have gone up.” [16] Members suggested building geodesic domes or frame structures with salvaged materials, or claiming regulatory exemption by designating the camp as a Native American sacred site. The reconfiguration of Liberty Plaza at the beginning of November was negotiated simultaneously in the park, in dispersed work-group meetings, and on the internet.

While online forums, as the Latin term implies, evoke the experience of face-to-face discussion, other online technologies create public spaces without analogue in the physical world. The Twitter hashtag, for example, enables radically new modes of creating, discovering and organizing affinity clusters, which proved useful in movements like the January 25 Egyptian Revolution and the Green Revolution in Iran. In self-conscious emulation of those precedents, Adbusters branded September 17 with the hashtag #occupywallstreet, signaling an expectation that participants would use Twitter to communicate with one another and with larger publics.

It took more than a week for the hashtag to catch on, and from July 25 through the end of August, the four hashtags #occupywallstreet, #occupywallst, #occupy and #ows together accounted for an average of only 27 messages per day. Activity increased in September, and by the day of occupation, Twitter volume on this group of hashtags hit 78,351 as the broader public of participants, bystanders and commentators joined organizers in using the platform for real-time microblogging of information, opinions and photos. Twitter’s instantaneous syndication was a valuable conduit for time-sensitive news, and its 140-character message limit was well suited to the mobile devices that predominated in Liberty Plaza. Some activists used photo, video and geotagging features on their phones to make Twitter a medium for mapping and building the extended Occupy taskscape. Volume on those four hashtags peaked at 411,117 on November 15, the day protesters were evicted from the park. [17]


Visualization of the Occupy movement online, July to December 2011, including activity on Google, Facebook, Twitter, blogs, and We Are the 99 Percent. [Timeline by the authors]

Many other online spaces provided venues for discourse and arenas for participation. Internet relay chat channels allowed participants to talk to one another, individually and in groups. Live video streams from Liberty Plaza and other camps opened real-time windows onto parks, squares and streets around the world. Occupystreams.org compiled more than 250 such livestreams, each flanked on screen by a chat feed. Video and photo-sharing sites such as YouTube, Vimeo, Flickr and Instagram enabled participants to post, share and discuss images of Occupy protests, police actions, and other content. Much of this activity garnered only limited interest, but some posts went viral, such as the late September video of a high-ranking NYPD officer pepper-spraying women who had already been corralled on the sidewalk. Edited and annotated with the low-tech tools that support user-generated content, the video broadened awareness of and sympathy for the occupation.

As social media expanded the range of channels for participation in Occupy Wall Street, it also changed the nature of the public that joined. Extrapolating from the work of anthropologist Jeffrey S. Juris, we can contrast the network logics that predominated in summer 2011, when organizations and activists used email lists and websites to mobilize pre-existing networks, with a new set of aggregation logics that developed as the event took off. Social media engaged many thousands of people who had no pre-existing connection to social change organizations and activist networks. These virtual spaces, even more than city parks, became points of encounter where previously unrelated individuals aggregated to form popular assemblies.

Focusing on Occupy Boston, Juris suggests that while the alter-globalization protests of the 1990s created “temporary performative terrains along which networks made themselves and their struggles visible,” the Occupy movement activated a wider public. “Rather than providing spaces for particular networks to coordinate actions and physically represent themselves,” he writes, “the smart mob protests facilitated by social media such as Facebook and Twitter make visible crowds of individuals aggregated within concrete locales.” [18]

Political scientist Stefania Milan has characterized Occupy protests as “cloud protesting,” comparing the movement to “a cloud where a set of ‘soft resources’ coexist: identities, narratives, and know-how, which facilitate mobilization,” much as social media hosted via cloud computing gives individuals the tools for “producing, selecting, punctuating, and diffusing material like tweets, posts and videos.” [19]


Top: Protest sign in Times Square: “Get off the internet. I’ll Meet you in the streets.” [Photo by Geoff Stearns] Bottom: Collaboratively edited User Map at OccupyWallSt.org.

Though Milan and Juris don’t address them, we could add crowdmaps to the list of “cloud tools” that activated aggregation logics in the Occupy movement. Online maps populated by user-generated content were published at Take the Square, US Day of Rage, OccupyWallSt.org, and Occupy.net. Most used Ushahidi, free open-source crowdmapping software developed in 2008 in Kenya to support disaster relief and response efforts. By compiling data into a common geospatial framework, these crowdmaps visualized Occupy participants and camps as discrete elements that aggregated to form a global phenomenon. They associated people, texts, images and videos with particular places, constructing hypergeographies of action and potential. Animated timeline features encouraged users to visualize themselves and local events as part of a process of “#globalchange.”

The most robust crowdmap was the #OccupyMap at Occupy.net, built by the Tech Ops working group of NYCGA. It provided a web interface for reporting events such as marches, rallies and police interventions, with easy media embedding and compatibility with the Ushahidi app on iOS and Android mobile devices. It also populated automatically from Twitter: any tweet from a location-enabled device that included the hashtag #occupymap generated a geotagged report that could incorporate photos and videos via the Twitpic and Twitvid apps. By spring 2012, the map had aggregated some 900 entries from New York City into a database that could be sorted geographically and temporally, by medium and by event type — all viewable via map, timeline and photo interfaces. By pulling together disparate events and data across space and time, the #OccupyMap created a counterpublic integrated through its use of online media to contest state and corporate control of urban places.
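The aggregation mechanics can be illustrated with a small sketch. Everything below, the field names, the filtering rule, the bounding-box query, is a hypothetical reconstruction of the idea of turning geotagged posts into a sortable report database, not the Tech Ops working group's actual code.

```python
# Hypothetical crowdmap aggregation: geotagged posts become reports that can
# be filtered by place; fields and rules are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Report:
    lat: float
    lon: float
    when: datetime
    event_type: str          # e.g. "march", "rally", "police"
    text: str
    media_urls: List[str] = field(default_factory=list)

class CrowdMap:
    def __init__(self):
        self.reports: List[Report] = []

    def add_from_tweet(self, tweet):
        """Turn a location-enabled tweet (a plain dict here) into a geotagged report."""
        if "#occupymap" not in tweet["text"].lower() or tweet.get("coords") is None:
            return None
        report = Report(lat=tweet["coords"][0], lon=tweet["coords"][1],
                        when=tweet["time"], event_type=tweet.get("event_type", "unknown"),
                        text=tweet["text"], media_urls=tweet.get("media", []))
        self.reports.append(report)
        return report

    def within(self, lat, lon, radius_deg=0.05):
        """Crude bounding-box filter, adequate at neighborhood scale."""
        return [r for r in self.reports
                if abs(r.lat - lat) < radius_deg and abs(r.lon - lon) < radius_deg]

cmap = CrowdMap()
cmap.add_from_tweet({"text": "March on Broadway #OccupyMap", "coords": (40.709, -74.011),
                     "time": datetime(2011, 11, 15, 1, 30), "event_type": "police"})
print(len(cmap.within(40.71, -74.01)))
```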

The Occupy crowdmaps were most compelling rhetorically at larger scales, where they visualized landscapes fundamentally distinct from those visible in city streets. In counterpoint to the intense attention paid to Liberty Plaza, these virtual geographies redefined the public of Occupy Wall Street as a dispersed set of agents linked more by online communication channels than by proximity. Viewed at national scale, the red placemarker icons on the User Map at OccupyWallSt.org suggested a crowd of hot air balloons that had landed — or were preparing to take off — all across the country. In places they clustered so tightly as to create red contours marking an otherwise invisible topography of radicalism. But at the local scale, what had seemed a continuous landscape of occupiers thinned out; zooming in on Liberty Plaza, you saw only a forlorn green oblong scattered with a few markers.

Open-Source Urbanism 
While some online activists relied on corporate media such as Facebook and Twitter to reach a broad public, many made a point of using open-source software, sources and methods such as wikicoding. Occupy websites became spaces for the elaboration of what Christopher Kelty calls a recursive public, “a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence.” [20] In the physical realm, Liberty Plaza and other occupied spaces functioned as offline analogues of a wiki page. Participants without much prior affiliation built new worlds and organized themselves to maintain them while avoiding hierarchy and formalization whenever they could. At these “wikicamps,” open-source urbanism operated at a scale simultaneously local and global. [21] The New York camp was built with knowledge, ideas and resources from Spain and Argentina, Chiapas and Cairo, as well as from local coalitions.


Jonathan Massey and Brett Snyder map Liberty Plaza’s functional zones and activities. See the sidebar “Mapping Liberty Plaza” for axonometric drawings of the site’s transformation.

Participants have continued to explore the ways that digital media can reshape our public spaces and public spheres. One example is a course project at The New School that emerged from a multi-day, multi-city “hackathon” sponsored by the working group Occupy Research. The Twitter bot @OccupyPOPS is a script that cross-references check-ins on social media sites Foursquare and Twitter with the New York City government database of privately-owned public spaces, then automatically tweets a call to temporarily occupy a particular POPS at a specific date and time. Created by Christo de Klerk, @OccupyPOPS mobilizes virtual spaces, physical places and social networks to reshape urban public space and the regulations that govern it. Other New York-based projects addressing the issues foregrounded by Occupy include #whOWNSpace and The Public School, as well as pre-existing initiatives like Not an Alternative.
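The cross-referencing logic behind such a bot is simple enough to sketch. The POPS entries, the check-in format, and the message template below are hypothetical stand-ins chosen for the example; the real @OccupyPOPS script may work quite differently.

```python
# Hypothetical sketch of matching a check-in against a POPS list and drafting
# a call to gather there; coordinates and template are illustrative.
from datetime import datetime, timedelta

POPS = [  # (name, lat, lon) -- illustrative entries only
    ("Zuccotti Park", 40.7093, -74.0111),
    ("60 Wall Street Atrium", 40.7066, -74.0089),
]

def nearest_pops(lat, lon, max_deg=0.003):
    """Return the closest listed POPS within a small bounding box, or None."""
    best = None
    for name, plat, plon in POPS:
        dist = abs(plat - lat) + abs(plon - lon)
        if dist < max_deg and (best is None or dist < best[0]):
            best = (dist, name)
    return best[1] if best else None

def draft_call(checkin):
    """Build the text of a call-to-occupy message from one check-in dict."""
    name = nearest_pops(checkin["lat"], checkin["lon"])
    if name is None:
        return None
    when = checkin["time"] + timedelta(hours=2)
    return f"Pop-up assembly at {name}, {when:%b %d %I:%M %p}. #OccupyPOPS"

print(draft_call({"lat": 40.7094, "lon": -74.0112, "time": datetime(2012, 3, 1, 16, 0)}))
```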

Open-source hypercity urbanism becomes increasingly important as governments constrain public assembly in the offline world. On November 15, the state cleared the experimental agora at Liberty Plaza. Police and sanitation workers with bulldozers removed tents and tarps while resisting occupiers fell back to the People’s Kitchen. As NYPD blockaded the surrounding streets and airspace, people and texts and media feeds streamed out from an atmosphere made toxic by chemical and sonic weapons. Coordinated police actions evicted occupiers in Oakland, Portland, Denver and other cities.

Occupy Wall Street working groups and General Assemblies continue to meet in the 60 Wall Street atrium and other public locations, and to stage intermittent marches, rallies and actions. Occupations were sustained in other cities around the world, and activists tried several times to retake Zuccotti Park. Without its base camp, the Occupy movement relied even more extensively on websites and other online media as its primary means of communication and self-representation. This activity expanded into an array of diffuse campaigns: to reduce and renegotiate student debt; to resist foreclosures and reclaim bank-owned houses; and to challenge corporate power on many fronts.


Top: Sign posted at the 60 Wall atrium on November 15: “No excessive use of space.” [Photo by Johanna Clear] Bottom: Protesters remove police barriers and reoccupy Zuccotti Park on November 17. [Photo by Brennan Cavanaugh]

Occupy Wall Street had an immediate impact on U.S. domestic politics. Counteracting anti-deficit rhetoric from the Republican Party and Tea Party activists who sought to cut social services while borrowing heavily to fund wars and regressive income redistributions, the Occupy movement shifted the focus of mainstream political discourse to income inequality and the burdens of consumer debt. For many participants and observers, though, its more compelling achievement was to embody a minimally hierarchical communitarian polity that combined consensual direct democracy with a high degree of individual autonomy, and also a voluntary sharing economy with the market logics and state service provision that dominate everyday urban life. The longer-term impact of #OWS may well stem from the techniques it modeled online and in the streets for building new publics and polities.

What might this history mean for the future of public space and political action? Events are still unfolding, so the question is open-ended. But here are some provisional conclusions:

  • Online tools are rapidly changing the dynamics of political action. The aggregative, rhizomatic, and exponentially expanding character of the Occupy movement reflects the distinctive capacities of social media.
  • Media are accelerating the pace of discourse and action. Flash mobs and viral tweets may be excessively hyped, but the compressed temporality of the new media landscape is reflected in the rapid emergence, metastasis, and dormancy of Occupy Wall Street.
  • Digital communities are good at building systems. Wikicoding and other modes of online collaboration can build online venues fast and well.
  • These communities may still require face-to-face interaction to achieve substantive change. Digital communication is easy, but for that reason it can feel too light and weightless to mobilize people for the tenacious action it often takes to achieve deep structural changes.
  • Bodies in the street still matter for commanding attention and galvanizing engagement.
  • Modern forms of police control violate basic civil liberties. From the constraints placed on all manner of public assembly to the everyday civil rights violations of the stop-and-frisk system, police in New York and some other American cities have passed a dangerous tipping point.
  • Asserting a broad right to the city means claiming public places, online and offline, for assembly, dialogue and deliberation by multiple publics with varying spatial and temporal requirements.
  • Privately owned public spaces offer platforms for experimentation. The prevalence of corporate enclaves in our cities and online often homogenizes and constrains public life, but Occupy Wall Street showed that POPS can be sites for public-making and political action.
  • But users should reclaim some of the value we create in using corporate media. Activists should find ways to gain at least partial control over the valuable and revealing information trails that users generate through activity online and in our cities.

Finally, initiative is shifting to global-local coalitions. While Occupy was often framed in nationalist terms, its more pervasive character was simultaneously transnational and highly local, reflecting the new geographies of capitalism and its media. The intersections between global and local, online and face-to-face, reformist and radical are promising sites for the creation of the new publics and polities that might open up futures beyond the neoliberal state.


Editors’ Note
 

See the sidebar “Mapping Liberty Plaza” for axonometric drawings of the site’s transformation, by Jonathan Massey and Brett Snyder. For related content on Places, see also “Occupy: What Architecture Can Do” and “Occupy: The Day After,” by Reinhold Martin, and “Housing and the 99 Percent,” by Jonathan Massey.

Authors’ Note 

Andrew Weigand and Grant D. Foster assisted with research and visualization for this project.

We would like to thank many colleagues who contributed research and ideas. Early discussions about Occupy Wall Street included Joy Connolly, Elise Harris, Greg Smithsimon and Jenny Uleman. Matt Boorady, Timothy Gale, Steve Klimek, Gabriella Morrone and Nathaniel Wooten contributed to the mapping and surveying of Liberty Plaza. Jennifer Altman-Lupu, Rob Daurio and Katie Gill shared Occupy Wall Street maps they had made and gathered. The Transdisciplinary Media Studio at Syracuse University supported our research with funding from a Chancellor’s Leadership Initiative.

The project benefited from feedback at two stages. The Aggregate Architectural History Collaborative workshopped an early version of the text. Organizers and participants in the National Endowment for the Humanities Summer Institute in Digital Humanities, “Digital Cultural Mapping,” held at UCLA in June and July 2012, helped us develop the project both intellectually and representationally. Particular thanks to organizers Todd Presner, Diane Favro and Chris Johanson, and to consultants Zoe Borovsky, Yoh Kawano, David Shepard and Elaine Sullivan, as well as Micha Cárdenas of USC.

Notes 

1. See Doug Singsen, “Autonomous Zone on Wall Street?,” Socialist Worker, October 11, 2011.
2. “#OCCUPYWALLSTREET,” Adbusters, July 31, 2011.
3. On Occupy Oakland as a counterpublic, see Allison Laubach Wright, “Counterpublic Protest and the Purpose of Occupy: Reframing the Discourse of Occupy Wall Street,” Plaza: Dialogues in Language and Literature 2.2 (Spring 2012): 138-146.
4. “Nine Arrested and Released Without Charge in Occupy Wall Street Test Run,” Occupy Wall Street, September 8, 2011. For early histories of OWS in New York, see Writers for the 99%, Occupying Wall Street: The Inside Story of an Action that Changed America (New York and London: OR Books, 2011), and Occupy!: Scenes from Occupied America, ed. Astra Taylor, Keith Gessen, et al. (London: Verso, 2011).
5. See Alex Vitale, “NYPD and OWS: A Clash of Styles,” in Occupy!: Scenes from Occupied America, 74-81; and Vitale, City of Disorder: How the Quality of Life Campaign Transformed New York Politics (New York: NYU Press, 2008).
6. On the POPS system, see Jerold S. Kayden et al., Privately Owned Public Spaces: The New York City Experience (John Wiley & Sons, 2000); and Benjamin Shepard and Greg Smithsimon, The Beach Beneath the Streets: Contesting New York City’s Public Spaces (Albany: Excelsior Editions/State University of New York Press, 2011), Chs. 2-3.
7. Hakim Bey, T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism (New York: Autonomedia, 1985). See also Shepard and Smithsimon, The Beach Beneath the Streets, Ch. 1.
8. Tim Ingold, “The Temporality of the Landscape,” World Archaeology, 25:2 (1993): 152-174. Thanks to Jennifer Altman-Lupu for suggesting this way of understanding Liberty Plaza.
9. The Daily Show with Jon Stewart, “Occupy Wall Street Divided,” 16 November 2011. For a more serious account, see Writers for the 99%, Occupying Wall Street, 61-67.
10. The Occupy movement online combined two modes that Sándor Végh describes as “internet-enhanced activism” and “internet-enabled activism.” See “Classifying Forms of Online Activism: The Case of Cyberprotests against the World Bank,” in Cyberactivism: Online Activism in Theory and Practice, ed. Martha McCaughey and Michael D. Ayers (Portsmouth, NH: Routledge, 2003), 71-96. These approaches constituted what we might call a digital repertory of contention. See Charles Tilly, Regimes and Repertoires (Chicago: University of Chicago Press, 2006), and Brett Rolfe, “Building an Electronic Repertoire of Contention,” Social Movement Studies 4:1 (May 2005): 65-74.
11. Jennifer Earl and Katrina Kimport call this “e-mobilization”: using the web to facilitate and coordinate in-person protest. See Digitally Enabled Social Change: Activism in the Internet Age (Cambridge: MIT Press, 2011).
12. Some commentators even used the site’s “notes” function to publish commentaries on and critiques of the movement for others to discuss and repost. See, for instance, Greg Tate’s note “Top Ten Reasons Why So Few Blackfolk Seem Down to Occupy Wall Street,” 17 October 2011.
13. See Earl and Kimport, Digitally Enabled Social Change, Introduction.
14. See Adam Weinstein, “‘We Are the 99 Percent’ Creators Revealed,” Mother Jones, 7 October 2011, and Rebecca J. Rosen, “The 99 Percent Tumblr and Self-Service History,” The Atlantic, 10 October 2011.
15. After a slow start in August 2011, participation in the 99 Percent Project spiked at the beginning of October 2011, as the Brooklyn Bridge march and arrests spread awareness of Occupy Wall Street. Activity peaked on October 20, when site managers posted 264 photos and site visitors added nearly 6,000 comments. By the end of May 2012, the project encompassed 3255 posts and more than 134,000 comments.
16. Sean McKeown, “Winter Event Tents for Liberty Plaza,” Town Planning forum, New York City General Assembly.
17. Twitter data is drawn from a dataset compiled by social analytics company PeopleBrowsr.
18. Jeffrey S. Juris, “Reflections on #Occupy Everywhere: Social media, public space, and emerging logics of aggregation,” American Ethnologist 39:2 (2012): 259-79: 260-61.
19. Stefania Milan, “Cloud Protesting: On Mobilization in Times of Social Media,” lecture, 10 February 2012 (abstract).
20. Christopher Kelty, Two Bits: The Cultural Significance of Free Software (Duke University Press, 2008). See also “Recursive Public,” The Foundation for P2P Alternatives.
21. “Wikicamps” adapts the term that sociologist Manuel Castells used to describe the camps that filled Spanish plazas beginning in May 2011. See Castells, “The Disgust Becomes a Network” (translation of “#Wikiacampadas,” La Vanguardia, 28 May 2011), trans. Hugh Green, Adbusters 97 (2 August 2011).

The Real War on Reality (New York Times)

THE STONE June 14, 2013, 12:00 pm

By PETER LUDLOW

If there is one thing we can take away from the news of recent weeks it is this: the modern American surveillance state is not really the stuff of paranoid fantasies; it has arrived.

The revelations about the National Security Agency’s PRISM data collection program have raised awareness — and, understandably, concern and fears — among Americans and those abroad about the reach and power of secret intelligence gatherers operating behind the facades of government and business.

Surveillance and deception are not just fodder for the next “Matrix” movie, but a real sort of epistemic warfare.

But those revelations, captivating as they are, have been partial — they primarily focus on one government agency and on the surveillance end of intelligence work, purportedly done in the interest of national security. What has received less attention is the fact that most intelligence work today is not carried out by government agencies but by private intelligence firms, and that much of that work involves another common aspect of intelligence work: deception. That is, it is involved not just with the concealment of reality, but with the manufacture of it.

The realm of secrecy and deception among shadowy yet powerful forces may sound like the province of investigative reporters, thriller novelists and Hollywood moviemakers — and it is — but it is also a matter for philosophers. More accurately, understanding deception and how it can be exposed has been a principal project of philosophy for the last 2,500 years. And it is a place where the work of journalists, philosophers and other truth-seekers can meet.

In one of the most referenced allegories in the Western intellectual tradition, Plato describes a group of individuals shackled inside a cave with a fire behind them. They are able to see only shadows cast upon a wall by the people walking behind them. They mistake shadows for reality. To see things as they truly are, they need to be unshackled and make their way outside the cave. Reporting on the world as it truly is outside the cave is one of the foundational duties of philosophers.

In a more contemporary sense, we should also think of the efforts to operate in total secrecy and engage in the creation of false impressions and realities as a problem area in epistemology — the branch of philosophy concerned with the nature of knowledge. And philosophers interested in optimizing our knowledge should consider such surveillance and deception not just fodder for the next “Matrix” movie, but as a real sort of epistemic warfare.


To get some perspective on the manipulative role that private intelligence agencies play in our society, it is worth examining information that has been revealed by some significant hacks in the past few years of previously secret data.

Important insight into the world of these companies came from a 2010 hack by a group best known as LulzSec (at the time the group was called Internet Feds), which targeted the private intelligence firm HBGary Federal. That hack yielded 75,000 e-mails. It revealed, for example, that Bank of America approached the Department of Justice over concerns about information that WikiLeaks had about it. The Department of Justice in turn referred Bank of America to the lobbying firm Hunton and Williams, which in turn connected the bank with a group of information security firms collectively known as Team Themis.

Team Themis (a group that included HBGary and the private intelligence and security firms Palantir Technologies, Berico Technologies and Endgame Systems) was effectively brought in to find a way to undermine the credibility of WikiLeaks and the journalist Glenn Greenwald (who recently broke the story of Edward Snowden’s leak of the N.S.A.’s Prism program),  because of Greenwald’s support for WikiLeaks. Specifically, the plan called for actions to “sabotage or discredit the opposing organization” including a plan to submit fake documents and then call out the error. As for Greenwald, it was argued that he would cave “if pushed” because he would “choose professional preservation over cause.” That evidently wasn’t the case.

Team Themis also developed a proposal for the Chamber of Commerce to undermine the credibility of one of its critics, a group called Chamber Watch. The proposal called for first creating a “false document, perhaps highlighting periodical financial information,” giving it to a progressive group opposing the Chamber, and then subsequently exposing the document as a fake to “prove that U.S. Chamber Watch cannot be trusted with information and/or tell the truth.”

(A photocopy of the proposal can be found here.)

In addition, the group proposed creating a “fake insider persona” to infiltrate Chamber Watch.  They would “create two fake insider personas, using one as leverage to discredit the other while confirming the legitimacy of the second.”

Psyops need not be conducted by nation states; they can be undertaken by anyone with the capabilities and the incentive to conduct them.

The hack also revealed evidence that Team Themis was developing a “persona management” system — a program, developed at the specific request of the United States Air Force, that allowed one user to control multiple online identities (“sock puppets”) for commenting in social media spaces, thus giving the appearance of grass roots support.  The contract was eventually awarded to another private intelligence firm.

This may sound like nothing so much as a “Matrix”-like fantasy, but it is distinctly real, and resembles in some ways the employment of “Psyops” (psychological operations), which as most students of recent American history know, have been part of the nation’s military strategy for decades. The military’s “Unconventional Warfare Training Manual” defines Psyops as “planned operations to convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals.” In other words, it is sometimes more effective to deceive a population into a false reality than it is to impose its will with force or conventional weapons.  Of course this could also apply to one’s own population if you chose to view it as an “enemy” whose “motives, reasoning, and behavior” needed to be controlled.

Psyops need not be conducted by nation states; they can be undertaken by anyone with the capabilities and the incentive to conduct them, and in the case of private intelligence contractors, there are both incentives (billions of dollars in contracts) and capabilities.


Several months after the hack of HBGary, a Chicago-area activist and hacker named Jeremy Hammond successfully hacked into another private intelligence firm (Strategic Forecasting Inc., or Stratfor) and released approximately five million e-mails. This hack provided a remarkable insight into how the private security and intelligence companies view themselves vis-à-vis government security agencies like the C.I.A. In a 2004 e-mail to Stratfor employees, the firm’s founder and chairman George Friedman was downright dismissive of the C.I.A.’s capabilities relative to their own: “Everyone in Langley [the C.I.A.] knows that we do things they have never been able to do with a small fraction of their resources. They have always asked how we did it. We can now show them and maybe they can learn.”

The Stratfor e-mails provided us just one more narrow glimpse into the world of the private security firms, but the view was frightening.  The leaked e-mails revealed surveillance activities to monitor protestors in Occupy Austin as well as Occupy’s relation to the environmental group Deep Green Resistance.  Staffers discussed how one of their own men went undercover (“U/C”) and inquired about an Occupy Austin General Assembly meeting to gain insight into how the group operates.

Stratfor was also involved in monitoring activists who were seeking reparations for victims of a chemical plant disaster in Bhopal, India, including a group called Bhopal Medical Appeal. But the targets also included The Yes Men, a satirical group that had humiliated Dow Chemical with a fake news conference announcing reparations for the victims. Stratfor regularly copied several Dow officers on the minutiae of activities by the two members of the Yes Men.

One intriguing e-mail revealed that the Coca-Cola company was asking Stratfor for intelligence on PETA (People for the Ethical Treatment of Animals), with Stratfor’s vice president for intelligence claiming that “The F.B.I. has a classified investigation on PETA operatives. I’ll see what I can uncover.” From this one could get the impression that the F.B.I. was in effect working as a private detective for Stratfor and its corporate clients.

Stratfor also had a broad-ranging public relations campaign.  The e-mails revealed numerous media companies on its payroll. While one motivation for the partnerships was presumably to have sources of intelligence, Stratfor worked hard to have soap boxes from which to project its interests. In one 2007 e-mail, it seemed that Stratfor was close to securing a regular show on NPR: “[the producer] agreed that she wants to not just get George or Stratfor on one time on NPR but help us figure the right way to have a relationship between ‘Morning Edition’ and Stratfor.”

On May 28 Jeremy Hammond pled guilty to the Stratfor hack, noting that even if he could successfully defend himself against the charges he was facing, the Department of Justice promised him that he would face the same charges in eight different districts and he would be shipped to all of them in turn. He would become a defendant for life. He had no choice but to plead to a deal under which he may be sentenced to 10 years in prison. But even as he made the plea he issued a statement, saying “I did this because I believe people have a right to know what governments and corporations are doing behind closed doors. I did what I believe is right.” (In a video interview conducted by Glenn Greenwald with Edward Snowden in Hong Kong this week, Snowden expressed a similar ethical stance regarding his actions.)

Given the scope and content of what Hammond’s hacks exposed, his supporters agree that what he did was right. In their view, the private intelligence industry is effectively engaged in Psyops against the American public, engaging in “planned operations to convey selected information to [us] to influence [our] emotions, motives, objective reasoning and, ultimately, [our] behavior.” Or, as the philosopher might put it, they are engaged in epistemic warfare.

The Greek word deployed by Plato in “The Cave” — aletheia — is typically translated as truth, but is more aptly translated as “disclosure” or “uncovering” —   literally, “the state of not being hidden.”   Martin Heidegger, in an essay on the allegory of the cave, suggested that the process of uncovering was actually a precondition for having truth.  It would then follow that the goal of the truth-seeker is to help people in this disclosure — it is to defeat the illusory representations that prevent us from seeing the world the way it is.  There is no propositional truth to be had until this first task is complete.

This is the key to understanding why hackers like Jeremy Hammond are held in such high regard by their supporters.  They aren’t just fellow activists or fellow hackers — they are defending us from epistemic attack.  Their actions help lift the hood that is periodically pulled over our eyes to blind us from the truth.

Peter Ludlow is a professor of philosophy at Northwestern University and is currently co-producing (with Vivien Weisman) a documentary on Hacktivist actions against private intelligence firms and the surveillance state.

Robo-Pets May Contribute to Quality of Life for Those With Dementia (Science Daily)

June 24, 2013 — Robotic animals can help to improve the quality of life for people with dementia, according to new research.

Professor Glenda Cook with PARO seal. (Credit: Image courtesy of Northumbria University)

A study has found that interacting with a therapeutic robot companion made people with mid- to late-stage dementia less anxious and also had a positive influence on their quality of life.

The pilot study, a collaboration led by Professor Wendy Moyle from Griffith University, Australia and involving Northumbria University’s Professor Glenda Cook and researchers from institutions in Germany, investigated the effect of interacting with PARO — a robotic harp seal — compared with participation in a reading group. The study built on Professor Cook’s previous ethnographic work carried out in care homes in North East England.

PARO is fitted with artificial intelligence software and tactile sensors that allow it to respond to touch and sound. It can show emotions such as surprise, happiness and anger, can learn its own name and learns to respond to words that its owner uses frequently.

Eighteen participants, living in a residential aged care facility in Queensland, Australia, took part in activities with PARO for five weeks and also participated in a control reading group activity for the same period. Following both trial periods, the researchers used recognised clinical dementia measurements to assess how the activities had influenced the participants’ quality of life, tendency to wander, level of apathy, and levels of depression and anxiety.

The findings indicated that the robots had a positive, clinically meaningful influence on quality of life, increased levels of pleasure and also reduced displays of anxiety.

Research has already shown that interaction with animals can have a beneficial effect on older adults, increasing their social behaviour and verbal interaction and decreasing feelings of loneliness. However, the presence of animals in residential care home settings can place residents at risk of infection or injury and create additional duties for nursing staff.

This latest study suggests that PARO companions elicit a similar response and could potentially be used in residential settings to help reduce some of the symptoms — such as agitation, aggression, isolation and loneliness — of dementia.

Prof Cook, Professor of Nursing at Northumbria University, said: “Our study provides important preliminary support for the idea that robots may present a supplement to activities currently in use and could enhance the life of older adults as therapeutic companions and, in particular, for those with moderate or severe cognitive impairment.

“There is a need for further research, with a larger sample size, and an argument for investing in interventions such as PARO robots which may reduce dementia-related behaviours that make the provision of care challenging as well as costly due to increased use of staff resources and pharmaceutical treatment.”

The researchers of the pilot study have identified the need to undertake a larger trial in order to increase the data available. Future studies will also compare the effect of the robot companions with live animals.

Journal Reference:

  1. Wendy Moyle, Marie Cooke, Elizabeth Beattie, Cindy Jones, Barbara Klein, Glenda Cook, Chrystal Gray. Exploring the Effect of Companion Robots on Emotional Expression in Older Adults with Dementia: A Pilot Randomized Controlled Trial. Journal of Gerontological Nursing, 2013; 39 (5): 46. DOI: 10.3928/00989134-20130313-03

The Science of Why We Don’t Believe Science (Mother Jones)

How our brains fool us on climate, creationism, and the end of the world.

By  | Mon Apr. 18, 2011 3:00 AM PDT


“A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” So wrote the celebrated Stanford University psychologist Leon Festinger [1] (PDF), in a passage that might have been referring to climate change denial—the persistent rejection, on the part of so many Americans today, of what we know about global warming and its human causes. But it was too early for that—this was the 1950s—and Festinger was actually describing a famous case study [2] in psychology.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area cult whose members thought they were communicating with aliens—including one, “Sananda,” who they believed was the astral incarnation of Jesus Christ. The group was led by Dorothy Martin, a Dianetics devotee who transcribed the interstellar messages through automatic writing.

Through her, the aliens had given the precise date of an Earth-rending cataclysm: December 21, 1954. Some of Martin’s followers quit their jobs and sold their property, expecting to be rescued by a flying saucer when the continent split asunder and a new sea swallowed much of the United States. The disciples even went so far as to remove brassieres and rip zippers out of their trousers—the metal, they believed, would pose a danger on the spacecraft.

Festinger and his team were with the cult when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) did not show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react, now that it had been soundly refuted?

Read also: the truth about Climategate [3], [4].

At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they’d all been spared at the last minute. Festinger summarized the extraterrestrials’ new pronouncement: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved Earth from the prophecy!

From that day forward, the Seekers, previously shy of the press and indifferent toward evangelizing, began to proselytize. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed had made them even more certain of their beliefs.

In the annals of denial, it doesn’t get much more extreme than the Seekers. They lost their jobs, the press mocked them, and there were efforts to keep them away from impressionable young minds. But while Martin’s space cult might lie at the far end of the spectrum of human self-delusion, there’s plenty to go around. And since Festinger’s day, an array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called “motivated reasoning [5]” helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, “death panels,” the birthplace and religion of the president [6] (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.

The theory of motivated reasoning builds on a key insight of modern neuroscience [7] (PDF): Reasoning is actually suffused with emotion (or what researchers often call “affect”). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we’re aware of it. That shouldn’t be surprising: Evolution required us to react very quickly to stimuli in our environment. It’s a “basic human survival skill,” explains political scientist Arthur Lupia [8] of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.

We apply fight-or-flight reflexes not only to predators, but to data itself.

We’re not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn’t take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that’s highly biased, especially on topics we care a great deal about.

Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber [9] of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”

In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt [10]: We may think we’re being scientists, but we’re actually being lawyers [11] (PDF). Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

That’s a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully, I can go to great lengths to explain away behavior that seems obvious to everybody else—everybody who isn’t too emotionally invested to accept it, anyway. That’s not to suggest that we aren’t also motivated to perceive the world accurately—we are. Or that we never change our minds—we do. It’s just that we have other important goals besides accuracy—including identity affirmation and protecting one’s sense of self—and often those make us highly resistant to changing our beliefs when the facts say we should.

Modern science originated from an attempt to weed out such subjective lapses—what that great 17th century theorist of the scientific method, Francis Bacon, dubbed the “idols of the mind.” Even if individual researchers are prone to falling in love with their own theories, the broader processes of peer review and institutionalized skepticism are designed to ensure that, eventually, the best ideas prevail.

Scientific evidence is highly susceptible to misinterpretation. Giving ideologues scientific data that’s relevant to their beliefs is like unleashing them in the motivated-reasoning equivalent of a candy store.

Our individual responses to the conclusions that science reaches, however, are quite another matter. Ironically, in part because researchers employ so much nuance and strive to disclose all remaining sources of uncertainty, scientific evidence is highly susceptible to selective reading and misinterpretation. Giving ideologues or partisans scientific data that’s relevant to their beliefs is like unleashing them in the motivated-reasoning equivalent of a candy store.

Sure enough, a large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs. In a classic 1979 experiment [12] (PDF), pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies—and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” about affirmative action, gun control, the accuracy of gay stereotypes [13], and much else. Even when study subjects are explicitly instructed to be unbiased and even-handed about the evidence, they often fail.

And it’s not just that people twist or selectively read scientific evidence to support their preexisting views. According to research by Yale Law School professor Dan Kahan [14] and his colleagues, people’s deep-seated views about morality, and about the way society should be ordered, strongly predict whom they consider to be a legitimate scientific expert in the first place—and thus where they consider “scientific consensus” to lie on contested issues.

In Kahan’s research [15] (PDF), individuals are classified, based on their cultural values, as either “individualists” or “communitarians,” and as either “hierarchical” or “egalitarian” in outlook. (Somewhat oversimplifying, you can think of hierarchical individualists as akin to conservative Republicans, and egalitarian communitarians as liberal Democrats.) In one study, subjects in the different groups were asked to help a close friend determine the risks associated with climate change, sequestering nuclear waste, or concealed carry laws: “The friend tells you that he or she is planning to read a book about the issue but would like to get your opinion on whether the author seems like a knowledgeable and trustworthy expert.” A subject was then presented with the résumé of a fake expert “depicted as a member of the National Academy of Sciences who had earned a Ph.D. in a pertinent field from one elite university and who was now on the faculty of another.” The subject was then shown a book excerpt by that “expert,” in which the risk of the issue at hand was portrayed as high or low, well-founded or speculative. The results were stark: When the scientist’s position stated that global warming is real and human-caused, for instance, only 23 percent of hierarchical individualists agreed the person was a “trustworthy and knowledgeable expert.” Yet 88 percent of egalitarian communitarians accepted the same scientist’s expertise. Similar divides were observed on whether nuclear waste can be safely stored underground and whether letting people carry guns deters crime. (The alliances did not always hold. In another study [16] (PDF), hierarchs and communitarians were in favor of laws that would compel the mentally ill to accept treatment, whereas individualists and egalitarians were opposed.)

Head-on attempts to persuade can sometimes trigger a backfire effect, where people not only fail to change their minds when confronted with the facts—they may hold their wrong views more tenaciously than ever.

In other words, people rejected the validity of a scientific source because its conclusion contradicted their deeply held views—and thus the relative risks inherent in each scenario. A hierarchical individualist finds it difficult to believe that the things he prizes (commerce, industry, a man’s freedom to possess a gun to defend his family [16]) (PDF) could lead to outcomes deleterious to society. Whereas egalitarian communitarians tend to think that the free market causes harm, that patriarchal families mess up kids, and that people can’t handle their guns. The study subjects weren’t “anti-science”—not in their own minds, anyway. It’s just that “science” was whatever they wanted it to be. “We’ve come to a misadventure, a bad situation where diverse citizens, who rely on diverse systems of cultural certification, are in conflict,” says Kahan [17].

And that undercuts the standard notion that the way to persuade people is via evidence and argument. In fact, head-on attempts to persuade can sometimes trigger a backfire effect, where people not only fail to change their minds when confronted with the facts—they may hold their wrong views more tenaciously than ever.

Take, for instance, the question of whether Saddam Hussein possessed hidden weapons of mass destruction just before the US invasion of Iraq in 2003. When political scientists Brendan Nyhan and Jason Reifler showed subjects fake newspaper articles [18] (PDF) in which this was first suggested (in a 2004 quote from President Bush) and then refuted (with the findings of the Bush-commissioned Iraq Survey Group report, which found no evidence of active WMD programs in pre-invasion Iraq), they found that conservatives were more likely than before to believe the claim. (The researchers also tested how liberals responded when shown that Bush did not actually “ban” embryonic stem-cell research. Liberals weren’t particularly amenable to persuasion, either, but no backfire effect was observed.)

Another study gives some inkling of what may be going through people’s minds when they resist persuasion. Northwestern University sociologist Monica Prasad [19] and her colleagues wanted to test whether they could dislodge the notion that Saddam Hussein and Al Qaeda were secretly collaborating among those most likely to believe it—Republican partisans from highly GOP-friendly counties. So the researchers set up a study [20] (PDF) in which they discussed the topic with some of these Republicans in person. They would cite the findings of the 9/11 Commission, as well as a statement in which George W. Bush himself denied his administration had “said the 9/11 attacks were orchestrated between Saddam and Al Qaeda.”

One study showed that not even Bush’s own words could change the minds of Bush voters who believed there was an Iraq-Al Qaeda link.

As it turned out, not even Bush’s own words could change the minds of these Bush voters—just 1 of the 49 partisans who originally believed the Iraq-Al Qaeda claim changed his or her mind. Far more common was resisting the correction in a variety of ways, either by coming up with counterarguments or by simply being unmovable:

Interviewer: [T]he September 11 Commission found no link between Saddam and 9/11, and this is what President Bush said. Do you have any comments on either of those?

Respondent: Well, I bet they say that the Commission didn’t have any proof of it but I guess we still can have our opinions and feel that way even though they say that.

The same types of responses are already being documented on divisive topics facing the current administration. Take the “Ground Zero mosque.” Using information from the political myth-busting site FactCheck.org [21], a team at Ohio State presented subjects [22] (PDF) with a detailed rebuttal to the claim that “Feisal Abdul Rauf, the Imam backing the proposed Islamic cultural center and mosque, is a terrorist-sympathizer.” Yet among those who were aware of the rumor and believed it, fewer than a third changed their minds.

A key question—and one that’s difficult to answer—is how “irrational” all this is. On the one hand, it doesn’t make sense to discard an entire belief system, built up over a lifetime, because of some new snippet of information. “It is quite possible to say, ‘I reached this pro-capital-punishment decision based on real information that I arrived at over my life,'” explains Stanford social psychologist Jon Krosnick [23]. Indeed, there’s a sense in which science denial could be considered keenly “rational.” In certain conservative communities, explains Yale’s Kahan, “People who say, ‘I think there’s something to climate change,’ that’s going to mark them out as a certain kind of person, and their life is going to go less well.”

This may help explain a curious pattern Nyhan and his colleagues found when they tried to test the fallacy [6] (PDF) that President Obama is a Muslim. When a nonwhite researcher was administering their study, research subjects were amenable to changing their minds about the president’s religion and updating incorrect views. But when only white researchers were present, GOP survey subjects in particular were more likely to believe the Obama Muslim myth than before. The subjects were using “social desirability” to tailor their beliefs (or stated beliefs, anyway) to whoever was listening.

Which leads us to the media. When people grow polarized over a body of evidence, or a resolvable matter of fact, the cause may be some form of biased reasoning, but they could also be receiving skewed information to begin with—or a complicated combination of both. In the Ground Zero mosque case, for instance, a follow-up study [24] (PDF) showed that survey respondents who watched Fox News were more likely to believe the Rauf rumor and three related ones—and they believed them more strongly than non-Fox watchers.

Okay, so people gravitate toward information that confirms what they believe, and they select sources that deliver it. Same as it ever was, right? Maybe, but the problem is arguably growing more acute, given the way we now consume information—through the Facebook links of friends, or tweets that lack nuance or context, or “narrowcast [25]” and often highly ideological media that have relatively small, like-minded audiences. Those basic human survival skills of ours, says Michigan’s Arthur Lupia, are “not well-adapted to our information age.”

A predictor of whether you accept the science of global warming? Whether you’re a Republican or a Democrat.

If you wanted to show how and why fact is ditched in favor of motivated reasoning, you could find no better test case than climate change. After all, it’s an issue where you have highly technical information on one hand and very strong beliefs on the other. And sure enough, one key predictor of whether you accept the science of global warming is whether you’re a Republican or a Democrat. The two groups have been growing more divided in their views about the topic, even as the science becomes more unequivocal.

So perhaps it should come as no surprise that more education doesn’t budge Republican views. On the contrary: In a 2008 Pew survey [26], for instance, only 19 percent of college-educated Republicans agreed that the planet is warming due to human actions, versus 31 percent of non-college educated Republicans. In other words, a higher education correlated with an increased likelihood of denying the science on the issue. Meanwhile, among Democrats and independents, more education correlated with greater acceptance of the science.

Other studies have shown a similar effect: Republicans who think they understand the global warming issue best are least concerned about it; and among Republicans and those with higher levels of distrust of science in general, learning more about the issue doesn’t increase one’s concern about it. What’s going on here? Well, according to Charles Taber and Milton Lodge of Stony Brook, one insidious aspect of motivated reasoning is that political sophisticates are prone to be more biased than those who know less about the issues. “People who have a dislike of some policy—for example, abortion—if they’re unsophisticated they can just reject it out of hand,” says Lodge. “But if they’re sophisticated, they can go one step further and start coming up with counterarguments.” These individuals are just as emotionally driven and biased as the rest of us, but they’re able to generate more and better reasons to explain why they’re right—and so their minds become harder to change.

That may be why the selectively quoted emails of Climategate were so quickly and easily seized upon by partisans as evidence of scandal. Cherry-picking is precisely the sort of behavior you would expect motivated reasoners to engage in to bolster their views—and whatever you may think about Climategate, the emails were a rich trove of new information upon which to impose one’s ideology.

Climategate had a substantial impact on public opinion, according to Anthony Leiserowitz [27], director of the Yale Project on Climate Change Communication [28]. It contributed to an overall drop in public concern about climate change and a significant loss of trust in scientists. But—as we should expect by now—these declines were concentrated among particular groups of Americans: Republicans, conservatives, and those with “individualistic” values. Liberals and those with “egalitarian” values didn’t lose much trust in climate science or scientists at all. “In some ways, Climategate was like a Rorschach test,” Leiserowitz says, “with different groups interpreting ambiguous facts in very different ways.”

Is there a case study of science denial that largely occupies the political left? Yes: the claim that childhood vaccines are causing an epidemic of autism.

So is there a case study of science denial that largely occupies the political left? Yes: the claim that childhood vaccines are causing an epidemic of autism. Its most famous proponents are an environmentalist (Robert F. Kennedy Jr. [29]) and numerous Hollywood celebrities (most notably Jenny McCarthy [30] and Jim Carrey). The Huffington Post gives a very large megaphone to denialists. And Seth Mnookin [31], author of the new book The Panic Virus [32], notes that if you want to find vaccine deniers, all you need to do is go hang out at Whole Foods.

Vaccine denial has all the hallmarks of a belief system that’s not amenable to refutation. Over the past decade, the assertion that childhood vaccines are driving autism rates has been undermined [33] by multiple epidemiological studies—as well as the simple fact that autism rates continue to rise, even though the alleged offending agent in vaccines (a mercury-based preservative called thimerosal) has long since been removed.

Yet the true believers persist—critiquing each new study that challenges their views, and even rallying to the defense of vaccine-autism researcher Andrew Wakefield, after his 1998 Lancet paper [34]—which originated the current vaccine scare—was retracted and he subsequently lost his license [35] (PDF) to practice medicine. But then, why should we be surprised? Vaccine deniers created their own partisan media, such as the website Age of Autism, that instantly blast out critiques and counterarguments whenever any new development casts further doubt on anti-vaccine views.

It all raises the question: Do left and right differ in any meaningful way when it comes to biases in processing information, or are we all equally susceptible?

There are some clear differences. Science denial today is considerably more prominent on the political right—once you survey climate and related environmental issues, anti-evolutionism, attacks on reproductive health science by the Christian right, and stem-cell and biomedical matters. More tellingly, anti-vaccine positions are virtually nonexistent among Democratic officeholders today—whereas anti-climate-science views are becoming monolithic among Republican elected officials.

Some researchers have suggested that there are psychological differences between the left and the right that might impact responses to new information—that conservatives are more rigid and authoritarian, and liberals more tolerant of ambiguity. Psychologist John Jost of New York University has further argued that conservatives are “system justifiers”: They engage in motivated reasoning to defend the status quo.

This is a contested area, however, because as soon as one tries to psychoanalyze inherent political differences, a battery of counterarguments emerges: What about dogmatic and militant communists? What about how the parties have differed through history? After all, the most canonical case of ideologically driven science denial is probably the rejection of genetics in the Soviet Union, where researchers disagreeing with the anti-Mendelian scientist (and Stalin stooge) Trofim Lysenko were executed, and genetics itself was denounced as a “bourgeois” science and officially banned.

The upshot: All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?

We all have blinders in some situations. The question then becomes: What can be done to counteract human nature?

Given the power of our prior beliefs to skew how we respond to new information, one thing is becoming clear: If you want someone to accept new evidence, make sure to present it to them in a context that doesn’t trigger a defensive, emotional reaction.

This theory is gaining traction in part because of Kahan’s work at Yale. In one study [36], he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—”Scientific Panel Recommends Anti-Pollution Solution to Global Warming” and “Scientific Panel Recommends Nuclear Solution to Global Warming”—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.

You can follow the logic to its conclusion: Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a “culture war of fact.” In other words, paradoxically, you don’t lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.


Links:
[1] https://motherjones.com/files/lfestinger.pdf
[2] http://www.powells.com/biblio/61-9781617202803-1
[3] http://motherjones.com/environment/2011/04/history-of-climategate
[4] http://motherjones.com/environment/2011/04/field-guide-climate-change-skeptics
[5] http://www.ncbi.nlm.nih.gov/pubmed/2270237
[6] http://www-personal.umich.edu/~bnyhan/obama-muslim.pdf
[7] https://motherjones.com/files/descartes.pdf
[8] http://www-personal.umich.edu/~lupia/
[9] http://www.stonybrook.edu/polsci/ctaber/
[10] http://people.virginia.edu/~jdh6n/
[11] https://motherjones.com/files/emotional_dog_and_rational_tail.pdf
[12] http://synapse.princeton.edu/~sam/lord_ross_lepper79_JPSP_biased-assimilation-and-attitude-polarization.pdf
[13] http://psp.sagepub.com/content/23/6/636.abstract
[14] http://www.law.yale.edu/faculty/DKahan.htm
[15] https://motherjones.com/files/kahan_paper_cultural_cognition_of_scientific_consesus.pdf
[16] http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1095&context=fss_papers
[17] http://seagrant.oregonstate.edu/blogs/communicatingclimate/transcripts/Episode_10b_Dan_Kahan.html
[18] http://www-personal.umich.edu/~bnyhan/nyhan-reifler.pdf
[19] http://www.sociology.northwestern.edu/faculty/prasad/home.html
[20] http://sociology.buffalo.edu/documents/hoffmansocinquiryarticle_000.pdf
[21] http://www.factcheck.org/
[22] http://www.comm.ohio-state.edu/kgarrett/FactcheckMosqueRumors.pdf
[23] http://communication.stanford.edu/faculty/krosnick/
[24] http://www.comm.ohio-state.edu/kgarrett/MediaMosqueRumors.pdf
[25] http://en.wikipedia.org/wiki/Narrowcasting
[26] http://people-press.org/report/417/a-deeper-partisan-divide-over-global-warming
[27] http://environment.yale.edu/profile/leiserowitz/
[28] http://environment.yale.edu/climate/
[29] http://www.huffingtonpost.com/robert-f-kennedy-jr-and-david-kirby/vaccine-court-autism-deba_b_169673.html
[30] http://www.huffingtonpost.com/jenny-mccarthy/vaccine-autism-debate_b_806857.html
[31] http://sethmnookin.com/
[32] http://www.powells.com/biblio/1-9781439158647-0
[33] http://discovermagazine.com/2009/jun/06-why-does-vaccine-autism-controversy-live-on/article_print
[34] http://www.thelancet.com/journals/lancet/article/PIIS0140673697110960/fulltext
[35] http://www.gmc-uk.org/Wakefield_SPM_and_SANCTION.pdf_32595267.pdf
[36] http://www.scribd.com/doc/3446682/The-Second-National-Risk-and-Culture-Study-Making-Sense-of-and-Making-Progress-In-The-American-Culture-War-of-Fact

Climate change poses grave threat to security, says UK envoy (The Guardian)

Rear Admiral Neil Morisetti, special representative to foreign secretary, says governments can’t afford to wait for 100% certainty

The Guardian, Sunday 30 June 2013 18.19 BST

Flooding in Thailand in 2011. Photograph: Narong Sangnak/EPA

Climate change poses as grave a threat to the UK’s security and economic resilience as terrorism and cyber-attacks, according to a senior military commander who was appointed as William Hague’s climate envoy this year.

In his first interview since taking up the post, Rear Admiral Neil Morisetti said climate change was “one of the greatest risks we face in the 21st century”, particularly because it presented a global threat. “By virtue of our interdependencies around the world, it will affect all of us,” he said.

He argued that climate change was a potent threat multiplier at choke points in the global trade network, such as the Straits of Hormuz, through which much of the world’s traded oil and gas is shipped.

Morisetti left a 37-year naval career to become the foreign secretary’s special representative for climate change, and represents the growing influence of hard-headed military thinking in the global warming debate.

The link between climate change and global security risks is on the agenda of the UK’s presidency of the G8, including a meeting to be chaired by Morisetti in July that will include assessment of hotspots where climate stress is driving migration.

Morisetti’s central message was simple and stark: “The areas of greatest global stress and greatest impacts of climate change are broadly coincidental.”

He said governments could not afford to wait until they had all the information they might like. “If you wait for 100% certainty on the battlefield, you’ll be in a pretty sticky state,” he said.

The increased threat posed by climate change arises because droughts, storms and floods are exacerbating water, food, population and security tensions in conflict-prone regions.

“Just because it is happening 2,000 miles away does not mean it is not going to affect the UK in a globalised world, whether it is because food prices go up, or because increased instability in an area – perhaps around the Middle East or elsewhere – causes instability in fuel prices,” Morisetti said.

“In fact it is already doing so,” he added, noting that Toyota’s UK car plants had been forced to switch to a three-day week after extreme floods in Thailand cut the supply chain. Computer firms in California and Poland were left short of microchips by the same floods.

Morisetti is far from the only military figure emphasising the climate threat to security. America’s top officer tackling the threat from North Korea and China has said the biggest long-term security issue in the region is climate change.

In a recent interview, Admiral Samuel J Locklear III, who led the US naval action in Libya that helped topple Muammar Gaddafi, said a significant event related to the warming planet was “the most likely thing that is going to happen that will cripple the security environment, probably more likely than the other scenarios we all often talk about”.

There is a reason why the military are so clear-headed about the climate threat, according to Professor John Schellnhuber, a scientist who briefed the UN security council on the issue in February and formerly advised the German chancellor, Angela Merkel.

“The military do not deal with ideology. They cannot afford to: they are responsible for the lives of people and billions of pounds of investment in equipment,” he said. “When the climate change deniers took their stance after the Copenhagen summit in 2009, it is very interesting that the military people were never shaken from the idea that we are about to enter a very difficult period.”

He added: “This danger of the creation of violent conflicts is the strongest argument why we should keep climate change under control, because the international system is not stable, and the slightest thing, like the food riots in the Middle East, could make the whole system explode.”

The military has been quietly making known its concern about the climate threat to security for some time. General Wesley Clark, who commanded the Nato bombing of Yugoslavia during the Kosovo war, said in 2005: “Stopping global warming is not just about saving the environment, it’s about securing America for our children and our children’s children, as well.”

In the same year Chuck Hagel, now Obama’s defence secretary, said: “I don’t think you can separate environmental policy from economic policy or energy policy.”

Morisetti said there was also a direct link between climate change and the military because of the latter’s huge reliance on fossil fuels. “In Afghanistan, where we have had to import all our energy into the country along a single route that has been disrupted, the US military have calculated that for every 24 convoys there has been a casualty. There is a cost associated in bringing in that energy in both blood and treasure.

“So to drive up efficiency and to use alternative fuels, wind, solar, makes eminent sense to the military,” he said, noting that the use of solar blankets in Afghanistan meant fewer fuel resupply missions. “The principles of delivering your outputs more effectively, reducing your risks and reducing your costs reads across far more widely than just the military: most businesses would be looking for that too.”

Morisetti’s former employer, the Ministry of Defence, agrees that the climate threat is a serious one. The last edition of the Global Strategic Trends analysis published by the MoD’s Development, Concepts and Doctrine Centre concludes: “Climate change will amplify existing social, political and resource stresses, shifting the tipping point at which conflict ignites … Out to 2040, there are few convincing reasons to suggest that the world will become more peaceful.”

Schellnhuber was also clear about the consequences of failing to curb global warming. “The last 11,000 years – the Holocene – was characterised by the extreme stability of global climate. It is the only period when human civilisation could have developed at all,” he said. “But I don’t think a global, interconnected world can be managed in peace if climate change means we are leaving the Holocene. Let’s pray we will have a Lincoln or a Gorbachev to lead us.”

Brazilian center quadruples weather forecast precision (JC/O Globo)

JC e-mail 4746, June 13, 2013.

CPTEC's new model, which runs on the Tupã supercomputer, can map the weather at a resolution of five square kilometers. Report from O Globo

The eyes of weather forecasting in Brazil now see better: four times more precisely, to be exact. The Center for Weather Forecasting and Climate Studies (CPTEC/INPE) has released an update to its Brams forecast model, now boosted by the high processing capacity of the Tupã supercomputer installed in Cachoeira Paulista. Previously, Brams produced forecasts of up to one week at a sharpness of 20 square kilometers. Now the resolution is 5 square kilometers for the same seven days.

With the new version, the level of detail in the forecast, which was previously limited to a city or region, can now distinguish one neighborhood from another. Consulting the new meteorological model is free of charge on the CPTEC website.

To cover all of South America, Brams divides the territory like a giant game of Battleship, with 1,360 by 1,480 grid cells. Because it is a three-dimensional model, there are also 55 vertical levels for each of these cells. In total, that makes 110 million points, processed simultaneously on Tupã's 9,600 processors.
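The arithmetic behind those figures is easy to check. The sketch below is a toy calculation, not INPE/CPTEC code; the constants are simply the numbers quoted in the article, and the even split across processors is an illustrative simplification.

```python
# Toy check of the grid size described above; constants come from the article,
# and the uniform division of work across processors is a simplification.
HORIZONTAL_X = 1360      # east-west grid cells over South America
HORIZONTAL_Y = 1480      # north-south grid cells
VERTICAL_LEVELS = 55     # vertical levels per column
PROCESSORS = 9600        # Tupã processors working in parallel

total_cells = HORIZONTAL_X * HORIZONTAL_Y * VERTICAL_LEVELS
print(f"total grid cells: {total_cells:,}")                             # ~110.7 million
print(f"average cells per processor: {total_cells / PROCESSORS:,.0f}")  # ~11,500
```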

According to CPTEC, version 5.0 of Brams puts Brazil on a competitive footing with the world's leading operational centers. The forecast center of the National Centers for Environmental Prediction (NCEP), for example, generates forecasts from a similar model, the National Mesoscale Model, at 4 kilometers, with 70 vertical levels and a grid of 1,371 x 1,100 cells covering the entire continental United States.

To develop this new version of the BRAMS model, which is also used to forecast and monitor air pollution, data from weather stations across the country, satellites, ocean buoys and aircraft imagery are used.

http://oglobo.globo.com/ciencia/centro-brasileiro-aumenta-em-quatro-vezes-precisao-da-previsao-do-tempo-8667823#ixzz2W6YkxWkF

* * *

Inpe launches very high resolution weather forecast model

The new model covers all of South America

A new version of the regional Brams weather forecast model, covering all of South America, has been released by the Center for Weather Forecasting and Climate Studies (CPTEC) of the National Institute for Space Research (Inpe/MCTI). Brams version 5.0 is already operational for forecasts of up to seven days.

The model generates forecasts with a spatial resolution of 5 kilometers, whereas the previous version provided forecasts at a resolution of 20 kilometers. The advance was only possible thanks to the high processing capacity of Inpe's new Cray supercomputer, Tupã, installed at CPTEC in Cachoeira Paulista (SP).

The development work to make the new version of Brams operational took about a year. Covering the full extent of South America required 1,360 x 1,480 horizontal cells and 55 vertical levels. The grid cells, roughly 110 million in total, are processed simultaneously on the Cray's 9,600 processors in parallel.

This effort, coordinated by the Atmospheric Modeling and Interfaces Group (Gmai), has put CPTEC on a competitive footing with the world's leading operational centers. The forecast center of the National Centers for Environmental Prediction (NCEP), for example, generates forecasts from a similar model, the National Mesoscale Model, at 4 kilometers, with 70 vertical levels and a grid of 1,371 x 1,100 cells covering the entire continental United States.

To develop this new version of the Brams model, which is also used to forecast and monitor air pollution, a non-hydrostatic formulation was adopted, one that represents smaller-scale physical processes, such as the development and dissipation of clouds and rain, with greater precision. Several advances in parameterization (mathematical representations of physical processes) were made for clouds, solar radiation, and surface processes and dynamics.

(Inpe Communications Office)

When Will My Computer Understand Me? (Science Daily)

June 10, 2013 — It’s not hard to tell the difference between the “charge” of a battery and criminal “charges.” But for computers, distinguishing between the various meanings of a word is difficult.

A “charge” can be a criminal charge, an accusation, a battery charge, or a person in your care. Some of those meanings are closer together, others further apart. (Credit: Image courtesy of University of Texas at Austin, Texas Advanced Computing Center)

For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Driven initially by efforts to translate Russian scientific texts during the Cold War (and more recently by the value of information retrieval and data analysis tools), these efforts have met with mixed success. IBM’s Jeopardy-winning Watson system and Google Translate are high-profile, successful applications of language technologies, but the humorous answers and mistranslations they sometimes produce are evidence of the continuing difficulty of the problem.

Our ability to easily distinguish between multiple word meanings is rooted in a lifetime of experience. Using the context in which a word is used, an intrinsic understanding of syntax and logic, and a sense of the speaker’s intention, we intuit what another person is telling us.

“In the past, people have tried to hand-code all of this knowledge,” explained Katrin Erk, a professor of linguistics at The University of Texas at Austin focusing on lexical semantics. “I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know.”

Other efforts have tried to use dictionary meanings to train computers to better understand language, but these attempts have also faced obstacles. Dictionaries have their own sense distinctions, which are crystal clear to the dictionary-maker but murky to the dictionary reader. Moreover, no two dictionaries provide the same set of meanings.

Watching annotators struggle to make sense of conflicting definitions led Erk to try a different tactic. Instead of hard-coding human logic or deciphering dictionaries, why not mine a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a weighted map of relationships — a dictionary without a dictionary?

“An intuition for me was that you could visualize the different meanings of a word as points in space,” she said. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”

To create a model that can accurately recreate the intuitive ability to distinguish word meaning requires a lot of text and a lot of analytical horsepower.

“The lower end for this kind of research is a text collection of 100 million words,” she explained. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers and Hadoop come in.”

Applying Computational Horsepower

Erk initially conducted her research on desktop computers, but around 2009, she began using the parallel computing systems at the Texas Advanced Computing Center (TACC). Access to a special Hadoop-optimized subsystem on TACC’s Longhorn supercomputer allowed Erk and her collaborators to expand the scope of their research. Hadoop is a software architecture well suited to text analysis and the data mining of unstructured data that can also take advantage of large computer clusters. Computational models that take weeks to run on a desktop computer can run in hours on Longhorn. This opened up new possibilities.

“In a simple case we count how often a word occurs in close proximity to other words. If you’re doing this with one billion words, do you have a couple of days to wait to do the computation? It’s no fun,” Erk said. “With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster. That enabled us to use larger amounts of data and develop better models.”
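
A minimal single-machine sketch of the kind of counting Erk describes (a window-based co-occurrence count in plain Python, not the Hadoop pipeline used on Longhorn; the tiny corpus and window size are made up for illustration) might look like this:

from collections import defaultdict

def cooccurrence_counts(tokens, window=2):
    # Count how often each word occurs within `window` positions of another word.
    counts = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[word][tokens[j]] += 1
    return counts

corpus = "the judge read the criminal charges while the phone battery charge ran low".split()
print(dict(cooccurrence_counts(corpus)["charge"]))  # neighbors of "charge": phone, battery, ran, low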

Treating words in a relational, non-fixed way corresponds to emerging psychological notions of how the mind deals with language and concepts in general, according to Erk. Instead of rigid definitions, concepts have “fuzzy boundaries” where the meaning, value and limits of the idea can vary considerably according to the context or conditions. Erk takes this idea of language and recreates a model of it from hundreds of thousands of documents.

Say That Another Way

So how can we describe word meanings without a dictionary? One way is to use paraphrases. A good paraphrase is one that is “close to” the word meaning in that high-dimensional space that Erk described.

“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” Erk explained. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
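
A toy illustration of that idea (not Erk's actual 10,000-dimensional model; the three-dimensional vectors below are invented for the example) ranks candidate paraphrases by cosine similarity to the word's in-context vector:

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical vector for "bright" as used in "a bright child", plus two candidate paraphrases
bright_in_context = [0.9, 0.1, 0.2]
candidates = {"intelligent": [0.8, 0.2, 0.1], "glaring": [0.1, 0.9, 0.7]}

for word, vec in sorted(candidates.items(), key=lambda kv: -cosine(bright_in_context, kv[1])):
    print(word, round(cosine(bright_in_context, vec), 2))
# "intelligent" scores far higher than "glaring", so it is the better paraphrase in this context.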

Language technology already helps millions of people perform practical and valuable tasks every day via web searches and question-answer systems, but it is poised for even more widespread applications.

Automatic information extraction is an application where Erk’s paraphrasing research may be critical. Say, for instance, you want to extract a list of diseases, their causes, symptoms and cures from millions of pages of medical information on the web.

“Researchers use slightly different formulations when they talk about diseases, so knowing good paraphrases would help,” Erk said.

In a paper to appear in ACM Transactions on Intelligent Systems and Technology, Erk and her collaborators showed that they could achieve state-of-the-art results with their automatic paraphrasing approach.

Recently, Erk and Ray Mooney, a computer science professor also at The University of Texas at Austin, were awarded a grant from the Defense Advanced Research Projects Agency to combine Erk’s distributional, high dimensional space representation of word meanings with a method of determining the structure of sentences based on Markov logic networks.

“Language is messy,” said Mooney. “There is almost nothing that is true all the time. When we ask, ‘How similar is this sentence to another sentence?’ our system turns that question into a probabilistic theorem-proving task, and that task can be very computationally complex.”

In their paper, “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics (STARSEM 2013) in June, Erk, Mooney and colleagues announced their results on a number of challenge problems from the field of artificial intelligence.

In one problem, Longhorn was given a sentence and had to infer whether another sentence was true based on the first. Using an ensemble of different sentence parsers, word meaning models and Markov logic implementations, Mooney and Erk’s system predicted the correct answer with 85% accuracy, close to the best results reported for this challenge. They continue to work to improve the system.

There is a common saying in the machine-learning world that goes: “There’s no data like more data.” While more data helps, taking advantage of that data is key.

“We want to get to a point where we don’t have to learn a computer language to communicate with a computer. We’ll just tell it what to do in natural language,” Mooney said. “We’re still a long way from having a computer that can understand language as well as a human being does, but we’ve made definite progress toward that goal.”

You’re So Vain: Study Links Social Media Use and Narcissism (Science Daily)

June 11, 2013 — Facebook is a mirror and Twitter is a megaphone, according to a new University of Michigan study exploring how social media reflect and amplify the culture’s growing levels of narcissism.

New research shows that narcissistic college students and their adult counterparts use social media in different ways to boost their egos and control others’ perceptions of them. (Credit: © mtkang / Fotolia)

The study, published online in Computers in Human Behavior, was conducted by U-M researchers Elliot Panek, Yioryos Nardis and Sara Konrath.

“Among young adult college students, we found that those who scored higher in certain types of narcissism posted more often on Twitter,” said Panek, who recently received his doctorate in communication studies from U-M and will join Drexel University this fall as a visiting fellow.

“But among middle-aged adults from the general population, narcissists posted more frequent status updates on Facebook.”

According to Panek, Facebook serves narcissistic adults as a mirror.

“It’s about curating your own image, how you are seen, and also checking on how others respond to this image,” he said. “Middle-aged adults usually have already formed their social selves, and they use social media to gain approval from those who are already in their social circles.”

For narcissistic college students, the social media tool of choice is the megaphone of Twitter.

“Young people may overvalue the importance of their own opinions,” Panek said. “Through Twitter, they’re trying to broaden their social circles and broadcast their views about a wide range of topics and issues.”

The researchers examined whether narcissism was related to the amount of daily Facebook and Twitter posting and to the amount of time spent on each social media site, including reading the posts and comments of others.

For one part of the study, the researchers recruited 486 college undergraduates. Three-quarters were female and the median age was 19. Participants answered questions about the extent of their social media use, and also took a personality assessment measuring different aspects of narcissism, including exhibitionism, exploitativeness, superiority, authority and self-sufficiency.

For the second part of the study, the researchers asked 93 adults, mostly white females, with an average age of 35, to complete an online survey.

According to Panek, the study shows that narcissistic college students and their adult counterparts use social media in different ways to boost their egos and control others’ perceptions of them.

“It’s important to analyze how often social media users actually post updates on sites, along with how much time they spend reading the posts and comments of others,” he said.

The researchers were unable to determine whether narcissism leads to increased use of social media, or whether social media use promotes narcissism, or whether some other factors explain the relationship. But the study is among the first to compare the relationship between narcissism and different kinds of social media in different age groups.

Funding for the study comes in part from The Character Project, sponsored by Wake Forest University via the John Templeton Foundation.

Journal Reference:

  1. Elliot T. Panek, Yioryos Nardis, Sara Konrath. Mirror or Megaphone?: How relationships between narcissism and social networking site use differ on Facebook and Twitter. Computers in Human Behavior, 2013; 29 (5): 2004. DOI: 10.1016/j.chb.2013.04.012

Climate Researchers Discover New Rhythm for El Niño (Science Daily)

May 27, 2013 — El Niño wreaks havoc across the globe, shifting weather patterns that spawn droughts in some regions and floods in others. The impacts of this tropical Pacific climate phenomenon are well known and documented.

This is a schematic figure for the suggested generation mechanism of the combination tone: The annual cycle (Tone 1), together with the El Niño sea surface temperature anomalies (Tone 2) produce the combination tone. (Credit: Malte Stuecker)

A mystery, however, has remained despite decades of research: Why does El Niño always peak around Christmas and end quickly by February to April?

Now there is an answer: An unusual wind pattern that straddles the equatorial Pacific during strong El Niño events and swings back and forth with a period of 15 months explains El Niño’s close ties to the annual cycle. This finding is reported in the May 26, 2013, online issue of Nature Geoscience by scientists from the University of Hawai’i at Manoa Meteorology Department and International Pacific Research Center.

“This atmospheric pattern peaks in February and triggers some of the well-known El Niño impacts, such as droughts in the Philippines and across Micronesia and heavy rainfall over French Polynesia,” says lead author Malte Stuecker.

When anomalous trade winds shift south they can terminate an El Niño by generating eastward propagating equatorial Kelvin waves that eventually resume upwelling of cold water in the eastern equatorial Pacific. This wind shift is part of the larger, unusual atmospheric pattern accompanying El Niño events, in which a high-pressure system hovers over the Philippines and the major rain band of the South Pacific rapidly shifts equatorward.

With the help of numerical atmospheric models, the scientists discovered that this unusual pattern originates from an interaction between El Niño and the seasonal evolution of temperatures in the western tropical Pacific warm pool.

“Not all El Niño events are accompanied by this unusual wind pattern,” notes Malte Stuecker, “but once El Niño conditions reach a certain threshold amplitude during the right time of the year, it is like a jack-in-the-box whose lid pops open.”

A study of the evolution of the anomalous wind pattern in the model reveals a rhythm of about 15 months accompanying strong El Niño events, which is considerably faster than the three- to five-year timetable for El Niño events, but slower than the annual cycle.

“This type of variability is known in physics as a combination tone,” says Fei-Fei Jin, professor of Meteorology and co-author of the study. Combination tones have been known for more than three centuries. They were discovered by the violinist Tartini, who realized that our ear can create a third tone even though only two tones are played on a violin.

“The unusual wind pattern straddling the equator during an El Niño is such a combination tone between El Niño events and the seasonal march of the sun across the equator,” says co-author Axel Timmermann, climate scientist at the International Pacific Research Center and professor at the Department of Oceanography, University of Hawai’i. He adds, “It turns out that many climate models have difficulties creating the correct combination tone, which is likely to impact their ability to simulate and predict El Niño events and their global impacts.”
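
As a rough worked example of the arithmetic behind such a combination tone (the El Niño period below is an assumption chosen for illustration from the three-to-five-year range quoted above, not a value from the study), taking an annual cycle of 12 months and an El Niño period of 60 months gives a difference tone of

1/12 − 1/60 = 5/60 − 1/60 = 4/60 = 1/15 cycles per month,

that is, a period of roughly 15 months; a shorter, 36-month El Niño period would instead give 1/12 − 1/36 = 1/18, or about 18 months, so the exact rhythm depends on the El Niño period assumed.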

The scientists are convinced that a better representation of the 15-month tropical Pacific wind pattern in climate models will improve El Niño forecasts. Moreover, they say the latest climate model projections suggest that El Niño events will be accompanied more often by this combination tone wind pattern, which will also change the characteristics of future El Niño rainfall patterns.

Journal Reference:

  1. Malte F. Stuecker, Axel Timmermann, Fei-Fei Jin, Shayne McGregor, Hong-Li Ren. A combination mode of the annual cycle and the El Niño/Southern Oscillation. Nature Geoscience, 2013; DOI: 10.1038/ngeo1826

Global Warming Caused by CFCs, Not Carbon Dioxide, Researcher Claims in Controversial Study (Science Daily)

May 30, 2013 — Chlorofluorocarbons (CFCs) are to blame for global warming since the 1970s and not carbon dioxide, according to a researcher from the University of Waterloo in a controversial new study published in the International Journal of Modern Physics B this week.

Annual global temperature over land and ocean. (Credit: Image by Q.-B. Lu)

CFCs are already known to deplete ozone, but in-depth statistical analysis now suggests that CFCs are also the key driver in global climate change, rather than carbon dioxide (CO2) emissions, the researcher argues.

“Conventional thinking says that the emission of human-made non-CFC gases such as carbon dioxide has mainly contributed to global warming. But we have observed data going back to the Industrial Revolution that convincingly shows that conventional understanding is wrong,” said Qing-Bin Lu, a professor of physics and astronomy, biology and chemistry in Waterloo’s Faculty of Science. “In fact, the data shows that CFCs conspiring with cosmic rays caused both the polar ozone hole and global warming.”

“Most conventional theories expect that global temperatures will continue to increase as CO2 levels continue to rise, as they have done since 1850. What’s striking is that since 2002, global temperatures have actually declined — matching a decline in CFCs in the atmosphere,” Professor Lu said. “My calculations of CFC greenhouse effect show that there was global warming by about 0.6 °C from 1950 to 2002, but the earth has actually cooled since 2002. The cooling trend is set to continue for the next 50-70 years as the amount of CFCs in the atmosphere continues to decline.”

The findings are based on in-depth statistical analyses of observed data from 1850 up to the present time, Professor Lu’s cosmic-ray-driven electron-reaction (CRE) theory of ozone depletion and his previous research into Antarctic ozone depletion and global surface temperatures.

“It was generally accepted for more than two decades that the Earth’s ozone layer was depleted by the sun’s ultraviolet light-induced destruction of CFCs in the atmosphere,” he said. “But in contrast, CRE theory says cosmic rays — energy particles originating in space — play the dominant role in breaking down ozone-depleting molecules and then ozone.”

Lu’s theory has been confirmed by ongoing observations of cosmic ray, CFC, ozone and stratospheric temperature data over several 11-year solar cycles. “CRE is the only theory that provides us with an excellent reproduction of 11-year cyclic variations of both polar ozone loss and stratospheric cooling,” said Professor Lu. “After removing the natural cosmic-ray effect, my new paper shows a pronounced recovery by ~20% of the Antarctic ozone hole, consistent with the decline of CFCs in the polar stratosphere.”

By demonstrating the link between CFCs, ozone depletion and temperature changes in the Antarctic, Professor Lu was able to draw an almost perfect correlation between rising global surface temperatures and CFCs in the atmosphere.

“The climate in the Antarctic stratosphere has been completely controlled by CFCs and cosmic rays, with no CO2 impact. The change in global surface temperature after the removal of the solar effect has shown zero correlation with CO2 but a nearly perfect linear correlation with CFCs — a correlation coefficient as high as 0.97.”

Data recorded from 1850 to 1970, before any significant CFC emissions, show that CO2 levels increased significantly as a result of the Industrial Revolution, but the global temperature, excluding the solar effect, kept nearly constant. The conventional CO2 warming model suggests that temperatures should have risen by 0.6°C over the same period, similar to the period of 1970-2002.

The analyses support Lu’s CRE theory and point to the success of the Montreal Protocol on Substances that Deplete the Ozone Layer.

“We’ve known for some time that CFCs have a really damaging effect on our atmosphere and we’ve taken measures to reduce their emissions,” Professor Lu said. “We now know that international efforts such as the Montreal Protocol have also had a profound effect on global warming but they must be placed on firmer scientific ground.”

“This study underlines the importance of understanding the basic science underlying ozone depletion and global climate change,” said Terry McMahon, dean of the faculty of science. “This research is of particular importance not only to the research community, but to policy makers and the public alike as we look to the future of our climate.”

Professor Lu’s paper, “Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change,” also predicts that the global sea level will continue to rise for some years as the ozone hole recovers, increasing ice melting in the polar regions.

“Only when the effect of the global temperature recovery dominates over that of the polar ozone hole recovery, will both temperature and polar ice melting drop concurrently,” says Lu.

The peer-reviewed paper published this week not only provides new fundamental understanding of the ozone hole and global climate change but has superior predictive capabilities, compared with the conventional sunlight-driven ozone-depleting and CO2-warming models, Lu argues.

Journal Reference:

  1. Q.-B. Lu. Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change. International Journal of Modern Physics B, 2013; 1350073. DOI: 10.1142/S0217979213500732

Scientists develop a social media simulator (Fapesp)

Created by researchers at IBM and at USP's Institute of Mathematics and Statistics, the system will make it possible to predict the impact of communication campaigns on networks such as Twitter and Facebook

04/06/2013

Elton Alisson

Agência FAPESP – The diffusion power and propagation speed of information on social media have sparked the interest of companies and organizations in running communication campaigns on platforms such as Twitter and Facebook.

One of the challenges they face when making that decision, however, is predicting the impact the campaigns will have on these social media, since they have a highly “viral” effect – information spreads through them very quickly, and it is hard to estimate the repercussions it will have.

“Where before a person passed a piece of information along by word of mouth to three or four other people, they now have an audience that can reach thousands of followers over the internet. Hence the difficulty of predicting the impact of a campaign on a social network,” Claudio Pinhanez, leader of the service systems research group at IBM Research – Brazil, the American information technology company's Brazilian research laboratory, told Agência FAPESP.

To try to answer this challenge, the group started a project in partnership with researchers from the Department of Computer Science of the Institute of Mathematics and Statistics (IME) of the University of São Paulo (USP) to develop a simulator capable of predicting the impact of communication campaigns on social media based on users' behavior patterns.

The project's first results were presented in early May at the 14th International Workshop on Multi-Agent-Based Simulation, held in Saint Paul, Minnesota, in the United States, and later at the Latin American eScience Workshop 2013, which took place on May 14 and 15 at Espaço Apas in São Paulo.

Organized by FAPESP and Microsoft Research, the latter event brought together researchers and students from Europe, South and North America, Asia and Oceania to discuss advances in various fields of knowledge made possible by improved capacity to analyze the large volumes of information produced by research projects.

According to Pinhanez, to develop an initial method for modeling and simulating interactions among social network users, the researchers collected messages published by 25,000 people in the Twitter networks of the president of the United States, Barack Obama, and of his political rival, Mitt Romney, in October 2012, the final month of the recent U.S. presidential campaign.

The researchers analyzed the content of the messages and the behavior of users in Obama's and Romney's networks in order to identify patterns of action, how often they posted messages, whether those messages were mostly positive or negative, and what influence they had on other users.

Based on this dataset, they developed an agent-based simulation model – a system in which each user under study is represented by an individual computer program, all running together at the same time – that indicates each person's probabilities of acting on the network, pointing to the most likely time of day for them to post a positive or negative message based on their behavior history.
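
A minimal sketch of this kind of agent-based simulation (illustrative only; the hourly posting probabilities and sentiment mix below are invented, not the IBM/IME model fitted to the Twitter data) runs each user as an independent agent that decides, hour by hour, whether to post and with which tone:

import random

class UserAgent:
    def __init__(self, name, post_prob_by_hour, positive_prob):
        self.name = name
        self.post_prob_by_hour = post_prob_by_hour  # 24 probabilities, estimated from the user's history
        self.positive_prob = positive_prob          # chance that a post is positive rather than negative

    def act(self, hour):
        if random.random() < self.post_prob_by_hour[hour]:
            return "positive" if random.random() < self.positive_prob else "negative"
        return None  # the agent stays silent this hour

# Two hypothetical users: one active in the evening, one in the morning
agents = [
    UserAgent("evening_user", [0.01] * 18 + [0.20] * 6, positive_prob=0.7),
    UserAgent("morning_user", [0.15] * 8 + [0.01] * 16, positive_prob=0.4),
]

for hour in range(24):  # one simulated day
    for agent in agents:
        post = agent.act(hour)
        if post:
            print(f"{hour:02d}h  {agent.name} posts a {post} message")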

One of the findings from experiments with the simulator was that removing the ten users most engaged in the discussions on the president's Twitter network would have more impact on the social network than excluding Obama himself.

“These results are preliminary and we cannot yet say they are valid, because the model is still an early, very simple one. They do serve, however, to show that the model can reveal interesting situations and that, when it is ready, it will be very useful for testing hypotheses and answering questions such as ‘does the frequency with which President Obama posts a message affect his social network?’,” Pinhanez said.

IBM already had a system for “sentiment” analysis – as the classification of a message's tone is called – of large volumes of English-language text in a continuous stream (real-time information), which the company intends to improve and make available in Brazil.

“We are working to bring in a series of technologies and adapt them to the Portuguese language and to Brazilian culture, since Brazil is the second most engaged country in social networks in the world, behind only the United States,” Pinhanez said.

Challenges

According to the researchers, one of the main challenges for sentiment analysis of messages posted on social networks in Brazil is that the Portuguese used in these new media often does not follow the formal norms of the language, and this is not necessarily because users lack command of it.

“There are conventions for how to write in a cool way on social networks,” Pinhanez said. Because of this, one of the challenges in Brazil will be to incorporate the new vocabulary that has emerged in these forums.

In addition, the texts are shorter and more informal than those published on movie review sites such as the Internet Movie Database, where comments are longer, better formatted and labeled.

“Based on that kind of criterion, we can know in advance what the sentiment of the text is: if the user gave the movie many stars, they are speaking well of it; if they gave few stars, their review was negative,” said Samuel Martins Barbosa Neto, a doctoral student at IME and a participant in the project.

“The language used on Twitter is much more natural. There is a lot of expression and word variation, which makes classifying the messages much more complicated. Sometimes there is not enough information to be sure that a given tweet really is positive or negative, since it carries no label that allows it to be compared with others. That is why many of these messages have to be labeled manually,” Barbosa Neto explained.

Another challenge is extracting data from social networks. At first, access to message data from networks such as Twitter was completely open; today it is limited. Moreover, the amount of information generated by social networks has grown exponentially, leaving researchers with the challenge of extracting representative samples from large volumes of data to validate their research.

“Obama's Twitter network must have reached 25 million followers. Since we can only extract a small part of that data, the challenge is to ensure it is not biased – representing, for example, only one niche of followers – so as to produce a valid result,” Barbosa Neto explained.

Research collaboration

Roberto Marcondes Cesar Junior, a professor at IME-USP and Barbosa Neto's doctoral advisor, says the social network simulator project is the first his group has carried out in collaboration with IBM Research – Brazil.

The IME group has spent ten years developing data analysis projects that use statistical models in fields such as biology and medicine, for example to discover new genes and gene networks. More recently, it began research on applying mathematical models to the social sciences.

“We entered this area intending to apply the same mathematical and computational techniques to situations where the data come specifically from some human activity, rather than from the action of a gene or a protein, for example, and we saw the opportunity to apply these techniques to social networks, which, from an abstract point of view, have many similarities with a gene network, because both are networks that connect elements,” said Marcondes Cesar, who is a member of FAPESP's adjunct panel for Exact Sciences and Engineering and coordinates the Thematic Project “Modelos e métodos de e-Science para ciências da vida e agrárias” (e-Science models and methods for the life and agricultural sciences).

“While in a gene network the elements are genes, which exchange biochemical information, in a social network the members are users, who exchange text messages,” he said.

The partnership with IBM Research – Brazil, according to Marcondes Cesar, makes it possible to implement the tools developed at the university. To facilitate the project, the doctoral student he advises was hired by the company as an intern.

“We have done many projects in partnership with universities and research institutions. We strongly believe in open innovation and we work that way a great deal,” Pinhanez said.

According to Pinhanez, few research groups around the world have tried to develop a social media simulator, largely because of the difficulty of assembling a multidisciplinary research team.

“I think that, for the first time, the scientific community has something like a map of who knows whom in the world. It is still an incomplete map, full of errors and biased, but our work is one of the first simulations of the behavior of such a large number of people,” he said. “Before, when this was done, it was with at most 300 people, and you had to keep collecting data for years.”

The article “Large-Scale Multi-Agent-based Modeling and Simulation of Microblogging-based Online Social Network,” by Pinhanez and co-authors, can be read in the proceedings of the 14th International Workshop on Multi-Agent-Based Simulation.

Subcommittee Reviews Legislation to Improve Weather Forecasting (Subcommittee on Environment, House of Representatives, USA)

MAY 23, 2013

Washington, D.C. – The Subcommittee on Environment today held a hearing to examine ways to improve weather forecasting at the National Oceanic and Atmospheric Administration (NOAA). Witnesses provided testimony on draft legislation that would prioritize weather-related research at NOAA, in accordance with its critical mission to protect lives and property through enhanced weather forecasting. The hearing was timely given the recent severe tornadoes in the Midwest and superstorms like Hurricane Sandy.

Environment Subcommittee Chairman Chris Stewart (R-Utah): “We need a world-class system of weather prediction in the United States – one, as the National Academy of Sciences recently put it, that is ‘second to none.’ We can thank the hard-working men and women at  NOAA and their partners throughout the weather enterprise for the great strides that have been made in forecasting in recent decades.  But we can do better. And it’s not enough to blame failures on programming or sequestration or lack of other resources. As the events in Moore, Oklahoma have demonstrated, we have to do better. But the good news is that we can.”

Experts within the weather community have raised concern that the U.S. models for weather prediction have fallen behind Europe and other parts of the world in predicting weather events. The Weather Forecasting Improvement Act, draft legislation discussed at today’s hearing, would build upon the down payment made by Congress following Hurricane Sandy and restore the U.S. as a leader in this field through expanded computing capacity and data assimilation techniques.

Rep. Stewart: “The people of Moore, Oklahoma received a tornado warning 16 minutes before the twister struck their town. Tornado forecasting is difficult but lead times for storms have become gradually better. The draft legislation would prioritize investments in technology being developed at NOAA’s National Severe Storms Laboratory in Oklahoma, which ‘has the potential to provide revolutionary improvements in… tornado… warning lead times and accuracy, reducing false alarms’ and could move us toward the goal of being able to ‘warn on forecast.’”

The following witnesses testified today:

Mr. Barry Myers, Chief Executive Officer, AccuWeather, Inc.

Mr. Jon Kirchner, President, GeoOptics, Inc.

Geoengineering: Can We Save the Planet by Messing with Nature? (Democracy Now!)

Video: http://www.democracynow.org/2013/5/20/geoengineering_can_we_save_the_planet

Clive Hamilton, professor of public ethics at Charles Sturt University in Canberra, Australia. He is the author of the new book, Earthmasters: The Dawn of the Age of Climate Engineering.

“eScience is revolutionizing the way science is done” (Fapesp)

New computing tools make it possible to do science better, faster and with greater impact, says Tony Hey, vice president of Microsoft Research (photo: E.Cesar/FAPESP)

16/05/2013

By Elton Alisson

Agência FAPESP – A piece of software for visualizing astronomical data over the internet allows scientists in various parts of the world to access thousands of images of celestial objects collected by large space telescopes, by observatories and by international astronomy research institutions.

With these data, users can perform time-series analyses and combine observations made at various wavelengths of the energy radiated by celestial bodies, such as X-rays, infrared, ultraviolet and gamma radiation and radio waves, to elucidate the physical processes occurring inside these objects and share their conclusions.

Called the World Wide Telescope, the software, whose development began in 2002 at Microsoft Research in partnership with researchers at Johns Hopkins University in the United States, is an example of how new information and communication technologies (ICTs) have changed the way scientific data are generated, managed and shared, as well as the very way science is done today, says Tony Hey, vice president of Microsoft Research.

“Space telescopes, like genetic sequencing machines and particle accelerators, are generating a volume of data never seen before. To deal with this phenomenon and allow scientists to manipulate and share these data, we need a series of computer science technologies and tools that make it possible to do science better, faster and with greater impact. That is what we call eScience,” Hey said during the Latin American eScience Workshop 2013, held on May 14 and 15 at Espaço Apas in São Paulo.

Organized by FAPESP and Microsoft Research, the event brought together researchers and students from Europe, South and North America, Asia and Oceania to discuss advances in various fields of knowledge made possible by improved capacity to analyze the large volumes of information produced by research projects.

The event's opening ceremony was chaired by Celso Lafer, president of FAPESP, and was attended by Michel Levy, president of Microsoft Brasil, and by José Tadeu de Faria, superintendent of the Ministry of Agriculture, Livestock and Supply in the State of São Paulo, representing the minister.

Also known as data-driven science, the field of eScience integrates computing research with studies in the most varied areas through the development of specific software for visualizing and analyzing information.

This integration allows data to be interpreted, theories to be formulated, simulations to be run and new research hypotheses to be raised on the basis of correlations that would be difficult to observe without the support of information technology.

“Some technologies used in computer science will help solve scientific problems. In turn, using these tools to solve scientific problems will also drive the development of computer science itself,” said Hey, a former professor at the University of Southampton in the United Kingdom.

According to Hey, the analysis, visualization, mining, preservation and sharing of large volumes of data represent major challenges not only in science today but also in the private sector.

For this reason, in his view, scientists need to be trained to deal with big data – the name given to the set of technological solutions capable of handling the continuous accumulation of loosely structured data, captured from diverse sources and on the order of petabytes (quadrillions of bytes) – both to carry out scientific projects and, eventually, to work in companies. “The data scientist [a scientist able to handle large volumes of data] will be an indispensable profile,” Hey said.

Data-intensive science is not new, but the spatial and temporal scales of current studies on topics such as global climate change keep growing, demanding new tools. New information technologies also make it possible to analyze data generated in real time, as in habitat monitoring.

According to Hey, computers began to be used in the 1950s to explore, through simulations, areas of science that had until then been inaccessible. “At first, however, scientists did not know what computer science was, and computing professionals did not understand the complexity of scientific problems,” he said.

“Long-term joint work was needed for the two sides to understand what contribution each could make in their respective areas, and to begin developing new algorithms, hardware, software and programming languages that would make experiments possible in various fields,” he recounted.

Opportunities in bold topics

During the FAPESP and Microsoft Research event, researchers presented several projects that use eScience in different countries, in areas such as renewable energy, global climate change, social, economic and political transformations in contemporary metropolises, the characterization, conservation, recovery and sustainable use of biodiversity, medicine and public health.

One of these projects, coordinated by Professor Glaucia Mendes Souza, coordinator of the FAPESP Bioenergy Research Program (BIOEN), aims to develop an algorithm for sequencing the sugarcane genome and thereby enable the development of plant varieties with higher sucrose content and greater resistance to pests and climate change.

“The collaboration between FAPESP and Microsoft has opened up countless opportunities for the scientific community of the State of São Paulo to conduct research on bold topics related to the use of information technologies in areas such as energy and the environment,” said Carlos Henrique de Brito Cruz, scientific director of FAPESP, at the workshop's opening session.

“We have great expectations for eScience. If we learn to use it properly, it can bring major advances not only in research but also in the very way science is done,” Brito Cruz said.

He said FAPESP plans to launch a program soon to support research in the field of eScience.

“We are firmly convinced that an important role of FAPESP is to be at the forefront of innovation and knowledge, and we consider support for eScience research very important; its application in areas such as the environment is unequivocal, but it also has great potential for use in the humanities, for example,” said Celso Lafer, president of FAPESP.

Levy highlighted Microsoft's partnership with FAPESP and the company's research and development investments in the country. “Microsoft has increased its investments in research and development in Brazil in recent years, and one of the most important examples of this is the successful partnership we maintain with FAPESP,” he said.

Clouds in the Head: New Model of Brain’s Thought Processes (Science Daily)

May 21, 2013 — A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. Scientists funded by the Swiss National Science Foundation (SNSF) propose that the field of brain research should expand its focus.

A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. (Credit: iStockphoto/Sebastian Kaulitzki)

Many brain researchers cannot see the forest for the trees. When they use electrodes to record the activity patterns of individual neurons, the patterns often appear chaotic and difficult to interpret. “But when you zoom out from looking at individual cells, and observe a large number of neurons instead, their global activity is very informative,” says Mattia Rigotti, a scientist at Columbia University and New York University who is supported by the SNSF and the Janggen-Pöhn-Stiftung. Publishing in Nature together with colleagues from the United States, he has shown that these difficult-to-interpret patterns in particular are especially important for complex brain functions.

What goes on in the heads of monkeys

The researchers have focused their attention on the activity patterns of 237 neurons that had been recorded some years previously using electrodes implanted in the frontal lobes of two rhesus monkeys. At that time, the monkeys had been taught to recognise images of different objects on a screen. Around one third of the observed neurons demonstrated activity that Rigotti describes as “mixed selectivity.” A mixed selective neuron does not always respond to the same stimulus (the flowers or the sailing boat on the screen) in the same way. Rather, its response differs as it also takes account of the activity of other neurons. The cell adapts its response according to what else is going on in the monkey’s brain.
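
A toy numerical illustration of the distinction (invented firing rates, not the recorded data) contrasts a purely stimulus-selective neuron with a mixed-selective one, whose response can only be explained by the combination of stimulus and task context:

# Hypothetical mean firing rates (spikes/s) for two stimuli in two task contexts
pure_selective = {("flowers", "recall"): 20, ("flowers", "recognize"): 20,
                  ("sailboat", "recall"): 5,  ("sailboat", "recognize"): 5}
mixed_selective = {("flowers", "recall"): 25, ("flowers", "recognize"): 5,
                   ("sailboat", "recall"): 5,  ("sailboat", "recognize"): 22}

def interaction(rates):
    # Deviation from a purely additive "stimulus effect + context effect" model (2x2 interaction term)
    return (rates[("flowers", "recall")] - rates[("flowers", "recognize")]
            - rates[("sailboat", "recall")] + rates[("sailboat", "recognize")])

print("pure neuron interaction: ", interaction(pure_selective))   # 0: the stimulus alone explains it
print("mixed neuron interaction:", interaction(mixed_selective))  # 37: the response depends on the combination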

Chaotic patterns revealed in context

Just as individual computers are networked to create concentrated processing and storage capacity in the field of Cloud Computing, links in the complex cognitive processes that take place in the prefrontal cortex play a key role. The greater the density of the network in the brain, in other words the greater the proportion of mixed selectivity in the activity patterns of the neurons, the better the monkeys were able to recall the images on the screen, as demonstrated by Rigotti in his analysis. Given that the brain and cognitive capabilities of rhesus monkeys are similar to those of humans, mixed selective neurons should also be important in our own brains. For Rigotti, this is reason enough for brain research to no longer content itself with simple activity patterns alone, but also to consider the apparently chaotic patterns that can only be revealed in context.

Journal Reference:

  1. Mattia Rigotti, Omri Barak, Melissa R. Warden, Xiao-Jing Wang, Nathaniel D. Daw, Earl K. Miller, Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature, 2013; DOI: 10.1038/nature12160