
Folhinha de Mariana (Archdiocese of Mariana)

FOLHINHA ECLESIÁSTICA DE MARIANA
n.d., accessed 12 September 2014

Canon José Geraldo Vidigal de Carvalho*

The traditional “Folhinha Eclesiástica de Mariana” has been published in Mariana since 1870, that is, for 136 years now. It was founded by Dom Silvério as a substitute for the calendars of the day, which were at times somewhat licentious. It was preceded, in 1830, by the “Folhinha de Rezas do Bispado de Mariana”, which offered prayers and information of public interest.

Famous for its weather forecast (the “Regulamento do tempo”), the Folhinha de Mariana, which over the years earned a reputation for infallibility, has a print run of around three hundred thousand copies. It is known throughout the State and in other regions of the country.

In 1959, the then Archbishop of Mariana, Dom Oscar de Oliveira, acquired the copyright from Agripino Claudino dos Santos and, in 1965, that of the similar Folhinha Civil e Eclesiástica do Arcebispado de Mariana, published by the Tipografia e Livraria Moraes. The Folhinha then came to be printed by Editora Dom Viçoso, which holds the Lunário Perpétuo used for the annual calculations.

These calculations are based on the lunar year, whose start is made to coincide with the lunation that begins in December. Each lunation lasts exactly 29 days, 12 hours and 44 minutes, and the phenomena attributed to lunar influence repeat every nineteen years.
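The nineteen-year repetition described above corresponds to the Metonic cycle, in which 235 lunations very nearly equal 19 solar years (the Metonic framing is our gloss, not the article's). A quick sketch to check the arithmetic, using the mean synodic month of 29 days, 12 hours and 44 minutes:

```python
# Check that 235 lunations fit almost exactly into 19 solar years,
# the cycle behind the Folhinha's nineteen-year repetition.

lunation_days = 29 + 12 / 24 + 44 / (24 * 60)  # 29 d 12 h 44 min
metonic_lunations = 235 * lunation_days         # 235 lunations per cycle
nineteen_years = 19 * 365.25                    # 19 Julian years, in days

print(f"235 lunations: {metonic_lunations:.2f} days")
print(f"19 years:      {nineteen_years:.2f} days")
print(f"difference:    {abs(metonic_lunations - nineteen_years):.2f} days")
```

The two periods differ by under two hours, which is why lunar phenomena fall on nearly the same calendar dates every nineteen years.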

The Lunário Perpétuo provides the rules for calculating changes in the weather, as recorded in the Regulamento printed in the Folhinha. Naturally, such forecasts hold only for the geographical area indicated in the Lunário Perpétuo itself.

From 1960 to 1994 we served as director of this Folhinha, and over those 34 years the volume of correspondence praising the Calendar's accuracy in forecasting the weather was impressive. Countless newspapers published stories about it, always highlighting this point. Naturally, a few legends have grown up around the Folhinha de Mariana, but at bottom they only confirm its high standing with the public.

Thus people sometimes say that “it is easier for a hen to grow teeth than for the Folhinha de Mariana to be wrong!” It is also told that someone telephoned a friend in a neighboring town, complaining that the Folhinha de Mariana had forecast rain and no rain had come. The reply was immediate: “Just you wait!” Shortly afterwards a storm confirmed the forecast of “unsettled weather” there, rebuking that doubting Thomas!

The writer Carlos Drummond de Andrade wrote of this calendar in a column published in the Jornal do Brasil on 27 December 1973, on page 5 of the first section, under the heading “A Boa Folhinha” (“The Good Folhinha”): “It does not try to beguile us with the pomp of this world. It reminds us that there are days of penance, the latter commuted into works of charity and pious exercises.

For each day of the year, the saint, or saints, whom it suits us to accept as companions on the journey: brief company, ever-varied company, and the year flows by under a tranquil light, even when the weather is rough and the rain abundant.” The renowned writer closes with this advice: “Let us turn to the good, truthful, simple and irreplaceable Folhinha de Mariana”.

The calendar offers prayers, religious instruction, tables of sunrise and sunset, of movable feasts, of holidays and of planting seasons, resolutions of the CNBB and biographical notes on the Pope, besides reserving an 11×15 space for advertising by the shops that hand it out to their customers as a year-end gift.

As we write these lines we have before us a copy of the calendar for the year 2000, sent by a pharmacy that “offers much more security for your health and a guarantee of good service!”.

*Former director of the Folhinha de Mariana (1960-1994)

When their research has social implications, how should climate scientists get involved? (The Guardian)

Scientists prefer to stick to research, but sometimes further involvement is warranted

Thursday 4 September 2014 14.00 BST

Laboratory technician in a lab; the natural habitat of scientists. Photograph: David Burton/Alamy

First, at the end of this post I ask my readers for feedback, so please read to the end.

Most scientists go into their studies because they want to understand the world. They want to know why things happen, and how to describe phenomena both mathematically and logically. But as scientists carry out their research, their findings often have large social implications. What do they do when that happens?

Well, traditionally, scientists just “stick to the facts” and report. They try to avoid making recommendations, policy or otherwise, based on their findings. But as the social implications of various issues grow larger (environmental, energy, medical, etc.), it becomes harder for scientists to sit out of public discussions about what should be done. In fact, researchers who have a clear handle on an issue and the pros and cons of different choices have very valuable perspectives to offer society.

But what does involvement look like? For some scientists, it may be helping reporters gather information for stories that may appear online, in print, radio, or television. In another manifestation, it might be writing for themselves (like my blog here at the Guardian). Others may write books, meet with legislators, or partake in public demonstrations.

Each of these levels of engagement has professional risks. We scientists need to protect our professional reputations. That reputation requires that we are completely objective in our science. As a scientist becomes more engaged in advocacy, they risk being viewed by their colleagues as non-objective in their science.

Of course, this isn’t true. It is possible (and easy) to convey the science while also conveying the importance of taking action. I do this on a daily basis. But I will go further here. It is essential for scientists to speak out. Since we have the expertise needed to inform decisions, engagement is our obligation to society. Of course, each scientist has to decide how to become engaged. We don’t get many kudos for engagement: it takes time and money away from our research, a public presence will never earn you tenure, and you will likely receive poorly written hate mail. But it is still needed for informed decision making.

One very public activity some scientists engage in is public events and demonstrations. A large such event is going to occur this September in New York (September 21 – the Peoples’ Climate March). Just a few days before the UN Climate Summit, the Climate March hopes to bring thousands of people from faith, business, health, agriculture, and science communities together. Scientists will certainly be there – and those scientists should be lauded. I am encouraging my colleagues to participate in events like this.

Okay, so now the poll (sort of). I have been writing this blog for over a year – something like 60 posts. Approximately half of those posts are on actual science, covering new studies that shed light on our ever-expanding understanding of the Earth’s climate. Another sizeable number are reviews of books, movies, projects, and the like. A third category deals with how climate impacts different locations around the globe; in this group, I’ve written about climate change in Uganda, Kenya, and Cameroon – effects that I’ve witnessed with my own eyes. A fourth category, which I just started, focuses on specific scientists telling how they got into climate change. Finally, I write a few posts debunking bad science and misguided public statements.

In truth, I prefer the harder science, but frankly those posts do not get as many page views as the debunking posts. I am here asking for suggested topics for future posts. I have a few in the queue, but I am always looking for engaging topics and angles. You can send them to me at my university email address, jpabraham@stthomas.edu. Also, feel free to comment on the current mix of stories. Is 50% hard science the right mix? Is it too much? Too little? Is my writing too technical? Not technical enough? Let me hear your thoughts.

Can Carbon Capture Technology Be Part of the Climate Solution? (Environment 360)

08 SEP 2014

Some scientists and analysts are touting carbon capture and storage as a necessary tool for avoiding catastrophic climate change. But critics of the technology regard it as simply another way of perpetuating a reliance on fossil fuels.

By David Biello

For more than 40 years, companies have been drilling for carbon dioxide in southwestern Colorado. Time and geology had conspired to trap an enormous bubble of CO2 that drillers tapped, and a pipeline was built to carry the greenhouse gas all the way to the oil fields of west Texas. When scoured with the CO2, these aged wells gush forth more oil, and much of the CO2 stays permanently trapped in its new home underneath Texas.

More recently, drillers have tapped the Jackson Dome, nearly three miles beneath Jackson, Mississippi, to get at a trapped pocket of CO2 for similar use. It’s called enhanced oil recovery. And now there’s a new source of CO2 coming online in Mississippi: a power plant that burns gasified coal in Kemper County, due to be churning out electricity and captured CO2 by 2015 and sending it via a 60-mile pipeline to oil fields in the southern part of the state.

Kemper County power plant near Meridian, Mississippi. Photograph: Gary Tramontina/Bloomberg/Getty Images. This power plant being built in Kemper County, Mississippi, would be the first in the U.S. to capture its own carbon emissions.

The Mississippi project uses emissions from burning a fossil fuel to help bring more fossil fuels out of the ground — a less than ideal solution to the problem of climate change. But enhanced oil recovery may prove an important step in making more widely available a technology that could be critical for combating climate change — CO2 capture and storage, or CCS.

As the use of coal continues to grow globally — coal consumption is expected to double from 2000 to 2020 largely due to demand in China and India — some scientists believe the widespread adoption of CCS technology could be key to any hope of limiting global average temperature increase to 2 degrees Celsius, the threshold for avoiding major climate disruption. After all, coal is the dirtiest fossil fuel.

“Fossil fuels aren’t disappearing anytime soon,” says John Thompson, director of the Fossil Fuel Transition Project for the non-profit Clean Air Task Force. “If we’re serious about preventing global warming, we’re going to have to find a way to use those fuels without the carbon going into the atmosphere. It seems inconceivable that we can do that without a significant amount of carbon capture and storage. The question is how do we deploy it in time and in a way that’s cost-effective across many nations?”

The biggest challenge is one of scale, as the potential demand from aging oil fields for CO2 produced from coal-fired power plants is enormous. Thompson estimates that enhanced oil recovery could ultimately consume 33 billion metric tons of CO2 in total, or the equivalent of all the CO2 pollution from all U.S. power plants for several decades. Thompson and other analysts view such large-scale enhanced oil recovery as an important phase in the deployment of CCS technology while replacements for fossil fuels are developed. 

“In the short term, in order to develop the technology, we probably will enable more use of hydrocarbons, which makes environmentally conscious people uncomfortable,” says Chris Jones, a chemical engineer working on CO2 capture at the Georgia Institute of Technology. “But it’s a necessary thing we have to do to get the technology out there and learn how to make it more efficient.”

At the same time, CO2 capture and storage is not as simple as locking away carbon deep underground. As Jones notes, the process will perpetuate fossil fuel use and may prove a wash as far as keeping global warming pollution out of the atmosphere. Then there are the risks of human-caused earthquakes as a result of pumping high-pressure liquids underground or accidental releases as all that CO2 finds its way back to the atmosphere.

“Any solution that doesn’t take carbon from the air is, in principle, not sustainable,” says physicist Peter Eisenberger of the Lamont-Doherty Earth Observatory at Columbia University, who is working on methods to pull CO2 out of the sky rather than smokestacks. He notes that merely avoiding CO2 pollution is not enough and will create political powerhouses, heirs to the energy companies of today, that will entrench such unsustainable technologies. “Why spend so much time and energy and ingenuity coming up with solutions that are not really solutions?” he adds.

But the expansion of enhanced oil recovery remains the main front in an intensifying effort to more broadly adopt CCS technology and reduce its price, which is currently the major impediment to its deployment. The need for CO2 storage goes beyond China and the U.S., the world’s two largest polluters. Worldwide, more than 35 billion metric tons of CO2 are being dumped into the atmosphere annually, almost all from the burning of coal, oil, and natural gas. To restrain global warming to the 2 degree C target, more than 100 CCS projects eliminating 270 million metric tons of CO2 pollution annually would have to be built by 2020, according to the International Energy Agency. But only 60 are currently planned or proposed and just 21 of those are actually built or in operation.

Those include the Kemper facility and other coal-fired power plants, but also a CCS project under construction at an ethanol refinery in Illinois. A group led by Royal Dutch Shell is building technology to capture the CO2 pollution from tar sands operations in Alberta, Canada, and in Saskatchewan, a $1.2 billion project to retrofit a large coal-fired power plant with CCS technology is expected to open later this year. And there are 34 proposed or operating CCS projects outside of North America, the majority in Asia and Australia. But European countries like Germany have rolled back plans to adopt CCS because of public opposition, dropping the number of European projects from 14 planned in 2011 to just five as of 2014, according to the Global CCS Institute. 

That might conflict with the European Union’s avowed intention to help combat climate change. The U.N. Intergovernmental Panel on Climate Change suggested earlier this year that carbon capture and storage at power plants could prove a critical part of any serious effort to restrain global warming. “We depend on removing large amounts of CO2 from the atmosphere in order to bring concentrations well below 450 [parts-per-million] in 2100,” said Ottmar Edenhofer, an economist at the Potsdam Institute for Climate Impact Research and co-chair of the IPCC’s third working group, which was tasked with figuring out ways to mitigate climate change. Ultimately, he said, keeping a global temperature rise to 2 degrees without any CCS would require phasing out fossil fuels entirely within “the next few decades.”

Yet, from 2007 to 2013, global coal consumption increased from 6.4 billion to 7.4 billion metric tons, and coal use continues to rise. Although renewable energy sources like solar and wind are growing rapidly, they are doing so from a very small base and many energy analysts argue that it will be decades before they can supplant fossil fuels. The time and expense of building nuclear power plants — and public opposition — has also hampered that low-carbon technology’s ability to replace coal burning. And biofuels or electric cars remain a long way from supplanting oil for transportation.

The Obama administration hopes to encourage the development of CO2 capture and use or storage. New rules from the Environmental Protection Agency requiring a 30 percent cut in power plant emissions by 2030 may spur development of CCS technologies. Already, NRG Energy has partnered with a Japanese firm to add CO2 capture to a coal-fired power plant near Houston and use a pipeline to send the captured pollution to nearby oilfields. Dubbed Petra Nova, the $1 billion CCS project is the latest in a series of 19 CO2 capture projects underway or proposed in the U.S. 

The bulk of such CO2 capture and storage experiments may soon shift to China, the world’s largest emitter of CO2. The Chinese and U.S. governments have a cooperative agreement to develop the technology, including partnerships between Chinese power companies like Huaneng and American corporations such as Summit Power, which is developing a CCS power plant in west Texas. In China, the long-awaited GreenGen power plant in Tianjin is still under construction and will capture CO2 for China’s own efforts at enhanced oil recovery. But going forward, the expense of CCS may make the technology even more unpalatable in a developing country like China, which also has plans to turn coal into liquid fuels — a process that, from a climate perspective, is even worse than burning the dirty rock directly.

The technology to capture CO2 is relatively simple, and has been in use since the 1930s. For example, CO2 can be captured from the smokestacks of coal plants, natural gas plants, and even factories by routing the flue gases through an amine chemical bath, which binds the CO2. The chemical is then heated to release the CO2. The CO2 is pressurized to convert it to a liquid, and the liquid is then pumped via pipeline to an appropriate storage site. Those include underground geological formations, such as sandstones or saline aquifers, but also old oil fields, where the CO2 replaces the oil in small pores in the rock left behind by conventional methods and forces it up to the surface. Six percent of U.S. oil already comes from using enhanced oil recovery, a number that will increase, according to the U.S. Energy Information Administration.

Still, the economic and technological challenges facing CCS are daunting. Much-heralded projects like the CO2 capture and storage demonstration at the Mountaineer Power Plant in West Virginia were abandoned because no one wanted to pay for it. The hardware sits unused next to the hulking power plant’s smokestacks and cooling towers. 

The ultimate challenge is that capturing CO2 from a smokestack costs more than simply dumping it into the atmosphere. Analysts say the simplest way to encourage less pollution and more CO2 capture would be to charge for the privilege of emitting CO2 by imposing a tax on carbon emissions. A price on CO2, if high enough, might make capturing the greenhouse gas look cheap.

Even if that policy change happens, the problem of storing all that CO2 remains, including concerns that the CO2 could escape back into the atmosphere or cause earthquakes. In Algeria, a test to store nearly 4 million metric tons of injected CO2 underground was halted after the gas raised the overlying rock and fractured it. Concerns over such induced seismicity or accidental releases of CO2 have blocked CCS plans in Europe, as have concerns over how to ensure the stored CO2 stays put for millennia.

But storing CO2 underground can work, as Norway’s Sleipner project in the North Sea has demonstrated. At Sleipner, which started capturing and storing CO2 in 1996, more than 16 million metric tons of CO2 have been put in an undersea sandstone formation; the project is funded by Norway’s carbon tax. And around the world, the potential storage resource is gargantuan. The U.S. alone has an estimated 4 trillion metric tons of CO2 storage capacity in the form of porous sandstones or saltwater aquifers, according to the U.S. Department of Energy.
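Combining two figures from the article (more than 35 billion metric tons of CO2 emitted globally each year, and roughly 4 trillion metric tons of estimated U.S. storage capacity) gives a rough sense of scale; a quick back-of-envelope sketch:

```python
# Rough scale check: how many years of current global CO2 emissions
# would the estimated U.S. underground storage capacity hold?

global_emissions_per_year = 35e9  # metric tons CO2/yr (article figure)
us_storage_capacity = 4e12        # metric tons CO2 (DOE estimate cited)

years_of_storage = us_storage_capacity / global_emissions_per_year
print(f"~{years_of_storage:.0f} years of current global emissions")
```

On these numbers, U.S. capacity alone could in principle absorb more than a century of today's worldwide emissions; the binding constraints are cost and capture rate, not geology.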

Scientists at Columbia’s Lamont-Doherty Earth Observatory and elsewhere are investigating just how vast the storage potential under the ocean could be. David Goldberg, a marine geophysicist at Lamont, proposes that liquid CO2 could be pumped offshore and injected into the ubiquitous basalt formations found off many of the world’s coastlines. When mixed with water, the CO2 leaches metals out of the basalt and forms a carbonate chalk, Goldberg explains. 

“The goal of the whole CCS exercise is to take CO2, which is volatile, and put it in solid form where it will stay locked away forever,” he adds. Goldberg has calculated that just one such ridge site that runs the north-south length of the Atlantic Ocean could theoretically store all of humanity’s excess CO2 emissions to date. “The magic of being offshore is that you are away from people and away from property.”

There is also basalt on land. In an experiment in Iceland, more than 80 percent of the injected CO2 interacted with the surrounding basalt and converted to rock in less than a year. A similar experiment in Washington State achieved similar results.

In the end, getting off fossil fuels entirely is the only way to control CO2 pollution. But until that happens, CCS could be vital to stave off catastrophic climate change. “Ultimately, we need a thermostat on this planet,” says Klaus Lackner, a Columbia University physicist who is working on pulling the greenhouse gas directly out of the air rather than capturing it from smokestacks. “And we need to control the CO2.”

Correction, September 9, 2014: Previous versions of this article misstated the amount of CO2 storage capacity in porous sandstones or saltwater aquifers in the U.S.; it is 4 trillion metric tons.

World on track to be 4C warmer by 2100 because of missed carbon targets (The Guardian)

Concerns about the short-term costs and impacts of investment to address risks are paralysing action on climate change

Jonathan Grant

Guardian Professional, Monday 8 September 2014 13.28 BST

Heavy rains in Albuquerque, New Mexico. The top 10 destinations for the UK’s foreign direct investment experienced almost $100bn worth of extreme weather losses in 2013. Photograph: Roberto Rosales/AP

Global ambitions to reduce emissions are becoming a bit like the resolutions we make to give something up at new year: the intention is sincere, but we don’t always deliver.

For the sixth successive year of the PwC Low Carbon Economy Index, the global carbon target has been missed. And inadequate action today means that even steeper reductions are needed in the future. The target is based on projections of economic growth and the global carbon budget set out by the Intergovernmental Panel on Climate Change (IPCC), which gives a reasonable probability of limiting warming to 2C.

Globally, annual reductions need to be five times current levels, averaging 6.2% a year, every year from now to 2100, compared with 1.2% today. At the national level, Australia is at the top of our decarbonisation league of G20 nations, followed by the UK. Both countries had a strong increase in renewable generation, albeit from a low base, combined with a slight reduction in coal use. The US was nearer the bottom as coal use bounced back, retaking a share of the electricity mix from shale gas.
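The gap between a 1.2% and a 6.2% annual cut in carbon intensity compounds dramatically over the horizon the index uses. A quick sketch of the arithmetic (the 2014 start year is an assumption for illustration):

```python
# Compound effect of annual carbon-intensity cuts (emissions per unit GDP).
# 6.2%/yr is PwC's required rate; 1.2%/yr is the rate observed today.

def intensity_remaining(annual_cut: float, years: int) -> float:
    """Fraction of today's carbon intensity left after compounding cuts."""
    return (1 - annual_cut) ** years

years = 2100 - 2014  # 86 years to the end of the century

required = intensity_remaining(0.062, years)
current = intensity_remaining(0.012, years)

print(f"At 6.2%/yr: {required:.1%} of today's intensity left by 2100")
print(f"At 1.2%/yr: {current:.1%} of today's intensity left by 2100")
```

The required rate leaves well under 1% of today's carbon intensity by 2100, a near-total decarbonisation of output; the current rate leaves over a third of it in place.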

The world is currently on track to burn this century’s IPCC carbon budget within 20 years, and is on a pathway to 4C of global warming by 2100. For many of us, 2034 is within our working lifetime. It’s within the timeframe of decisions being made today on long-term investments and on the location of factories and their supply chains. So businesses are making those decisions faced with uncertainty about climate policy and the potential impacts of climate change.

It is clear that the gap between what governments are saying about climate change and what they are doing about it continues to widen. While they talk about two degrees at the climate negotiations, the current trend is for a 4C world.

There is little mention of these two degrees of separation in the negotiations, in policy documents, in business strategies or in board rooms. Operating in a changing climate is becoming a very real challenge for UK plc. Some of the biggest names in business are mapping the risks posed by a changing climate to their supply chain, stores, offices and people.

But while the findings call into question the realism of the 2C target in the negotiations, two points in the analysis demonstrate the strong case for the negotiations’ role in focusing everyone on co-ordinated action on climate change.

First, our analysis shows that the top 10 destinations for the UK’s foreign direct investment in 2011 were exposed to almost $100bn worth of extreme weather losses in 2013. Multi-billion pound UK investments are wrapped up in transport, technology, retail, food and energy sectors, making this an issue on everyone’s doorstep.

Second, co-ordinated, ambitious action to tackle emissions growth should protect business in the long term. It could even be a boost to growth. It would avoid short-term decisions that may look attractive, such as shutting down a steel operation in a country with a high cost of carbon to move it to one with a lower cost, but that merely relocate emissions, and take jobs with them.

The concern about short-term costs and impacts on investment is paralysing our ability to address the long-term climate risks. Perhaps competitiveness is the new climate scepticism. Businesses call for a level playing field on carbon pricing, when it should be seen in the wider context of labour and energy prices, the skills market and wider legislative environment.

There’s a danger when we talk in small numbers – whether they are one or two degrees, or the 6% now required in annual decarbonisation (every year for the next 66 years, by the way) – that they sound manageable. The 6% figure is double the rate the UK achieved when we dashed for gas in the 1990s. A shale gas revolution might help, but it would need to be accompanied by a revolution in carbon capture and storage, and revolutions in renewables, in electric transport, in industrial processes and in our buildings.

The UK’s results are encouraging, even if they fall short of the overall target. The UK’s leadership in low carbon is down partly to policies and investment, partly to the structure of our economy, and partly to traditional factors such as skills and education. But it’s notable that while the Low Carbon Economy Index shows that the UK’s carbon intensity is lower than many, it is still higher than in France, Argentina or Brazil. It’s a neat encapsulation of viewing the world through a low-carbon-economy lens, not just a GDP one. The UK’s competitiveness and attractiveness today will need investment if they are to be held on to tomorrow.

Jonathan Grant is director, sustainability and climate change, PwC

World falls behind in efforts to tackle climate change: PwC (Reuters)

LONDON Sun Sep 7, 2014 6:24pm EDT

(Reuters) – The world’s major economies are falling further behind every year in terms of meeting the rate of carbon emission reductions needed to stop global temperatures from rising more than 2 degrees this century, a report published on Monday showed.

The sixth annual Low Carbon Economy Index report from professional services firm PwC looked at the progress of major developed and emerging economies toward reducing their carbon intensity, or emissions per unit of gross domestic product.

“The gap between what we are achieving and what we need to do is growing wider every year,” PwC’s Jonathan Grant said. He said governments were increasingly detached from reality in addressing the 2 degree goal.

“Current pledges really put us on track for 3 degrees. This is a long way from what governments are talking about.”

Almost 200 countries agreed at United Nations climate talks to limit the rise in global temperatures to less than 2 degrees Celsius (3.6 Fahrenheit) above pre-industrial times to limit heat waves, floods, storms and rising seas from climate change. Temperatures have already risen by about 0.85 degrees Celsius.

Carbon intensity will have to be cut by 6.2 percent a year to achieve that goal, the study said. That compares with an annual rate of 1.2 percent from 2012 to 2013.

Grant said that to achieve the 6.2 percent annual cut would ‎require changes of an even greater magnitude than those achieved by recent major shifts in energy production in some countries.

France’s shift to nuclear power in the 1980s delivered a 4 percent cut, Britain’s “dash for gas” in the 1990s resulted in a 3 percent cut and the United States shale gas boom in 2012 led to a 3.5 percent cut.

GLIMMER OF HOPE

PwC said one glimmer of hope was that for the first time in six years emerging economies such as China, India and Mexico had cut their carbon intensity at a faster rate than industrialized countries such as the United States, Japan and the European Union.

As the manufacturing hubs of the world, the seven biggest emerging nations have emissions 1.5 times larger than those of the seven biggest developed economies, and the decoupling of economic growth from carbon emissions in those nations is seen as vital.

Australia had the highest rate of decarbonization for the second year in a row, cutting its carbon intensity by 7.2 percent over 2013.

Coal producer Australia has one of the world’s highest rates of emissions per person but its efforts to rein in the heat-trapping discharges have shown signs of stalling since the government in July repealed a tax on emissions.

Britain, Italy and China each achieved a decarbonization rate of 4-5 percent, while five countries increased their carbon intensity: France, the United States, India, Germany and Brazil.

United Nations Secretary General Ban Ki-moon hopes to gather more than 100 world leaders in New York on September 23 to reinvigorate efforts to forge a global climate deal.

(Reporting by Ben Garside. Editing by Jane Merrman)

 

New York summit is last chance to get consensus on climate before 2015 talks (The Guardian)

UN is trying to convince countries to make new pledges before they meet in Paris to finalise a new deal on cutting emissions

Paul Brown for Climate News Network, part of the Guardian Environment Network

theguardian.com, Thursday 4 September 2014 14.48 BST

UN secretary general Ban Ki-moon has invited world leaders to New York on 23 September for a climate summit. Photograph: David Rowland/AFP/Getty Images

It is widely acknowledged that the planet’s political leaders and its people are currently failing to take enough action to prevent catastrophic climate change.

Next year, at the United Nations climate change conference in Paris, representatives of all the world’s countries will be hoping to reach a new deal to cut greenhouse gases and prevent the planet overheating dangerously. So far, there are no signs that their leaders have the political will to do so.

To try to speed up the process, the UN secretary general, Ban Ki-moon, has invited world leaders to UN headquarters in New York on 23 September for a grandly-named Climate Summit 2014.

He said at the last climate conference, in Warsaw last year, that he is deeply concerned about the lack of progress in signing up to new legally-binding targets to cut emissions.

If the summit is a success, a new international deal to replace the Kyoto protocol becomes likely in Paris in late 2015. But if world leaders will not accept new targets for cutting emissions, and timetables to achieve them, then many believe that political progress is impossible.

Ban Ki-moon’s frustration at the lack of progress stems from the fact that politicians know the danger we are in, yet do nothing. World leaders have already agreed that there is no longer any serious scientific argument about the fact that the Earth is heating up, and that, if no action is taken, warming will exceed the 2C danger threshold.

It is also clear, Ban Ki-moon says, that the technologies already exist for the world to turn its back on fossil fuels and cut emissions of greenhouse gases to a safe level.

What the major countries cannot agree on is how the burden of taking action should be shared among the world’s 196 nations.

Ban Ki-moon already has the backing of more than half the countries in the world for his plan. These are the most vulnerable to climate change, and most are already being seriously affected.

More than 100 countries meeting in Apia, Samoa, at the third UN conference on small island developing states, in their draft final statement, note with “grave concern” that world leaders’ pledges on the mitigation of greenhouse gases will not save them from catastrophic sea level rise, droughts, and forced migration. “We express profound alarm that emissions of greenhouse gases continue to rise globally.”

Many of them have long advocated a maximum temperature rise of 1.5C to prevent disaster for the most vulnerable nations, such as the Marshall Islands and the Maldives.

The draft ministerial statement says: “Climate change is one of the greatest challenges of our time, and we express profound alarm that emissions of greenhouse gases continue to rise globally.

“We are deeply concerned that all countries, particularly developing countries, are vulnerable to the adverse impacts of climate change and are already experiencing an increase in such impacts, including persistent drought and extreme weather events, sea level rise, coastal erosion and ocean acidification, further threatening food security and efforts to eradicate poverty and achieve sustainable development.”

Speaking from Apia, Shirley Laban, the convenor of the Pacific Islands Climate Action Network, an NGO, said: “Unless we cut emissions now, and limit global warming to less than 1.5C, Pacific communities will reap devastating consequences for generations to come. Because of pollution we are not responsible for, we are facing catastrophic threats to our way of life.”

She called on all leaders attending the UN climate summit in New York to “use this historic opportunity to inject momentum into the global climate negotiations, and work to secure an ambitious global agreement in 2015”.

This is a tall order for a one-day summit, but Ban Ki-moon is expecting a whole series of announcements by major nations of new targets to cut greenhouse gases, and timetables to reach them.

There are encouraging signs in that the two largest emitters – China and the US – have been in talks, and both agree that action is a must. Even the previously reluctant Republicans in America now accept that climate change is a danger.

It is not yet known how many heads of state will attend the summit in person, or how many will be prepared to make real pledges.

At the end of the summit, the secretary general has said, he will sum up the proceedings. It will be a moment when many small island states and millions of people around the world will be hoping for better news.

Activists promise biggest climate march in history (The Guardian)

People’s Climate March in New York and cities worldwide hopes to put pressure on heads of state at Ban Ki-moon summit

theguardian.com, Monday 8 September 2014 06.00 BST

People’s Climate March advert to be put up on the London Underground tube network. Photograph: Avaaz

Hundreds of thousands of people are expected to take to the streets of New York, London and eight other cities worldwide in a fortnight to pressure world leaders to take action on global warming, in what organisers claim will be the biggest climate march in history.

On 23 September, heads of state will join a New York summit on climate change organised by Ban Ki-moon, the first time world leaders have come together on the issue since the landmark Copenhagen summit in 2009, which was seen as a failure.

The UN secretary general hopes the meeting will inject momentum into efforts to reach a global deal on cutting greenhouse gas emissions by the end of 2015, at a conference in Paris.

Ricken Patel, executive director of digital campaign group Avaaz, one of the organisers of the People’s Climate March on 21 September, said the demonstration was intended to send a signal to those world leaders, who are expected to include David Cameron and Barack Obama, though not heads of state from China and India.

“We in the movement, activists, have failed up until this point to put up a banner and say if you care about this, now is the time, here is the place, let’s come together, to show politicians the political power that is out there. Our goal is to mobilise the largest climate change mobilisation in history and the indications are we’re going to get there,” he told the Guardian.

Patel said he expects more than a hundred thousand people at the New York march alone, which will be the focus of the day’s events. Although many of the hundreds of organisations that have committed to taking part are environmental groups, he said not all those attending would be traditional ‘green’ activists.

“There’s a very strong range and diversity of people from all walks of life, including immigrant rights groups, social justice groups. Whoever you are and wherever you are, climate change threatens us all so it brings us together.”

Nearly 400,000 people have signed a call on Avaaz’s site, saying they will attend one of the global events, which also include marches in Berlin, Paris, Delhi, Rio and Melbourne.

Patel added: “We’re building for the long term here. This is about launching a movement that can literally save the world over the long term. We want to build to last. We recognise that at this stage what needs to be done is to build political momentum behind this issue – our governments are nowhere near even planning to reach the agreements needed to keep warming below [temperature rises of] 2C.”

Around 500 adverts will appear on the London tube network from Monday, calling on people to join the march, and advertising has already appeared across the New York subway. In Rio, the organisers have permission to project messages about the march on to the statue of Christ.

50,000 people demanded action on climate change at The Wave, the biggest ever UK climate change march, in London on 5 December 2009. Photograph: Janine Wiedel/Alamy

In an open letter to be published this week, environment and development groups including Greenpeace, Oxfam and WWF, plus politicians including Green party MP Caroline Lucas and Labour MP Tom Watson, have joined with trade unions and faith groups to call on world leaders to use the UN summit to take action on climate change.

“Politicians all over the world cite a lack of public support as a reason not to take bold action against climate change. So on 21 September we will meet this moment with unprecedented public mobilisations in cities around the world, including thousands of people on the streets of London.

“Our goal is simple – to demonstrate the groundswell of demand that exists for ambitious climate action,” they write.

Celebrities backing the People’s Climate March include model Helena Christensen, musician Peter Gabriel, actor Susan Sarandon, Argentine footballer Lionel Messi and actor Edward Norton.

The previous biggest assembly for a climate march was in Copenhagen in 2009, when tens of thousands of people took to the streets.

Separately on Monday, NGOs Greenpeace, WWF, Green Alliance, RSPB and Christian Aid published a report, Paris 2015: Getting a global agreement on climate change, laying out the level of ambition required for a deal at the UN climate talks in Paris.

Matthew Spencer, Green Alliance’s director, said: “There is a fashionable pessimism about multilateralism which shields people from disappointment but does nothing to protect us from the insecurity that climate change is bringing. Only a strong international agreement can avoid that and give nation states the confidence that they will not be alone as they decarbonise their energy systems.”

Water rationing ‘is not St Peter’s fault’, says UN (OESP)

09/09/2014, 12h41

4 Sep 2014 – The Jaguari-Jacareí reservoir, in the town of Joanópolis in upstate São Paulo, as the index measuring the volume of water stored in the Cantareira System reached just 10.6% of total capacity. Luis Moura/Estadão Conteúdo

Water rationing in São Paulo is not St Peter’s fault, but rather that of the authorities, of the Companhia de Saneamento Básico do Estado de São Paulo (Sabesp), and of a lack of investment. The warning comes from the United Nations (UN) rapporteur on the right to water, the Portuguese lawyer Catarina de Albuquerque, who on Tuesday the 9th presented the body with a report accusing the Brazilian government of failing in its duty to guarantee access to water for the entire population.

Supply crisis

“The culprit always seems to be St Peter,” she said ironically in remarks to the newspaper O Estado de S. Paulo. “I agree that the drought may be significant. But water rationing needs to be anticipated, and the necessary investments need to be made,” she said. “The responsibility lies with the State, which needs to guarantee investment in times of abundance,” she insisted.

According to her, rationing may indeed be necessary in some situations. “But only as a last resort, and after all other options have been exhausted,” she warned.


X-ray of the systems

For the UN rapporteur, it makes no sense for Sabesp to have its shares traded on the New York Stock Exchange and on the São Paulo Stock Exchange (Bovespa) while the city lives with these problems. “Before distributing profits, the company needs to invest to guarantee that everyone has access to water,” she declared.

“The number of people living without access to water and sanitation in the shadows of a rapidly developing society is still enormous,” the rapporteur declared in her speech to the UN, delivered this afternoon in Geneva.

According to her report, a regular, good-quality water supply is still a distant reality for 77 million Brazilians, a population equivalent to all the inhabitants of Germany.

21 Aug 2014 – An excavator tries to remove rubbish from the Tietê river. Less than a month after completing an operation that removed more than 18 tonnes of rubbish from areas of the dry riverbed of the Tietê, the municipality of Salto (101 km from the state capital) is watching the areas being overtaken by debris again. Most of the material is pieces of wood. According to the municipality’s secretary of the environment, João De Conti Neto, the rubbish is being carried back by the current. João De Conti Neto/Personal archive

The UN also points out that 60% of the population – 114 million people – “do not have an appropriate sanitation solution”. The data further reveal that 8 million Brazilians still have to relieve themselves in the open every day.

O Estadão revealed in June 2013 that the UN representative had her first inspection visit, intended to carry out the survey, vetoed by the government. The visit had been scheduled for July of last year. “The government merely explained that, for unforeseen reasons, the mission could no longer take place,” Catarina de Albuquerque declared at the time.

Internally, the UN considered the veto to be directly related to the protests that swept Brazilian cities in 2013. The trip would only take place in December 2013, which prevented the resulting report from being presented to the other UN member governments and to civil society before the World Cup.

Now, the survey reflects the crisis the country faces in access to water and sanitation. “Millions of people continue to live in unhealthy environments, without access to water and sanitation,” the report stated, noting that the biggest problems are in the favelas and in rural areas.

Response

The Brazilian government stated that access to water and sanitation is “a priority”, that the poorest population receives special attention, and that the government has “significantly increased investment in sanitation by transferring resources to states and municipalities”.

“There has been an increase in the budgets of special funds to promote investment in water and sanitation infrastructure,” said Brazil’s ambassador to the UN, Regina Dunlop.

“We are committed to eliminating inequalities, giving priority to the most vulnerable,” the diplomat insisted, noting that favela populations are not forgotten.

Among the measures, the diplomat points to R$ 21.5 billion in government investment in housing, water access, sewage services, and urban renewal.

The government also suggested that the rapporteur should make a broader trip through Brazil, noting the sheer size of the national territory.

 

How conversion of forests to cropland affects climate (Science Daily)

Date: September 8, 2014

Source: Yale School of Forestry & Environmental Studies

Summary: The conversion of forests into cropland worldwide has triggered a change in atmospheric emissions of biogenic volatile organic compounds that — while seldom considered in climate models — has had a net cooling effect on global temperatures, according to a new study.


Since the mid-19th century, the percentage of the planet covered by cropland has more than doubled, from 14 percent to 37 percent. Credit: © Dusan Kostic / Fotolia 

The conversion of forests into cropland worldwide has triggered an atmospheric change that, while seldom considered in climate models, has had a net cooling effect on global temperatures, according to a new Yale study.

Writing in the journal Nature Climate Change, Professor Nadine Unger of the Yale School of Forestry & Environmental Studies (F&ES) reports that large-scale forest losses during the last 150 years have reduced global emissions of biogenic volatile organic compounds (BVOCs), which control the atmospheric distribution of many short-lived climate pollutants, such as tropospheric ozone, methane, and aerosol particles.

Using sophisticated climate modeling, Unger calculated that a 30-percent decline in BVOC emissions between 1850 and 2000, largely through the conversion of forests to cropland, produced a net global cooling of about 0.1 degrees Celsius. During the same period, the global climate warmed by about 0.6 degrees Celsius, mostly due to increases in fossil fuel carbon dioxide emissions.

According to her findings, the climate impact of declining BVOC emissions is on the same magnitude as two other consequences of deforestation long known to affect global temperatures, although in opposing ways: carbon storage and the albedo effect. The lost carbon storage capacity caused by forest conversion has exacerbated global warming. Meanwhile, the disappearance of dark-colored forests has also helped offset temperature increases through the so-called albedo effect. (The albedo effect refers to the amount of radiation reflected by the surface of the planet. Light-colored fields, for instance, reflect more light and heat back into space than darker forests.)

Unger says the combined effects of reduced BVOC emissions and increased albedo may have entirely offset the warming caused by the loss of forest-based carbon storage capacity.

“Land cover changes caused by humans since the industrial and agricultural revolutions have removed natural forests and grasslands and replaced them with croplands,” said Unger, an assistant professor of atmospheric chemistry at F&ES. “And croplands are not strong emitters of these BVOCs — often they don’t emit any BVOCs.”

“Without doing an earth-system model simulation that includes these factors, we can’t really know the net effect on the global climate, because changes in these emissions affect both warming and cooling pollutants,” she noted.

Unger said the findings do not suggest that increased forest loss provides climate change benefits, but rather underscore the complexity of climate change and the importance of better assessing which parts of the world would benefit from greater forest conservation.

Since the mid-19th century, the percentage of the planet covered by cropland has more than doubled, from 14 percent to 37 percent. Since forests are far greater contributors of BVOC emissions than crops and grasslands, this shift in land use has removed about 30 percent of Earth’s BVOC sources, Unger said.

Not all of these compounds affect atmospheric chemistry in the same way. Aerosols, for instance, contribute to global “cooling” since they generally reflect solar radiation back into space. Therefore, a 50 percent reduction in forest aerosols has actually spurred greater warming since the pre-industrial era.

However, reductions in the potent greenhouse gases methane and ozone — which contribute to global warming — have helped deliver a net cooling effect.

These emissions are often ignored in climate modeling because they are perceived as a “natural” part of Earth system, explained Unger. “So they don’t get as much attention as human-generated emissions, such as fossil fuel VOCs,” she said. “But if we change how much forest cover exists, then there is a human influence on these emissions.”

These impacts have also been ignored in previous climate modeling, she said, because scientists believed that BVOC emissions had barely changed between the pre-industrial era and today. But a study published last year by Unger showed that emissions of these volatile compounds have indeed decreased. Studies by European scientists have produced similar results.

The impact of changes to ozone and organic aerosols are particularly strong in temperate zones, she said, while methane impacts are more globally distributed.

The sensitivity of the global climate system to BVOC emissions suggests the importance of establishing a global-scale long-term monitoring program for BVOC emissions, Unger noted.

 

Journal Reference:

  1. Nadine Unger. Human land-use-driven reduction of forest volatiles cools global climate. Nature Climate Change, 2014; DOI: 10.1038/nclimate2347

Study traces ecological collapse over 6,000 years of Egyptian history (Science Daily)

Date: September 8, 2014

Source: University of California – Santa Cruz

Summary: Depictions of animals in ancient Egyptian artifacts have helped scientists assemble a detailed record of the large mammals that lived in the Nile Valley over the past 6,000 years. A new analysis of this record shows that species extinctions, probably caused by a drying climate and growing human population in the region, have made the ecosystem progressively less stable. 

Carved rows of animals, including elephants, lions, a giraffe, and sheep, cover both sides of the ivory handle of a ritual knife from the Predynastic Period in Egypt. Credit: Charles Edwin Wilbour Fund, Brooklyn Museum

Depictions of animals in ancient Egyptian artifacts have helped scientists assemble a detailed record of the large mammals that lived in the Nile Valley over the past 6,000 years. A new analysis of this record shows that species extinctions, probably caused by a drying climate and growing human population in the region, have made the ecosystem progressively less stable.

The study, published September 8 in Proceedings of the National Academy of Sciences (PNAS), found that local extinctions of mammal species led to a steady decline in the stability of the animal communities in the Nile Valley. When there were many species in the community, the loss of any one species had relatively little impact on the functioning of the ecosystem, whereas it is now much more sensitive to perturbations, according to first author Justin Yeakel, who worked on the study as a graduate student at the University of California, Santa Cruz, and is now a postdoctoral fellow at the Santa Fe Institute.

Around six millennia ago, there were 37 species of large-bodied mammals in Egypt, but only eight species remain today. Among the species recorded in artwork from the late Predynastic Period (before 3100 BC) but no longer found in Egypt are lions, wild dogs, elephants, oryx, hartebeest, and giraffe.

“What was once a rich and diverse mammalian community is very different now,” Yeakel said. “As the number of species declined, one of the primary things that was lost was the ecological redundancy of the system. There were multiple species of gazelles and other small herbivores, which are important because so many different predators prey on them. When there are fewer of those small herbivores, the loss of any one species has a much greater effect on the stability of the system and can lead to additional extinctions.”
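The redundancy argument Yeakel describes can be illustrated with a toy food web. The species lists below are hypothetical, not the network reconstructed in the study; the sketch simply shows how losing one prey species matters little while predators still have alternatives, but cascades once earlier extinctions have eroded that redundancy.

```python
# Toy predator -> prey links (hypothetical, for illustration only).
web = {
    "lion":     {"gazelle", "hartebeest", "oryx"},
    "wild_dog": {"gazelle", "hartebeest"},
    "leopard":  {"gazelle"},
}

def predators_without_prey(web, extinct):
    """Fraction of predators whose entire prey set has gone extinct."""
    starved = [p for p, prey in web.items() if prey <= extinct]
    return len(starved) / len(web)

# With a species-rich prey community, losing the gazelle leaves most
# predators with alternatives: only the specialist leopard starves (1/3).
print(predators_without_prey(web, {"gazelle"}))

# After earlier extinctions have removed the alternative herbivores,
# the same loss takes down every predator in the web.
print(predators_without_prey(web, {"hartebeest", "oryx", "gazelle"}))
```

The point of the toy model is the nonlinearity: the cost of losing a species depends on how much redundancy the community has already lost.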

The new study is based on records compiled by zoologist Dale Osborne, whose 1998 book The Mammals of Ancient Egypt provides a detailed picture of the region’s historical animal communities based on archaeological and paleontological evidence as well as historical records. “Dale Osborne compiled an incredible database of when species were represented in artwork and how that changed over time. His work allowed us to use ecological modeling techniques to look at the ramifications of those changes,” Yeakel said.

The study had its origins in 2010, when Yeakel visited a Tutankhamun exhibition in San Francisco with coauthor Nathaniel Dominy, then an anthropology professor at UC Santa Cruz and now at Dartmouth. “We were amazed at the artwork and the depictions of animals, and we realized they were recording observations of the natural world. Nate was aware of Dale Osborne’s book, and we started thinking about how we could take advantage of those records,” Yeakel said.

Coauthor Paul Koch, a UCSC paleontologist who studies ancient ecosystems, helped formulate the team’s approach to using the records to look at the ecological ramifications of the changes in species occurrences. Yeakel teamed up with ecological modelers Mathias Pires of the University of São Paulo, Brazil, and Lars Rudolf of the University of Bristol, U.K., to do a computational analysis of the dynamics of predator-prey networks in the ancient Egyptian animal communities.

The researchers identified five episodes over the past 6,000 years when dramatic changes occurred in Egypt’s mammalian community, three of which coincided with extreme environmental changes as the climate shifted to more arid conditions. These drying periods also coincided with upheaval in human societies, such as the collapse of the Old Kingdom around 4,000 years ago and the fall of the New Kingdom about 3,000 years ago.

“There were three large pulses of aridification as Egypt went from a wetter to a drier climate, starting with the end of the African Humid Period 5,500 years ago when the monsoons shifted to the south,” Yeakel said. “At the same time, human population densities were increasing, and competition for space along the Nile Valley would have had a large impact on animal populations.”

The most recent major shift in mammalian communities occurred about 100 years ago. The analysis of predator-prey networks showed that species extinctions in the past 150 years had a disproportionately large impact on ecosystem stability. These findings have implications for understanding modern ecosystems, Yeakel said.

“This may be just one example of a larger pattern,” he said. “We see a lot of ecosystems today in which a change in one species produces a big shift in how the ecosystem functions, and that might be a modern phenomenon. We don’t tend to think about what the system was like 10,000 years ago, when there might have been greater redundancy in the community.”

 

Journal Reference:

  1. Justin D. Yeakel, Mathias M. Pires, Lars Rudolf, Nathaniel J. Dominy, Paul L. Koch, Paulo R. Guimarães, Jr., and Thilo Gross. Collapse of an ecological network in Ancient Egypt. PNAS, 2014; DOI: 10.1073/pnas.1408471111

Forming consensus in social networks (Science Daily)

Date: September 3, 2014

Source: University of Miami

Summary: To understand the process through which we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model and the corresponding conditions for reaching consensus in a wide range of situations.


Social networks have become a dominant force in society. Family, friends, peers, community leaders and media communicators are all part of people’s social networks. Individuals within a network may have different opinions on important issues, but it’s their collective actions that determine the path society takes.

To understand the process through which we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model and the corresponding conditions for reaching consensus in a wide range of situations. The findings are published in the August 2014 issue on Signal Processing for Social Networks of the IEEE Journal of Selected Topics in Signal Processing.

“We wanted to provide a new method for studying the exchange of opinions and evidence in networks,” said Kamal Premaratne, professor of electrical and computer engineering at the University of Miami (UM) and principal investigator of the study. “The new model helps us understand the collective behavior of adaptive agents – people, sensors, databases or abstract entities – by analyzing communication patterns that are characteristic of social networks.”

The model addresses some fundamental questions: what is a good way to model opinions, how are those opinions updated, and when is consensus reached?

One key feature of the new model is its capacity to handle the uncertainties associated with soft data (such as opinions of people) in combination with hard data (facts and numbers).

“Human-generated opinions are more nuanced than physical data and require rich models to capture them,” said Manohar N. Murthi, associate professor of electrical and computer engineering at UM and co-author of the study. “Our study takes into account the difficulties associated with the unstructured nature of the network,” he adds. “By using a new ‘belief updating mechanism,’ our work establishes the conditions under which agents can reach a consensus, even in the presence of these difficulties.”

The agents exchange and revise their beliefs through their interaction with other agents. The interaction is usually local, in the sense that only neighboring agents in the network exchange information, for the purpose of updating one’s belief or opinion. The goal is for the group of agents in a network to arrive at a consensus that is somehow ‘similar’ to the ground truth — what has been confirmed by the gathering of objective data.

In previous works, consensus achieved by the agents was completely dependent on how agents update their beliefs. In other words, depending on the updating scheme being utilized, one can get different consensus states. The consensus in the current model is more rational or meaningful.

“In our work, the consensus is consistent with a reliable estimate of the ground truth, if it is available,” Premaratne said. “This consistency is very important, because it allows us to estimate how credible each agent is.”

According to the model, if the consensus opinion is closer to an agent’s opinion, then one can say that this agent is more credible. On the other hand, if the consensus opinion is very different from an agent’s opinion, then it can be inferred that this agent is less credible.

“The fact that the same strategy can be used even in the absence of a ground truth is of immense importance because, in practice, we often have to determine if an agent is credible or not when we don’t have knowledge of the ground truth,” Murthi said.
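As an illustration of this kind of belief-updating process, the sketch below implements a classic DeGroot-style averaging model. It is far simpler than the evidence-fusion framework in the paper, but it captures the two ideas described here: agents repeatedly average their neighbours’ opinions until a consensus emerges, and each agent’s credibility is then scored by how close its initial opinion lay to the consensus value. The network and opinion values are invented for the example.

```python
def consensus(opinions, neighbours, rounds=100):
    """Iterate local averaging: each agent replaces its belief with the
    mean of its neighbours' beliefs (its own included)."""
    beliefs = dict(opinions)
    for _ in range(rounds):
        beliefs = {
            a: sum(beliefs[n] for n in neighbours[a]) / len(neighbours[a])
            for a in beliefs
        }
    return beliefs

# A small connected network; each agent's neighbour list includes itself.
neighbours = {
    "a": ["a", "b"],
    "b": ["a", "b", "c"],
    "c": ["b", "c"],
}
opinions = {"a": 0.9, "b": 0.5, "c": 0.1}

final = consensus(opinions, neighbours)
consensus_value = sum(final.values()) / len(final)

# Credibility, as the article describes it: agents whose initial opinion
# was closer to the consensus are deemed more credible.
credibility = {a: 1 - abs(opinions[a] - consensus_value) for a in opinions}
```

Here agent “b”, whose initial opinion coincides with the eventual consensus, scores highest, while the two agents holding extreme opinions score equally lower; the same ranking logic applies whether or not an external ground truth is available.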

In the future, the researchers would like to expand their model to include the formation of opinion clusters, where each cluster of agents share similar opinions. Clustering can be seen in the emergence of extremism, minority opinion spreading, the appearance of political affiliations, or affinity for a particular product, for example.

 

Journal Reference:

  1. Thanuka L. Wickramarathne, Kamal Premaratne, Manohar N. Murthi, Nitesh V. Chawla. Convergence Analysis of Iterated Belief Revision in Complex Fusion Environments. IEEE Journal of Selected Topics in Signal Processing, 2014; 8 (4): 598 DOI: 10.1109/JSTSP.2014.2314854

In our world beyond nations, the future is medieval (New Scientist)

04 September 2014

Magazine issue 2985

Islamic State is more like a postmodern network than a nation state – so we’ll need new tactics to deal with it

FOR most of the past thousand years, there were no nations in Europe. It was a hotchpotch of tribal groupings, feudal kingdoms, autonomous cities and trading networks. Over time, the continent’s ever more complex societies and industries required ever more complex governance; with the French Revolution, the modern nation state was born.

Now the nation’s time may be drawing to a close, according to those who look at society through the lenses of complexity theory and human behaviour. There is plentiful evidence for this once you start looking (see “End of nations: Is there an alternative to countries?”). Consider the European Union, which is trying – much to the disapproval of many Europeans – to transcend its member nations.

Is this a prospect to welcome or dread? One possible reaction is a resurgence of nationalism, based on the desire to consolidate a perceived common identity. Russia’s bellicosity in eastern Ukraine, for example, was supposedly intended to protect the interests of Russian speakers – a transnational act in itself.

Some believe, instead, that the medieval way of running things is due for a comeback. For much of the Middle Ages, power was wielded by city states, like Florence and Hamburg, and by mercantile associations like the Hanseatic League. Reinventing this system might not sound like progress, especially to those who mistrust the overweening power of cities like London or bodies like the World Trade Organization, but it has its pluses. The governors of big cities oversee most of the world’s inhabitants, share many concerns and are often freer to act than national governments.

Small nations could also thrive, particularly if they distinguish themselves through high-tech expertise (New Scientist, 31 May 2014, p 12). Witness how talk of “going it alone” around the imminent Scottish referendum has often segued into talk of how a politically independent Scotland could maintain its links with England and the EU.

But post-nationalism has its ugly side, too. Islamic State, the extremist movement which has overrun northern Iraq and Syria, is usually described as medieval in a pejorative sense. But it is also hyper-modern, interested in few of the trappings of a conventional state apart from its own brutal brand of law enforcement. In fact, it is more of a network than a nation, having made canny use of social media to exert influence far beyond its geographical base.

Confronted with this post-national threat, the world’s most powerful nations have reacted with something approaching stunned silence. “We have no strategy,” said US president Barack Obama in a rare gaffe. The British government has resorted to “royal prerogative” – a medieval legal instrument if ever there was one – to provide a pretext for controlling the movements of British jihadis. It remains to be seen if this will work: any such action is fraught with complexity under international law.

Thirteen years ago this month, Al-Qaida’s attack on the World Trade Center demonstrated the shortcomings of conventional defences in the face of 21st-century threats. The response was a radical reshaping of the security and military landscape, with effects that are still playing out.

Today, Al-Qaida’s offspring pose a similarly acute challenge to the apparatus of international relations. Even if we decide not to embrace post-nationalism, we’ll have to figure out how to engage with those who do. And we don’t have a thousand years to do it.

This article appeared in print under the headline “State of the nation”

Nudge: The gentle science of good governance (New Scientist)

25 June 2013

Magazine issue 2922

NOT long before David Cameron became UK prime minister, he famously prescribed some holiday reading for his colleagues: a book modestly entitled Nudge.

Cameron wasn’t the only world leader to find it compelling. US president Barack Obama soon appointed one of its authors, Cass Sunstein, a social scientist at the University of Chicago, to a powerful position in the White House. And thus the nudge bandwagon began rolling. It has been picking up speed ever since (see “Nudge power: Big government’s little pushes“).

So what’s the big idea? We don’t always do what’s best for ourselves, thanks to cognitive biases and errors that make us deviate from rational self-interest. The premise of Nudge is that subtly offsetting or exploiting these biases can help people to make better choices.

If you live in the US or UK, you’re likely to have been nudged towards a certain decision at some point. You probably didn’t notice. That’s deliberate: nudging is widely assumed to work best when people aren’t aware of it. But that stealth breeds suspicion: people recoil from the idea that they are being stealthily manipulated.

There are other grounds for suspicion. It sounds glib: a neat term for a slippery concept. You could argue that it is a way for governments to avoid taking decisive action. Or you might be concerned that it lets them push us towards a convenient choice, regardless of what we really want.

These don’t really hold up. Our distaste for being nudged is understandable, but is arguably just another cognitive bias, given that our behaviour is constantly being discreetly influenced by others. What’s more, interventions only qualify as nudges if they don’t create concrete incentives in any particular direction. So the choice ultimately remains a free one.

Nudging is a less blunt instrument than regulation or tax. It should supplement rather than supplant these, and nudgers must be held accountable. But broadly speaking, anyone who believes in evidence-based policy should try to overcome their distaste and welcome governance based on behavioural insights and controlled trials, rather than carrot-and-stick wishful thinking. Perhaps we just need a nudge in the right direction.

No more pause: Warming will be non-stop from now on (New Scientist)

18:00 31 August 2014 by Michael Slezak

Enjoy the pause in global warming while it lasts, because it’s probably the last one we will get this century. Once temperatures start rising again, it looks like they will keep going up without a break for the rest of the century, unless we cut our greenhouse gas emissions.

The slowdown in global warming since 1997 seems to be driven by unusually powerful winds over the Pacific Ocean, which are burying heat in the water. But even if that happens again, or a volcanic eruption spews cooling particles into the air, we are unlikely to see a similar hiatus, according to two independent studies.

Masahiro Watanabe of the University of Tokyo in Japan and his colleagues have found that, over the past three decades, the natural ups and downs in temperature have had less influence on the planet’s overall warmth. In the 1980s, natural variability accounted for almost half of the temperature changes seen. That fell to 38 per cent in the 1990s and just 27 per cent in the 2000s.

Instead, human-induced warming is accounting for more and more of the changes from year to year, says Watanabe. With ever-faster warming, small natural variations have less impact and are unlikely to override the human-induced warming.
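Watanabe's headline numbers describe a variance split: how much of the change from year to year is natural wiggle rather than forced trend. As a rough illustration only (the paper's actual method relies on climate-model ensembles, not this), one can fit a linear "forced" trend to a decade of temperatures and call the unexplained share of variance natural; the function name here is my own invention.

```python
def natural_fraction(temps):
    """Toy estimate of the share of variance in a temperature series
    left unexplained by a linear (forced-trend) fit, i.e. 1 - R^2.
    Assumes the series is not constant."""
    n = len(temps)
    x = list(range(n))
    mx, my = sum(x) / n, sum(temps) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps))
    slope = sxy / sxx
    ss_tot = sum((yi - my) ** 2 for yi in temps)
    ss_res = ss_tot - slope * sxy  # OLS identity: explained SS = slope * sxy
    return ss_res / ss_tot

# A noisy decade: most of the year-to-year change is "natural" wiggle.
print(natural_fraction([14.3, 14.5, 14.2, 14.6, 14.3, 14.5, 14.4, 14.6, 14.3, 14.6]))
```

A falling value of this fraction across successive decades is the shape of the trend Watanabe describes: the forced signal increasingly dominates the noise.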

“The implication is that we will get fewer hiatus periods, or hiatus periods that last for a shorter period,” says Wenju Cai at the CSIRO in Melbourne, Australia, who wasn’t involved in the work.

Stop it

According to another recent study, the current hiatus may be our last for a while. Matthew England and his colleagues at the University of New South Wales in Sydney, Australia, tried to quantify the chance of another pause. “It’s looking to us that it’s probably going to be the last one that we’ll see in the foreseeable future,” says England.

Using 31 climate models, they showed that if emissions keep rising, the chance of a hiatus – a 10-year period with no significant warming – drops to virtually zero after 2030. The current hiatus will probably be followed by rapid warming as the heat trapped in the ocean escapes back into the atmosphere, so we are unlikely to get another decade of no warming before 2030. England believes it could be another century or more before the next hiatus.
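The operational definition used here, a 10-year period with no significant warming, can be made concrete. The sketch below is an assumption on my part, not the study's detection code: it fits an ordinary least-squares trend to a decade of temperatures and asks whether the slope is significantly positive at the 5% level.

```python
T_CRIT = 1.860  # one-sided 5% critical value of Student's t, df = 10 - 2 = 8

def ols_slope_t(y):
    """OLS slope of y against 0..n-1, plus its t-statistic.
    Assumes y is noisy (non-zero residuals), so the standard error is > 0."""
    n = len(y)
    x = list(range(n))
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = (rss / (n - 2) / sxx) ** 0.5
    return slope, slope / se

def is_hiatus(decade):
    """True if a 10-year series shows no statistically significant warming."""
    slope, t = ols_slope_t(decade)
    return slope <= 0 or t < T_CRIT

flat = [14.50, 14.52, 14.48, 14.51, 14.49, 14.50, 14.53, 14.47, 14.50, 14.50]
rising = [14.5 + 0.05 * i + 0.005 * (-1) ** i for i in range(10)]
print(is_hiatus(flat), is_hiatus(rising))  # True False
```

England's result can be read as a claim about this test: run it on model decades after 2030 under rising emissions and it virtually never returns True, because the forced trend overwhelms the noise.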

But that could change if we slow greenhouse gas emissions now. If we can reach peak global emissions by 2040, the temperature rise will slow by the end of the century, and hiatus periods will become more likely.

Hiatuses can also be triggered by volcanic eruptions that spew particles into the air, reflecting sunlight away from Earth, as happened after the 1991 Mount Pinatubo eruption. But even if a volcano erupts it will make little difference. “After 2030, the rate of global warming is likely to be so fast that even large volcanic eruptions on the scale of Krakatoa are unlikely to drive a hiatus decade,” says team member Nicola Maher.

Journal references: Watanabe: Nature Climate Change, DOI: 10.1038/nclimate2355; Maher: Geophysical Research Letters, DOI: 10.1002/2014GL060527

United Nations predicts climate hell in 2050 with imagined weather forecasts (The Guardian)

‘Reports from the future’ warn of floods, storms and searing heat in campaign for climate change summit

The Guardian, Monday 1 September 2014 19.18 BST

Signs warning of drought and high temperatures in Texas. Photograph: Mike Stone/Reuters

The United Nations is warning of floods, storms and searing heat from Arizona to Zambia within four decades, as part of a series of imagined weather forecasts released on Monday for a campaign publicising a UN climate summit.

“Miami South Beach is under water,” one forecaster says in a first edition of “weather reports from the future”, a series set in 2050 and produced by companies including Japan’s NHK, the US Weather Channel and ARD in Germany.

The UN World Meteorological Organization, which invited well-known television presenters to make videos to be issued before the summit on 23 September, said the scenarios were imaginary but realistic for a warming world.

A Zambian forecaster, for instance, describes a severe heatwave and an American presenter says: “The mega-drought in Arizona has claimed another casualty.”

Some, however, show extreme change. One Bulgarian presenter shows a red map with temperatures of 50C (122F) – far above the temperature record for the country of 45.2C (113F) recorded in 1916.

“Climate change is affecting the weather everywhere. It makes it more extreme and disturbs established patterns. That means more disasters; more uncertainty,” the UN secretary general, Ban Ki-moon, said in a statement.

Ban has asked world leaders to make “bold pledges” to fight climate change at the meeting in New York. The summit is meant as a step towards a deal by almost 200 nations, due by the end of 2015, to slow global warming.

A UN report last year concluded it is at least 95% probable that human activities, rather than natural variations in the climate, are the main cause of global warming since 1950.

Bill would ban the use of animals in cosmetics research (Agência Senado)

Monday, 8 September 2014

The Senate bill awaits the designation of a rapporteur in the Committee on Science, Technology, Innovation, Communication and Informatics

The Senate bill (PLS 45/2014) banning the use of animals in the research and development of cosmetics and personal hygiene products could be approved before the end of this year. The proposal, authored by senator on leave Alvaro Dias (PSDB-PR), awaits the designation of a rapporteur in the Committee on Science, Technology, Innovation, Communication and Informatics (CCT), where it will be considered in a terminative decision.

Alvaro Dias’s bill amends the law that established procedures for the scientific use of animals (Law 11,794/2008) to prohibit “the use of animals in the research and development of cosmetics and personal hygiene products”.

According to Alvaro Dias, this kind of ban is “a worldwide trend”, given that the European Union has already prohibited the practice.

“Several alternatives already exist for the safety assessments involved in this research, such as biological modelling, computer modelling and ‘in vitro’ methods based on cell culture, without the need to subject animals to cruel procedures,” the senator from Paraná argues in the bill’s justification.

PLS 45/2014 is being considered jointly with PLS 438/2013, by senator Valdir Raupp (PMDB-RO), which addresses the same subject. Raupp’s proposal amends the same law to establish that animal testing for the production of cosmetics does not count as scientific research activity.

In the justification of his bill, Raupp adds that India, Israel and Canada likewise no longer accept tests on animals for cosmetic purposes. In Brazil, the senator notes, the company Natura follows European Union guidelines and has not tested on animals since 2003.

“Cosmetics offer a wider range of methods that make it possible, in many cases, to avoid the use of animals. In that sense, we regard the testing of cosmetics on animals as an unnecessary, outdated and notoriously questionable practice, since it causes the animals considerable suffering,” says the senator for Rondônia.

(Agência Senado)

Is ontology making us stupid? (Theoria)

By Terence Blake

MARCH 8, 2013

(This is a translation and expansion of my paper given at Bernard Stiegler’s Summer Academy in August 2012. In it I consider the ontologies of Louis Althusser, of Graham Harman, and of Paul Feyerabend).

Abstract: I begin by “deconstructing” the title and explaining that Feyerabend does not really use the word “ontology”, though he does sometimes call his position ontological realism. I explain that he talks about his position as indifferently a “general methodology” or a “general cosmology”, and that he seems to be hostile to the very enterprise of ontology as a separate discipline forming part of what Feyerabend critiques as “school philosophy”. I then go on to say that there is perhaps a concept of a different type of ontology, which I call a “diachronic ontology”, that perhaps he would have accepted, and that is very different from ontology as ordinarily thought, which I claim to be synchronic ontology (having no room for the dialogue with Being, but just supposing that Being is already and always there without our contribution). I discuss Althusser and Graham Harman as exemplifying synchronic ontology, giving a reading of Harman’s recent book THE THIRD TABLE. I then discuss Feyerabend’s ideas as showing a different way, that of a diachronic ontology, in which there is no stable framework or fixed path. I end with Andrew Pickering, whose essay NEW ONTOLOGIES makes a similar distinction to mine, expressing it in the imagistic terms of a De Kooningian (diachronic) versus a Mondrianesque (synchronic) approach.

A) INTRODUCTION

The question posed in the title, is ontology making us stupid?, is a reference to Nicholas Carr’s book THE SHALLOWS, an elaboration of his earlier essay IS GOOGLE MAKING US STUPID?, and I will spoil the suspense by giving you the answer right away: Yes and No. Yes, ontology can make us more stupid if it privileges the synchronic, and I will give two examples: (1) the «marxist» ontology of Louis Althusser and (2) the object-oriented ontology of Graham Harman. No, on the contrary, it can make us less stupid if it privileges the diachronic, and here I will give the example of the pluralist ontology of Paul Feyerabend.

Normally, I should give a little definition of ontology: the study of being as being, or the study of the most fundamental categories of beings, or the general theory of objects and their relations. However, this paper ends with a presentation of the ideas of Paul Feyerabend, and it must be noted that Feyerabend himself does not use the word «ontology», preferring instead to talk, indifferently, of «general cosmology» or of «general methodology». Sometimes as well he talks of the underlying system of categories of a worldview. And towards the end of his life he began to talk of Being with a capital B, but he always emphasized that we should not get hung up on one particular word or approach because there is no «stable framework which encompasses everything», and any name or argument or approach only «accompanies us on our journey without tying it to a fixed road» (Feyerabend’s Letter to the Reader, Against Method, xvi, available here: http://www.kjf.ca/31-C2BOR.htm). Feyerabend explicitly indicated that his own «deconstructive» approach derived from his fidelity to this ambiguity and this fluidity. Thus ontology for Feyerabend implies a journey, ie a process of individuation, without a fixed road and without a stable framework.

As for «stupid», it refers to a process of «stupidification» or dumbing down, of dis-individuation, that tends to impose on us just such a fixed road and stable framework. The word «making» also calls for explanation. We are noetic creatures, and so the good news is that we can never be completely stupid, or completely disindividuated, except in the case of brain death. The bad news is that we can always become stupider than we are today, just as we can always become more open, more fluid, more multiple, more differentiated, in short more individuated. Ontology is not a magic wand that can transform us into an animal or a god, but it can favour one or the other fork of the bifurcation of paths.

ARGUMENT: My argument will be very simple:

    1. traditional ontologies are based on an approach to the real that privileges the synchronic dimension, where the paths are fixed and the framework is stable. Althusser and Harman are good examples of synchronic ontology.
    2. another type of ontology is possible, and it exists sporadically, which privileges the diachronic dimension, and thus the aspects of plurality and becoming, the paths are multiple and the framework is fluid. Feyerabend is a good example of diachronic ontology.

NB: For the sake of brevity, I talk of synchronic and of diachronic ontologies, but in fact each type of ontology contains elements of the other type, and it is simply a matter of the primacy given to the synchronic over the diachronic, or the inverse.

Philosophy is inseparable from a series of radical conversions where our comprehension of all that exists is transformed. In itself, such a capacity for conversion or paradigm change is rather positive. A problem arises when this conversion amounts to a reduction of our vision and to an impoverishment of our life, if it makes us stupid. My conversion to a diachronic ontology took place in 1972, when I read Feyerabend’s AGAINST METHOD (NB: this was the earlier essay version, with several interesting developments that were left out of the book), where he gives an outline of a pluralist ontology and epistemology. On reading it I was transported, transformed, converted; unfortunately, at the same period my philosophy department converted to a very different philosophy – Althusserianism.

B) ALTHUSSER AND ALTHUSSERIANISM

In fact, 1973 was a year that marked a turning point between the “diachronic tempest” of the 60s and the synchronic return to order desired by the Althusserians. I am deliberately using the expression that Bernard Stiegler uses to describe the invention of metaphysics as it was put to work in Plato’s REPUBLIC, in support of a project of synchronisation of minds and behaviours. I was the unwilling and unconsenting witness of an attempt at such a synchronisation on a small scale: my department, the Department of General Philosophy, sank into the dogmatic project, explicitly announced as such, of forming radical (ie Althusserian) intellectuals under the aegis of Althusserian Marxist Science. A small number of Althusserian militants took administrative and intellectual control of the department, and by all sorts of techniques of propaganda, intimidation, harassment and exclusion, forced all its members, or almost all, either to conform to the Althusserian party line or to leave.

Intellectually the Althusserians imposed an onto-epistemological meta-language in terms of which they affirmed the radical difference between science and ideology, and the scientificity of Marxism. It is customary to describe Althusserianism from the epistemological point of view, but it also had an ontological dimension, thanks to its distinction between real objects and theoretical objects: scientific practice produces, according to them, its own objects, theoretical objects, as a means of knowing the real objects. The objects of everyday life, the objects of common sense, and even perceptual objects, are not real objects, but ideological constructions, simulacra (as Harman will later claim, they are “utter shams”).

Faced with this negative conversion of an entire department, I tried to resist. Because I am “counter-suggestible” (as Feyerabend claimed to be) – in other words, because I am faithful to the process of individuation rather than to a party line – I devoted myself to a critique of Althusserianism. Its rudimentary ontology, the determination of Being in terms of real objects, corresponds to a transcendental point of view of first philosophy which acts as a hindrance to scientific practice, and pre-constrains the type of theoretical construction it can elaborate. To maintain the diachronicity of the sciences one cannot retain the strict demarcation between real objects and theoretical objects, nor between science and ideology. The sciences thus risk being demoted to the same plane as any other ideological construction and having their objects demoted to the status of simulacra. This is a step that the Althusserians did not take, but that, as we shall see, Harman does, thus relieving the sciences of their privileged status.

NB: The set of interviews with Jacques Derrida, POLITICS AND FRIENDSHIP, describes the same phenomenon of intellectual pretension and intimidation supported by a theory having an aura of epistemological and ontological sophistication but which was radically deficient. Derrida emphasises that the concepts of “object” and of “objectivity” were deployed without sufficient analysis of their pertinence or of their theoretical and practical utility and groundedness.

After the period of Althusserian hegemony came a new period of “diachronic storm”, this time on the intellectual plane. Translations came out of works by Foucault and Derrida, but also by Lyotard and Deleuze. Althusserian dogmas were contested and deconstructed. But for me there still remained serious limitations on thought despite this new sophistication. There was an ontological dimension common to all these authors, and this ontological dimension was either neglected or ignored by the defenders of French Theory. Feyerabend himself seemed to be in need of an ontology to reinforce his pluralism and to protect it against dogmatic incursions of the Althusserian type and against relativist dissolutions of the post-modern type. I obtained a scholarship to go and study in Paris, and I left Australia in 1980 to continue my ontological and epistemological research.

What I retain from this experience, over and above the need to maintain and to push forward the deconstruction by elaborating a new sort of ontology to accompany its advances, is the feeling of disappointment with the contradictory sophistication in Althusserian philosophy. I had the impression that it pluralised and diachronised with one hand what it reduced and synchronised with the other. Thus, despite its initial show of sophistication it made its acolytes stupid, disindividuated. Further, as an instrument of synchronisation on the large scale it was doomed to failure by its Marxism and its scientism, both of which made securing its general adoption an impossible mission. It would have been necessary to de-marxise and de-scientise its theory to make it acceptable to the greatest number. Further, its diffusion was limited to the academic microcosm, because at that time there was no internet. These limitations to the theory’s propagation (Marxism, scientism, academic confinement) have been deconstructed and overcome by a new philosophical movement, called OOO (object-oriented ontology) which has conquered a new sort of philosophical public. Lastly, I retain a distrust of any “movement” in philosophy, and of the power tactics (propaganda, intimidation, harassment, exclusion) that are inevitably implied. Oblivious to this sort of “wariness” with respect to the sociology of homo academicus, the OOOxians publicise themselves as a movement and attribute the rapid diffusion of their ideas to their mastery of digital social technologies.

C) HARMAN AND OBJECT-ORIENTED ONTOLOGY

In THE THIRD TABLE, Harman gives a brief summary of the principal themes of his object-oriented ontology. It is a little book, published this year in a bilingual (English-German) edition, and the English text occupies a little over 11 pages (p4-15). The content is quite engaging, as Harman accomplishes the exploit of presenting his principal ideas in the form of a response to Eddington’s famous “two tables” argument. This permits him to formulate his arguments in terms of a continuous polemic against reductionism in both its humanistic and scientistic forms. All that is fine, so far as it goes. However, problems arise when we examine his presentation of each of Eddington’s two tables, and even more so with his presentation of his own contribution to the discussion: a “third table”, the only real one in Harman’s eyes.

In the introduction to his book THE NATURE OF THE PHYSICAL WORLD (1928), Eddington begins with an apparent paradox: “I have just settled down to the task of writing these lectures and have drawn up my chairs to my two tables. Two tables! Yes; there are duplicates of every object about me: two tables, two chairs, two pens” (xi). Eddington explains that there is the familiar object, the table as a substantial thing, solid and reliable, against which I can support myself. But, according to him, modern physics speaks of a quite different table: “My scientific table is mostly emptiness. Sparsely scattered in that emptiness are numerous electric charges rushing about with great speed” (xii). Eddington contrasts the substantiality of the familiar table (a solid thing, easy to visualise as such) with the abstraction of the scientific table (mostly empty space, a set of physical measures related by mathematical formulae). The familiar world of common sense is a world of illusions, whereas the scientific world, the only real world according to modern physics, is a world of shadows.

What is the relation between the two worlds? Eddington poses the question and dramatises the divergence between the two worlds, but contrary to what Harman seems to think, he gives no answer of his own. He declares that premature attempts to determine their relation are harmful, more of a hindrance than a help, to research. In fact, Eddington refuses to commit himself on the ontological question posed in his introduction because he is convinced that it is empirical research, mobilising psychology and physiology as well as physics, which must give the answer. It is clear that he would have regarded Althusserianism as just such a premature and harmful attempt. But what would he have thought of OOO? We shall return to this question in the last part of this talk.

In his little text Harman explains very succinctly the difference between the two tables. But in opposition to Eddington’s supposed scientism, Harman affirms that these two tables are “equally unreal” (p6), that they are just fakes or simulacra (“utter shams”, 6). Assigning each table to one side of the gap that separates the famous “two cultures” dear to C. P. Snow (the culture of the humanities on one side, that of the sciences on the other), he finds that both are products of reductionism, which negates the reality of the table.

“The scientist reduces the table downward to tiny particles invisible to the eye; the humanist reduces it upward to a series of effects on people and other things” (6).

Refusing reductionism and its simulacra, Harman posits the existence of a third table (the “only real” table, 10) which serves as an emblem for a third culture to come, whose paradigm could be taken from the arts, which attempt to “establish objects deeper than the features through which they are announced, or allude to objects that cannot quite be made present” (THE THIRD TABLE, 14). Philosophy itself is to abandon its scientific pretensions in order to speak at last of the real world and its objects.

In WORD AND OBJECT Quine proposes a technique called “semantic ascent” to resolve certain problems in philosophy. He invites us to formulate our philosophical problems no longer in material terms, as questions concerning the components of the world (“objects”), but rather in formal terms, as questions concerning the correct use and the correct analysis of our linguistic expressions (“words”). The idea was to find common ground on which to discuss impartially the pretensions of rival points of view. Unfortunately, this method turned out to be useless for resolving most problems, since as soon as we take up an interesting philosophical problem the important disputes bear just as much on the terms to employ and their interpretation as on the matter itself.

Inversely, Graham Harman with his new ontology proposes a veritable semantic descent (or we could call it an “objectal descent”), to reverse the linguistic turn and replace it with an ontological turn. According to him the fundamental problems of ontology must be reformulated in terms of objects and their qualities. These objects are not the objects of our familiar world; let us recall that Harman declares the familiar table unreal, a simulacrum, an “utter sham”. The real object is a philosophical object, which “withdraws behind all its external effects” (10). We cannot touch the Harmanian table (for we can never touch any real object) nor even know it.

“The real is something that cannot be known, but only loved” (12).

Thus Harman operates a reduction of the world to objects and their qualities which is intended to be in the first instance ontological and not epistemological (here Harman is mistaken, and the epistemological dimension is omnipresent in his work, but as the object of a denegation). This objectal reduction is difficult to argue for, and sometimes it is presented as a self-evident truth accessible to every person of good will and good sense, and Harman’s philosophy is trumpeted as a return to naiveté and concreteness, triumphing over post-structuralist pseudo-sophistication and its abstractions. But we shall see that this is not the case.

This reduction of the world to objects and their qualities amounts to a conversion of our philosophical vision that is disguised as a return to the real world of concrete objects:

“Instead of beginning with radical doubt, we start from naiveté. What philosophy shares with the lives of scientists,  bankers, and animals is that all are concerned with objects” (THE QUADRUPLE OBJECT, 5).

“Once we begin from naiveté rather than doubt, objects immediately take center stage” (idem, 7)

This “self-evidence” of the point of view of naïveté is in fact meticulously constructed and highly philosophically motivated. We must recall that Harman’s “objects” are not at all the objects of common sense (we can neither know them nor touch them). So the “naiveté” that Harman invokes here is not some primitive openness to the world (that would only be a variant of the “bucket theory of mind” and of knowledge denounced by Karl Popper). This “naiveté” is a determinate point of view, a very particular perspective (the “naive point of view”, as the French translation so aptly calls it). Under cover of this word “naiveté”, Harman talks to us of a “naïf” point of view that is nevertheless an “objectal” point of view, that is to say not naïf at all but partisan. Harman deploys all his rhetorical resources to provoke in the reader the adoption of the objectal point of view as if it were self-evident. This “objectal conversion” is necessary, according to him, to get out at last from the tyranny of epistemology and the linguistic turn, and to edify a new ontology, a new foundation for a metaphysics capable of speaking of all objects. We have seen that this “self-evident” beginning implies both a conversion and a reduction.

We see the parallels and differences of object-oriented ontology in relation to Althusserianism. Both relegate the familiar object and the perceptual object to the status of social constructions. OOO goes even further and assigns the scientific object to the same status of simulacrum (“utter sham”): only philosophy can tell us the truth about objects. Both propose a meta-language, but OOO’s meta-language is so de-qualified that it is susceptible of different instantiations, and in fact no two members of the movement have the same concrete ontology. Finally, OOO spreads by making abundant, liberal (and here the word has all its import) use of the means that the internet makes available: blogs, discussion groups, Facebook exchanges, Twitter, podcasts, streaming.

I have spoken here principally of Graham Harman’s OOO because I do not believe that OOO exists in general and I also think that its apparent unity is a deceitful façade. There is no substance to the movement, it is rather a matter of agreement on a shared meta-language, ie on a certain terminology and set of themes, under the aegis of which many different positions can find shelter. I have spoken here almost exclusively of THE THIRD TABLE because Harman’s formulations change from book to book, and I find that in this little brochure Harman offers us his meta-language in a pure state. In his other books Harman, without noticing, slides constantly between a meta-ontological sense of object and a sense which corresponds to one possible instanciation of this meta-language, thus producing much conceptual confusion.

My major objection to Harman’s OOO is that it is a school philosophy dealing in generalities and abstractions far from the concrete joys and struggles of real human beings (“The world is filled primarily not with electrons or human praxis, but with ghostly objects withdrawing from all human and inhuman access”, THE THIRD TABLE, 12). Despite its promises,  Harman’s OOO does not bring us closer to the richness and complexity of the real world but in fact replaces the multiplicitous and variegated world with a set of bloodless and lifeless abstractions – his unknowable and untouchable, “ghostly”, objects. Not only are objects unknowable, but even whether something is a real object or not is unknowable: “we can never know for sure what is a real object and what isn’t”.

Yet Harman has legislated that his object is the only real object (cf. THE THIRD TABLE, where Harman calls his table, as compared to the table of everyday life and the scientist’s table, “the only real one”, 10, and “the only real table”, 11. As for the everyday table and the scientific table: “both are equally unreal”, both are “utter shams”, 6. “Whatever we capture, whatever we sit at or destroy is not the real table”, 12. And he accuses others of “reductionism”!). To say that the real object is unknowable (“the real is something that cannot be known”, p12) is an epistemological thesis. As is the claim that the object we know, the everyday or the scientific object, is unreal.

How can this help us in our lives? It is a doctrine of resignation and passivity: we cannot know the real object, the object we know is unreal, an “utter sham”, we cannot know what is or isn’t a real object. Harman’s objects do not withdraw, they transcend. They transcend our perception and our knowledge, they transcend all relations and interactions. As Harman reiterates, objects are deep (“objects are deeper than their appearance to the human mind but also deeper than their relations to one another”, 4, “the real table is a genuine reality deeper than any theoretical or practical encounter with it…deeper than any relations in which it might become involved”, 9-10). This “depth” is a key part of Harman’s ontology, which is not flat at all and is the negation of immanence. Rather, it is centered on this vertical dimension of depth and transcendence.

Harman practices a form of ontological critique which contains both relativist elements and dogmatic elements. At the level of explicit content Harman is freer, less dogmatic than Althusser, as he does not make science the queen of knowledge. Harman situates himself insistently “after” the linguistic turn, after the so-called “epistemologies of access”, after deconstruction and post-structuralism. He considers that the time for construction has come, that we must construct a new philosophy by means of a return to the things themselves of the world – objects. But is this the case?

D) FEYERABEND AND THE HARMFULNESS OF THE ONTOLOGICAL TURN

1) EDDINGTON’S REPLY TO HARMAN: THE WAY OF RESEARCH

Feyerabend stands in opposition to this demand for a new construction, and wholeheartedly espouses the continued necessity of deconstruction. He rejects the idea that we need a new system or theoretical framework, arguing that in many cases a unified theoretical framework is just not necessary or even useful:

“a theoretical framework may not be needed (do I need a theoretical framework to get along with my neighbor?). Even a domain that uses theories may not need a theoretical framework (in periods of revolution theories are not used as frameworks but are broken into pieces which are then arranged this way and that way until something interesting seems to arise)” (Philosophy and Methodology of Military Intelligence, 13).

Further, not only is a unified framework often unnecessary, it can be a hindrance to our research and to the conduct of our lives: “frameworks always put undue constraints on any interesting activity” (ibid, 13). He emphasises that our ideas must be sufficiently complex to fit in with and to cope with the complexity of our practices (11). Rather than a new theoretical construction, which only serves “to confuse people instead of helping them”, we need ideas that have the complexity and the fluidity that come from close connection with concrete practice and with its “fruitful imprecision” (11). Lacking this connection, we get only school philosophies that “deceive people but do not help them”. They deceive people by replacing the concrete world with their own abstract construction “that gives some general and very mislead (sic!) outlines but never descends to details”. The result is a simplistic set of slogans and stereotypes that “is taken seriously only by people who have no original ideas and think that [such a school philosophy] might help them getting ideas”.

Applied to the ontological turn, this means that an ontological system is useless, a hindrance to thought and action, whereas an ontology which is not crystallised into a system and principles, but which limits itself to an open set of rules of thumb and of free study of concrete cases, is both acceptable and desirable. The detour through ontology is useless, because according to Feyerabend a more open and less technical approach is possible. In effect, Feyerabend indicates what Eddington could have replied to Harman: just like Althusserianism, OOO must be considered a premature and harmful failure because it specifies in an a priori and dogmatic fashion what the elements of the world are. This failure is intrinsic to its transcendental approach: it is premature because it prejudges the paths and results of empirical research, and it is harmful because it tends to exclude possible avenues of research and to close people’s minds, making them stupid.

Eddington’s position is in fact very complex. He gives a dramatised description of what amounts to the incommensurability of the world of physics and the familiar world of experience. This is implicit in the whole theme of the necessary “aloofness” (xv) that scientific conceptions must maintain with respect to familiar conceptions. He then goes on to pose the question of the relation, or “linkage”, between the two. Sometimes he seems to give primacy to the familiar world, e.g.: “the whole scientific inquiry starts from the familiar world and in the end it must return to the familiar world” (xiii), and “Science aims at constructing a world which shall be symbolic of the world of commonplace experience” (xiii). Sometimes he gives primacy to the world of physics, and seems to declare that the familiar world is illusory, e.g.: “In removing our illusions we have removed the substance, for indeed we have seen that substance is one of the greatest of our illusions” (xvi), though he does attenuate this by adding: “Later perhaps we may inquire whether in our zeal to cut out all that is unreal we may not have used the knife too ruthlessly”. On the question of the relation between physics and philosophy he is no mere scientistic chauvinist. Indeed, he gives a certain primacy to the philosopher: “the scientist … has good and sufficient reasons for pursuing his investigations in the world of shadows and is content to leave to the philosopher the determination of its exact status in regard to reality” (xiv). But he considers that neither common sense nor philosophy must interfere with physical science’s “freedom for autonomous development” (xv).
His conclusion is that reflection on modern physics leads to “a feeling of open-mindedness towards a wider significance transcending scientific measurement” (xvi), and he warns against a priori closure: “After the physicist has quite finished his worldbuilding a linkage or identification is allowed; but premature attempts at linkage have been found to be entirely mischievous”.

As we can see, Graham Harman’s discussion of this text in THE THIRD TABLE makes a mess of Eddington’s position, treating him as advocating the scientistic primacy of the world of physics. Harman can then propose his own “solution”: the objects of both common sense and physics are “utter shams”, and the real object is that of (Harman’s) philosophy. This is why I think that Harman’s OOO is a contemporary example of what Eddington calls “premature attempts at linkage” and finds “mischievous”, i.e. both failed and harmful.

2) A MACHIAN CRITIQUE OF OOO

My thesis is that much of OOO is a badly flawed epistemology masquerading as an ontology. An interesting confirmation of this thesis is the touting of Roy Bhaskar’s A REALIST THEORY OF SCIENCE. For those too young to remember: this book came out initially in 1975, after the major epistemological works by Popper, Kuhn, Lakatos and Feyerabend. It was an ontologising re-appropriation of their epistemological discoveries. It was hailed as a great contribution by the Anglophone Althusserians (I kid you not!), as it gave substance to their distinction between the theoretical object (produced by the theoretical practices of the sciences) and the real object. The Althusserians used Bhaskar to legitimate their posing of Althusserian Marxism and Lacanian psychoanalysis as sciences. Their universal critique of any philosophical view that did not square with theirs was to disqualify it as demonstrably belonging, sometimes in very roundabout and tortuous ways, to the “problematic of the subject”. Does this begin to sound familiar? Real object vs. theoretical object, problematic of the subject = correlationism. These themes are not new, but go back to the dogmatic reaction of the 70s! It is amusing to see that Bhaskar, who is a prime example of someone who invented an ontological correlate to epistemological insights, is now being used as the proponent of a non-correlationist “realist” position, to condemn those who supposedly give primacy to epistemology over ontology. The whole procedure is circular. That is to say, far from really asking the transcendental question of what the world must be like for science to be possible (a question which is an ideological cover-up for the real historical stakes of Bhaskar’s intervention), Bhaskar proceeds to an ontologisation of insights and advances in epistemology, and so constrains future research with an a posteriori ontology projected backwards as if it were an a priori “neutral” precondition of science.
So Harman’s supposed primacy of ontology is in fact based on his continual denegation of his de facto dependence on results imported from epistemology and on the dogmatic freezing and imposition of what is at best only a particular historical stage of scientific research and of epistemological reflection.

One of my biggest objections to OOO concerns the question of primacy, which remains moot in contemporary philosophy. As we have seen, Harman’s ontological turn gives primacy to (transcendental, meta-level) philosophy. Feyerabend articulates an Eddingtonian position, one that gives primacy neither to philosophy nor to physics, but defends the open-mindedness of empirical (though not necessarily scientific) research. I think this can be clarified by examining Feyerabend’s defense of the “way of the scientist” as against the “way of the philosopher”. Feyerabend’s references to Mach (and to Pauli) show that this “way of the scientist” is transversal, not respecting the boundaries between scientific disciplines nor those between the sciences and the humanities and the arts. So it is more properly called the “way of research”. Eddington too seems to espouse this Machian way out of the pitfalls of primacy.

Ernst Mach is often seen as a precursor of the logical positivists, an exponent of the idea that “things” are logical constructions built up out of the sensory qualities that compose the world, mere bundles of sensations. He would thus be a key example of what Graham Harman in THE QUADRUPLE OBJECT calls “overmining”. Feyerabend has shown in a number of essays that this vision of Mach’s “philosophy” (the quotation marks are necessary, according to Feyerabend, “because Mach refused to be regarded as the proponent of a new ‘philosophy’”, SCIENCE IN A FREE SOCIETY, p192) is erroneous, based on a misreading by the logical positivists that confounds his general ontology with one specific ontological hypothesis that Mach was at pains to describe as a provisional and research-relative specification of his more general proposal.

Following Ernst Mach, Feyerabend expounds the rudiments of what he calls a general methodology or a general cosmology (this ambiguity is important: Feyerabend, on general grounds but also after a close scrutiny of several important episodes in the history of physics, proceeds as if there is no clear and sharp demarcation between ontology and epistemology, whereas Harman, without the slightest case study, is convinced of the existence of such a dichotomy). Feyerabend’s discussion of Mach’s ontology can be found in SCIENCE IN A FREE SOCIETY (NLB, 1978, p196-203) and in many other places, making it clear that it is one of the enduring inspirations of his work. Mach’s ontology can be summarised, according to Feyerabend, in two points:

i) the world is composed of elements and their relations

ii) the nature of these elements and their relations is to be specified by empirical research

One may note a resemblance with Graham Harman’s ontology, summarised in his “brief SR/OOO tutorial“:

i) Individual entities of various different scales (not just tiny quarks and electrons) are the ultimate stuff of the cosmos.

ii) These entities are never exhausted by their relations. Objects withdraw from relation.

The difference is illuminating. Whereas Mach leaves the nature of these elements open, allowing for the exploration of several hypotheses, Harman transcendentally reduces these possibilities to one: elements are objects (NB: this reduction of the possibilities to one, enshrined in a transcendental principle, is one of the reasons for calling Harman’s OOO an objectal reduction). Further, by allowing empirical research to specify the relations, Mach does not give himself an a priori principle of withdrawal: here again “withdrawal” is just one possibility among many. Another advantage of this ontology of unspecified elements is that it allows us to do research across disciplinary boundaries, including that between science and philosophy. Feyerabend talks of Mach’s ontology’s “disregard for distinctions between areas of research. Any method, any type of knowledge could enter the discussion of a particular problem” (p197). In my terminology, Mach’s ontology is diachronic, evolving with and as part of empirical research. Harman’s ontology is synchronic, dictating and fixing transcendentally the elements of the world.

3) BEING IS MULTIPLE AND THUS POLITICAL

Feyerabend most often uses a dialogical method, although he was led to complain that this was often a one-sided dialogue. This was because many of his philosophical reviewers were what he called “illiterate”, what I am in this talk calling “stupid”, that is to say instances of a dogmatic and decontextualised image of thought conjugated with a disindividuated academic professionalism. Of these failed dialogues Feyerabend writes (in SCIENCE IN A FREE SOCIETY, 10):

I publish them…because even a one-sided debate is more instructive than an essay and because I want to inform the wider public of the astounding illiteracy of some “professionals”

Fortunately, not all his dialogues were so one-sided. In his encounters with interlocutors Feyerabend tends to function like a zen master, trying to get people to change their attitude, to get them to “sense chaos” where they perceive “an orderly arrangement of well behaved things and processes” (cf. his LAST LETTER). A very instructive example of this can be seen in his correspondence on military intelligence networks with Isaac Ben-Israel, over a two-year period stretching from September 1988 to October 1990.

Though Feyerabend mainly refers to the philosophy of science (after all, it was his domain of specialisation for many long years), he gives sporadic indications that his remarks apply to all philosophy, to all “school philosophies”, and not just to epistemology and the philosophy of the sciences. So it is possible to see in a very general way what Feyerabend’s ideas on ontology are in this epistolary dialogue, which begins with considerations of school philosophy as a useless detour, comparing it unfavourably to a more “naive” unacademic critical approach (Feyerabend’s first letter, L1: p5-6), goes on to consider in a little more detail what an unacademic critical philosophy would look like (L2: p11-14), proceeds to plead for the “non-demarcation” of the sciences and the arts-humanities and for the need to see epistemology and ontology as parts of politics (L3: p21-23), and culminates in L4-5 (p31-33) with a sketch of Feyerabend’s own views on ontology. This is an amazing document, as the dialogue form takes Feyerabend into a domain that he has not discussed before (intelligence networks) and permits a concise yet progressive exposition of his later ideas and of their “fruitful imprecision”.

Feyerabend tells us that ontological critique, or the detour through ontology, is unnecessary, because a more open and less technical approach is possible. He gives various figurations of that unacademic approach: the educated layman, discoverers and generals, certain Kenyan tribes, a lawyer interrogating experts, the Homeric Greek worldview, his own minimalist ontology. The advantages he cites of such an unacademic approach are:

1) ability to “work in partly closed surroundings” where there is a “flow of information in some direction, not in others” (p5)

2) action that is sufficiently complex to “fit in” to the complexity of our practices (p11) and of the real world (p12)

3) ability to work without a fixed “theoretical framework”, to “work outside well-defined frames” (p22), to break up frameworks and to rearrange the pieces as the circumstances demand, to not be limited by the “undue constraints” inherent to any particular framework (p13)

4) ability to work not just outside the traditional prejudices of a particular domain (p5) but outside the boundaries between domains, such as the putative boundary between the arts and the sciences (p21)

5) an awareness of the political origins and consequences of seemingly apolitical academic subjects: ontology “without politics is incomplete and arbitrary” (p22).

But one could object that Feyerabend is a relativist, and so that “empirical research” for him could give whatever result we want, because in his system anything goes. In fact the best gloss of this polemical slogan is “anything could work (but mostly doesn’t)”. Feyerabend’s epistemological realism is supported by an ontological realism: “reality (or Being) has no well-defined structure but reacts in different ways to different approaches”. This is one reason why he sometimes refuses the label of “relativist”, because according to him “Relativism presupposes a fixed framework”. For Feyerabend, the transversality of communication between people belonging to apparently incommensurable structures shows that the notion of a frame of reference that is fixed and impermeable has only a limited applicability:

“people with different ways of life and different conceptions of reality can learn to communicate with each other, often even without a gestalt-switch, which means, as far as I am concerned, that the concepts they use and the perceptions they have are not nailed down but are ambiguous”.

Nevertheless, he distinguishes between Being, as ultimate reality, which is unknowable, and the multiple manifest realities which are produced by our interaction with it, and which are themselves knowable. Approach Being in one way, across decades of scientific experiment, and it produces elementary particles, approach it in another way and it produces the Homeric gods:

“I now distinguish between an ultimate reality, or Being. Being cannot be known, ever (I have arguments for that). What we do know are the various manifest realities, like the world of the Greek gods, modern cosmology etc. These are the results of an interaction between Being and one of its relatively independent parts” (32).

The difference with relativism is that there is no guarantee that the approach will work: Being is independent of us and must respond positively, which is often not the case.

Feyerabend draws the conclusion that the determination of what is real and what is a simulacrum cannot be the prerogative of an abstract ontology, and thus of the intellectuals who promulgate it. There is no fixed framework, the manifest realities are multiple, and Being is unknowable. Thus the determination of what is real depends on our choice in favour of one form of life or another, ie on a political decision. This leads to Feyerabend’s conclusion: ontology “without politics is incomplete and arbitrary”.

Inversely, Harman has repeated many times that ontology has nothing to do with politics. Seen through Feyerabend’s eyes Harman’s OOO is thus both incomplete, because it is apolitical, and arbitrary, not only because it is a priori and monist (as we have already said), but also because it attributes to a little tribe of intellectuals the right to tell us what is real (Harman’s “ghostly objects withdrawing from all human and inhuman access”, THE THIRD TABLE, 12) and what is unreal (the simulacra of common sense, of the humanities, and of the sciences). It is also harmful because it is based on ghostly, bloodless, merely intelligible real objects that transcend any of the régimes and practices that give us qualitatively differentiated objects in any recognisable sense. Objects withdraw from the diverse truth-régimes (the sciences, the humanities, common sense, but also religion and politics), i.e. etymologically they abstract themselves: real objects are abstractions, indeed they are abstraction itself. This is not a revolutionary new “weird” realism; this is regressive transcendent realism, cynically packaged as its opposite. I consider Harman’s OOO a purified and consensualised (i.e. demarxised, depoliticised, descientised) version of Althusser’s ontology of the real object and of his anti-humanism, and as exhibiting the same defects as any other synchronic ontology.

E) CONCLUSION

The structure of my argument is very classical, and very abstract, as it remains wholly in the domain of philosophy, and even worse, of first philosophy. I think that a consequent philosophical pluralism has its own dynamic that leads from a pluralism inside philosophy (e.g. Feyerabend’s methodological pluralism) to a pluralising of philosophy itself as an ontological realm and a cognitive régime claiming completeness and universality (e.g. Feyerabend’s Machian “way of research” and his later ontological pluralism: the target is “philosophy as a discourse that covers everything … an all-encompassing synthetic view of the world and what it all means”). Here, I think, comes the move of putting philosophy in relation to a non-philosophical outside (non-philosophical not meaning a negation but a wider practice, as in non-Euclidean geometries). François Laruelle has written on this sort of thing at length, but I don’t think he can claim exclusive ownership (nor even chronological priority) of this idea, nor is he even necessarily the best exemplar of the practice of such a non-philosophy. But at least his work is a gesture in the right direction. So a non-Laruellian non-philosophy is a reasonable prolongation of pluralism. Feyerabend’s work is a good example of such a non-Laruellian non-philosophy.

To conclude I would like to give some indications to show that these questions are, or can be, very practical. In his article NEW ONTOLOGIES, Andrew Pickering presents the two ontologies that I discuss in terms of the contrast between the painters De Kooning and Mondrian. Mondrian’s paintings are examples of a synchronic approach, where the subject distances itself from the world in order to dominate it, according to a transcendent plan which imposes its abstract representations on a passive material. The painter foresees and imposes his order on everything; there is no room for surprises that emerge during the process of painting. The canvas does nothing, it is receptive rather than agentive; there is no exchange between the painter and his canvas, no dialogue.

On the other hand, De Kooning’s canvases participate themselves in the elaboration of the work. There is a continual back-and-forth between the painter and his canvas, “between the perception of emergent effects and the attempt to intensify them”. The De Kooningian approach is diachronic, it involves an immanent, concrete, incarnated, open process of engagement in the world, whereas the Mondrianesque approach is synchronic and implies a transcendent, abstract, disincarnated, closed process of distanciation from the world. The Mondrianesque approach corresponds, according to Pickering, to Heideggerian “enframing”, while the De Kooningian approach practices aletheia, unveiling.

Pickering’s hope is that the diachronic practices which are still marginal in our society can come together and overflow or dissolve the dominant synchronic enframing. Pickering gives several concrete examples of diachronic practices, not only in art (De Kooning) but also in civil engineering (the ecological and adaptive management of a river) and in psychiatry (anti-psychiatric experiments like Kingsley Hall, and institutional psychotherapy like La Borde, favouring symmetric and non-hierarchical relations). He also talks of mathematics, music and architecture, to show in each case the concrete effects of both approaches. Thus we should keep in mind that even if the discussion in this paper is situated on the conceptual plane, the differences and disputes over ontology are inseparable from our concrete daily existence.

An inside look at U.S. think tank’s plans to undo environmental legislation (The Star)

The corporate-sponsored American Legislative Exchange Council works with lobbyists and legislators to derail climate change policies.

Occupy Phoenix protests the American Legislative Exchange Council, an organization that brings together large corporations and U.S. lawmakers to draft anti-environmental policies.

NICK OZA / THE REPUBLIC


Scientists are exaggerating the climate change crisis.

There’s no need to reduce carbon dioxide emissions because the benefits of warmer temperatures outweigh the costs.

Over-the-top environmental regulations are linked to such problems as suicide and drug abuse.

These aren’t the ramblings of a right-wing conspiracy theorist, but the opinions expressed at a midsummer retreat for U.S. state legislators held by a powerful U.S. think tank and sponsored by corporations as varied as AT&T and TransCanada, the company behind the controversial Keystone XL pipeline proposal.

Internal documents from this summer’s Dallas meeting of the American Legislative Exchange Council, leaked to a watchdog group, reveal several sessions casting doubt on the scientific evidence of climate change. They also reveal sessions focused on crafting policies that reduce rules for fossil fuel companies and create obstacles for the development of alternative forms of energy.

The meeting, hosted in Dallas from July 30 to Aug. 1, involved a mix of lobbyists, U.S. legislators and climate change contrarians, and was sponsored by more than 50 large corporations, including several that do business in Alberta’s oilsands.

One workshop had the goal of teaching politicians “how to think and talk about climate and energy issues” and provided them with guidance for fighting environmental policies and regulations.

“Legislators are just there as foot soldiers, really,” said Chris Taylor, a Democratic state representative from Wisconsin and a member of ALEC.

Taylor, who said she belongs to the group in order to keep people informed about what it’s doing, said research groups appear to be writing policies presented at the meeting on behalf of corporations that are trying to get rid of obstacles to profit.

“Legislators aren’t coming up with these ideas,” she said.

An ALEC spokeswoman, Molly Fuhs, said in an email to the Star that all of its meetings are meant to bring together members “to discuss and debate model solutions to the issues facing the states,” using principles of limited government, free markets and federalism.

All of the model policies, which must first be introduced by a legislator member, are voted on and approved by a national board made up of 23 state legislators, she added.

“This is to ensure ALEC model policies are driven by, and are reflective of, state legislators’ ideas and the issues facing the states,” she wrote.

The group, founded in 1973, says it has about 2,000 elected Democratic and Republican state legislators in its membership. Its non-partisan status as an educational organization allows it to give U.S. tax receipts to its donors.

With nine separate committees made up of corporate representatives and politicians, the council says it can contribute to as many as 1,000 different policies or laws in a single year. And on average, about 20 per cent of these become laws or policies in areas such as international trade, the environment or health care, it says.

“For more than forty years, ALEC has helped lobbyists from some of the biggest polluters on the planet meet privately with U.S. lawmakers to discuss and model legislation,” said Nick Surgey, research director at U.S. watchdog Center for Media and Democracy.

“ALEC is a big reason the U.S. is so far behind in taking significant action to tackle climate change.”

A separate session on climate change at the ALEC retreat, presented by another educational charity, featured several proposals to discourage development of renewable energy, to stop new American rules to reduce pollution from coal power plants, as well as a “model resolution” in support of Keystone XL, which is seeking approval from the Obama administration.

According to a conference agenda, obtained by the Center for Media and Democracy, this presentation was given by Joseph Bast, president of the Heartland Institute, a Chicago-based conservative think tank. Neither Bast, an author and publisher with an undergraduate degree in economics, nor the institute responded to requests for comment.

Slides from the presentation show that it also challenged established scientific evidence on climate change, while proposing to dismantle the U.S. Environmental Protection Agency.

Other internal ALEC records released by the watchdog show that it previously asked its elected members to publicly speak out in support of Keystone XL, providing them with “information” to include in submissions for the U.S. State Department, which is reviewing the TransCanada project.

“They lobby,” Taylor, the Wisconsin Democrat, said of ALEC. “They come up with model policies. They send emails to legislators. They urge people to support model policies. They send thank-yous when the model policies pass. My goal in going is to make sure it’s not stealth, to make sure people know where these policies come from. And these policies come from big corporations through ALEC.”

The Harper government has also participated in an ALEC event, sending a Canadian diplomat, Canada’s consul general in Dallas, Paula Caldwell St-Onge, to a 2011 conference in New Orleans to promote the Keystone XL pipeline, the oilsands and other fossil fuels. Speaking notes from her presentation don’t mention climate change.

Fuhs, ALEC’s spokeswoman, confirmed that several multinational corporations were among those to sponsor the Dallas conference, including telecommunications giant AT&T, pharmaceutical companies Pfizer and Bayer and energy companies such as Chevron, Devon, Exxon Mobil and TransCanada.

But she stopped responding to questions from the Star after being asked about the internal documents circulated at the meeting and obtained by the watchdog group.

Most of the companies contacted by the Star confirmed they had sponsored the event, explaining that this didn’t necessarily mean they endorsed all of ALEC’s proposed policies.

Alberta-based TransCanada, which sponsored an “Ice Cream Social” event at the ALEC meetings in each of the past two years, downplayed its role.

“I cannot honestly speak to whether or not someone who was a consultant for our company was at the event — because we are not their only client — but no one was directed to be at this event to present views on behalf of TransCanada,” said TransCanada spokesman Shawn Howard. “I can’t be any clearer than that.”

Howard, who said the company’s contributions to ALEC weren’t considered to be charitable donations, said the sponsorship doesn’t mean TransCanada agrees with the organization’s policies.

“Reasonable people wouldn’t expect us to only go to or support things that are a perfect match for our own company’s views and values,” Howard said, noting TransCanada has a climate change policy that includes billions of dollars of investments in renewable energy.

“Sometimes you have to speak to people with different viewpoints to develop better public policy and decisions — that’s just common sense,” Howard said.

A spokesman for ExxonMobil told the Star the company didn’t want to comment about its sponsorship of ALEC, saying that it wasn’t a member of the organization. ALEC’s website lists representatives from 17 organizations on its “private enterprise advisory council” including ExxonMobil, AT&T, Pfizer, as well as Peabody Energy, the largest private-sector coal company in the world.

ALEC declined to explain the role of this “advisory council.”

A spokesman from Devon Energy, Tim Hartley, confirmed that it was “one of the many sponsors” of the Dallas meeting, explaining that the company “generally favours the principles of free markets and limited government that animate ALEC.” But he said he couldn’t discuss specific public policy issues.

“We interact with a variety of stakeholder groups in the course of our business, and we embrace our responsibility to participate in the free and open marketplace of ideas,” said Hartley.

Although she is often critical of ALEC, Taylor, who joined the organization as a legislative member a few years ago, said she doesn’t expect to be kicked out since it is trying to promote its bipartisan nature to preserve its charitable status.

She said energy was a major theme at the Dallas conference, driven by some large corporations, with one corporate representative from Peabody Energy urging the conference to help spark a “political tsunami” against new U.S. EPA regulations proposed to slash pollution from coal power plants.

Peabody Energy didn’t respond to a request for comment.

Surgey, from the Center for Media and Democracy, said one of his biggest concerns about ALEC is its secrecy.

“We have many of our state elected officials going on to these conferences, and yet we’re not allowed to know who they meet with,” said Surgey. “We just know that it’s a very large number of lobbyists from big multinational corporations but ALEC refuses to tell us who’s there.”

ALEC has also sponsored a pair of trips for U.S. politicians to the Alberta oilsands — described as an “oilsands academy” — arranging meetings for the politicians with representatives from TransCanada and Devon Energy, as well as one environmental group, the Pembina Institute, in October 2012.

TransCanada said it doesn’t organize or fund these types of visits, but it assists by freeing up staff to explain operations at facilities.

Sandi Walker, an Alberta government spokeswoman from the provincial department of international and intergovernmental relations, said it hosted 54 trips to the oilsands in 2012, including the fall visit co-ordinated by ALEC as part of ongoing efforts to inform legislators and officials about the industry with “fact-based information” to allow key decision-makers to make informed decisions about energy. Each trip typically cost about $3,000, she said.

She said an ALEC representative had contacted Alberta to set up the meeting, explaining that the province maintains relations with a variety of stakeholders and organizations in the U.S.

Walker said the province is committed to being a leader in greenhouse gas reduction technology by renewing its climate change strategy so that it can effectively reduce emissions at the source, noting it has already implemented a price on carbon emissions for industry.

While TransCanada’s pipeline proposal has popped up on the agenda at multiple ALEC events in recent years, Taylor said that the company’s latest “ice cream social” reminded her of what happened last year when it hosted a similar event.

The ice cream started melting, and in a crowd of skeptics, she joked that she thought this might be accepted as evidence of global warming.

Brain circuit differences reflect divisions in social status (Science Daily)

Date: September 2, 2014

Source: University of Oxford

Summary: Life at opposite ends of primate social hierarchies is linked to specific brain networks, research has shown. The more dominant you are, the bigger some brain regions are. If your social position is more subordinate, other brain regions are bigger.

 

Group of young barbary macaques (stock image). The research determined the position of 25 macaque monkeys in their social hierarchy and then analyzed non-invasive scans of their brains that had been collected as part of other ongoing University research programs. The findings show that brain regions in one neural circuit are larger in more dominant animals. The regions composing this circuit are the amygdala, raphe nucleus and hypothalamus. Credit: © scphoto48 / Fotolia

Life at opposite ends of primate social hierarchies is linked to specific brain networks, a new Oxford University study has shown.

The importance of social rank is something we all learn at an early age. In non-human primates, social dominance influences access to food and mates. In humans, social hierarchies influence our performance everywhere from school to the workplace and have a direct influence on our well-being and mental health. Life on the lowest rung can be stressful, but life at the top also requires careful acts of balancing and coalition forming. However, we know very little about the relationship between these social ranks and brain function.

The new research, conducted at the University of Oxford, reveals differences between individual primates’ brains which depend on their social status. The more dominant you are, the bigger some brain regions are. If your social position is more subordinate, other brain regions are bigger. Additionally, the way the brain regions interact with each other is also associated with social status. The pattern of results suggests that successful behaviour at each end of the social scale makes specialised demands of the brain.

The research, led by Dr MaryAnn Noonan of the Decision and Action Laboratory at the University of Oxford, determined the position of 25 macaque monkeys in their social hierarchy and then analysed non-invasive scans of their brains that had been collected as part of other ongoing University research programs. The findings, published September 2 in the open access journal PLOS Biology, show that brain regions in one neural circuit are larger in more dominant animals. The regions composing this circuit are the amygdala, raphe nucleus and hypothalamus. Previous research has shown that the amygdala is involved in learning, and in processing social and emotional information. The raphe nucleus and hypothalamus are involved in controlling neurotransmitters and neurohormones, such as serotonin and oxytocin. The MRI scans also revealed that another circuit of brain regions, collectively called the striatum, was larger in more subordinate animals. The striatum is known to play a complex but important role in learning the value of our choices and actions.

The study also reports that the brain’s activity, not just its structure, varies with position in the social hierarchy. The researchers found that the strength with which activity in some of these areas was coupled together was also related to social status. Collectively, these results mean that social status is not only reflected in the brain’s hardware, it is also related to differences in the brain’s software, or communication patterns.

Finally, the size of another set of brain regions correlated not only with social status but also with the size of the animal’s social group. The macaque groups ranged in size between one and seven. The research showed that grey matter in regions involved in social cognition, such as the mid-superior temporal sulcus and rostral prefrontal cortex, correlated with both group size and social status. Previous research has shown that these regions are important for a variety of social behaviours, such as interpreting facial expressions or physical gestures, understanding the intentions of others and predicting their behaviour.

“This finding may reflect the fact that social status in macaques depends not only on the outcome of competitive social interactions but on social bonds formed that promote coalitions,” says Matthew Rushworth, the head of the Decision and Action Laboratory in Oxford. “The correlation with social group size and social status suggests this set of brain regions may coordinate behaviour that bridges these two social variables.”

The results suggest that just as animals assign value to environmental stimuli they may also assign values to themselves — ‘self-values’. Social rank is likely to be an important determinant of such self-values. We already know that some of the brain regions identified in the current study track the value of objects in our environment and so may also play a key role in monitoring longer-term values associated with an individual’s status.

The reasons behind the identified brain differences remain unclear, particularly whether they are present at birth or result from social differences. Dr Noonan said: “One possibility is that the demands of a life in a particular social position use certain brain regions more frequently and as a result those areas expand to step up to the task. Alternatively, it is possible that people born with brains organised in a particular way tend towards certain social positions. In all likelihood, both of these mechanisms will work together to produce behaviour appropriate for the social context.”

Social status also changes over time and in different contexts. Dr Noonan added: “While we might be top-dog in one circle of friends, at work we might be more of a social climber. The fluidity of our social position and how our brains adapt our behavior to succeed in each context is the next exciting direction for this area of research.”

 

Journal Reference:

  1. MaryAnn P. Noonan, Jerome Sallet, Rogier B. Mars, Franz X. Neubert, Jill X. O’Reilly, Jesper L. Andersson, Anna S. Mitchell, Andrew H. Bell, Karla L. Miller, Matthew F. S. Rushworth. A Neural Circuit Covarying with Social Hierarchy in Macaques. PLoS Biology, 2014; 12 (9): e1001940 DOI: 10.1371/journal.pbio.1001940

Time flies: Breakthrough study identifies genetic link between circadian clock and seasonal timing (Science Daily)

Date: September 4, 2014

Source: University of Leicester

Summary: New insights into day-length measurement in flies have been uncovered by researchers. The study has corroborated previous observations that flies developed under short days become significantly more cold-resistant compared with flies raised in long days, suggesting that this response can be used to study seasonal photoperiodic timing. Photoperiodism is the physiological reaction of organisms to the length of day or night, occurring in both plants and animals.

Sunrise. Photoperiodism is the physiological reaction of organisms to the length of day or night, occurring in both plants and animals. Credit: © tomaspic / Fotolia

Researchers from the University of Leicester have for the first time provided experimental evidence for a genetic link between two major timing mechanisms, the circadian clock and the seasonal timer.

New research from the Tauber laboratory at the University of Leicester, which will be published in the academic journal PLOS Genetics on 4 September, has corroborated previous observations that flies developed under short days become significantly more cold-resistant compared with flies raised in long days, suggesting that this response can be used to study seasonal photoperiodic timing.

Photoperiodism is the physiological reaction of organisms to the length of day or night, occurring in both plants and animals.

Dr Mirko Pegoraro, a member of the team, explained: “The ability to tell the difference between a long and short day is essential for accurate seasonal timing, as the photoperiod changes regularly and predictably along the year.”

The difference in cold response can be easily seen using the chill-coma recovery assay — in which flies exposed to freezing temperatures enter a reversible narcosis. The recovery time from this narcosis reflects how cold-adaptive the flies are.

The team has demonstrated that this response is largely regulated by the photoperiod — for example, flies exposed to short days (winter-like) during development exhibit shorter recovery times (more cold adapted) during the narcosis test.

Dr Eran Tauber from the University of Leicester’s Department of Genetics explained: “Seasonal timing is a key process for survival for most organisms, especially in regions with a mild climate. In a broad range of species, from plants to mammals, the annual change in day-length is monitored by the so-called ‘photoperiodic clock’.

“Many insects for example, including numerous agricultural pests, detect the shortening of the day during the autumn and switch to diapause — a developmental arrest — which allows them to survive the winter.

“Despite intensive study of the photoperiodic clock for the last 80 years, however, the underlying molecular mechanism is still largely unknown. This is in marked contrast to our understanding of the circadian clock that regulates daily rhythms.”

The team has tested mutant strains in which the circadian clock is disrupted and has found that the photoperiodic clock was also disrupted, providing the first experimental evidence for the role of the circadian clock in seasonal photoperiodic timing in flies.

The new research is based on an automated system, allowing the monitoring of hundreds of flies, which paves the way for new insights into our understanding of the genes involved in the photoperiodic response and seasonal timing.

Professor Melanie Welham, Executive Director for Science, at the Biotechnology and Biological Sciences Research Council (BBSRC), said: “This study shows an interesting genetic link between the circadian clock and the seasonal timer. The ubiquity of these clocks across so many species makes this an important discovery which will lead to a better understanding of these essential processes.”

 

Journal Reference:

  1. Mirko Pegoraro, Joao S. Gesto, Charalambos P. Kyriacou, Eran Tauber. Role for Circadian Clock Genes in Seasonal Timing: Testing the Bünning Hypothesis. PLOS Genetics, September 2014 DOI: 10.1371/journal.pgen.1004603

Clues to trapping carbon dioxide in rock: Calcium carbonate takes multiple, simultaneous roads to different minerals (Science Daily)

Date: September 4, 2014

Source: Pacific Northwest National Laboratory

Summary: Researchers used a powerful microscope that allows them to see the birth of calcium carbonate crystals in real time, giving them a peek at how different calcium carbonate crystals form.


An aragonite crystal — with its characteristic “sheaf of wheat” look — consumed a particle of amorphous calcium carbonate as it formed. Credit: Nielsen et al. 2014/Science

One of the most important molecules on earth, calcium carbonate crystallizes into chalk, shells and minerals the world over. In a study led by the Department of Energy’s Pacific Northwest National Laboratory, researchers used a powerful microscope that allows them to see the birth of crystals in real time, giving them a peek at how different calcium carbonate crystals form, they report in the September 5 issue of Science.

The results might help scientists understand how to lock carbon dioxide out of the atmosphere as well as how to better reconstruct ancient climates.

“Carbonates are most important for what they represent, interactions between biology and Earth,” said lead researcher James De Yoreo, a materials scientist at PNNL. “For a decade, we’ve been studying the formation pathways of carbonates using high-powered microscopes, but we hadn’t had the tools to watch the crystals form in real time. Now we know the pathways are far more complicated than envisioned in the models established in the twentieth century.”

Earth’s Reserve

Calcium carbonate is the largest reservoir of carbon on the planet. It is found in rocks the world over, shells of both land- and water-dwelling creatures, and pearls, coral, marble and limestone. When carbon resides within calcium carbonate, it is not hanging out in the atmosphere as carbon dioxide, warming the world. Understanding how calcium carbonate turns into various minerals could help scientists control its formation to keep carbon dioxide from getting into the atmosphere.

Calcium carbonate deposits also contain a record of Earth’s history. Researchers reconstructing ancient climates delve into the mineral for a record of temperature and atmospheric composition, environmental conditions and the state of the ocean at the time those minerals formed. A better understanding of its formation pathways will likely provide insights into those events.

To get a handle on mineral formation, researchers at PNNL, the University of California, Berkeley, and Lawrence Berkeley National Laboratory examined the earliest step to becoming a mineral, called nucleation. In nucleation, molecules assemble into a tiny crystal that then grows with great speed. Nucleation has been difficult to study because it happens suddenly and unpredictably, so the scientists needed a microscope that could watch the process in real time.

Come to Order

In the 20th century, researchers established a theory that crystals formed in an orderly fashion. Once the ordered nucleus formed, more molecules added to the crystal, growing the mineral but not changing its structure. Recently, however, scientists have wondered if the process might be more complicated, with other things contributing to mineral formation. For example, in previous experiments they’ve seen forms of calcium carbonate that appear to be dense liquids that could be sources for minerals.

Researchers have also wondered if calcite forms from less stable varieties or directly from calcium and carbonate dissolved in the liquid. Aragonite and vaterite are calcium carbonate minerals with slightly different crystal architectures than calcite and could represent a step in calcite’s formation. The fourth form, amorphous calcium carbonate, or ACC, which could be liquid or solid, might also be a reservoir for sprouting minerals.

To find out, the team created a miniature lab under a transmission electron microscope at the Molecular Foundry, a DOE Office of Science User Facility at LBNL. In this miniature lab, they mixed sodium bicarbonate (used to make club soda) and calcium chloride (similar to table salt) in water. At high enough concentrations, crystals grew. Videos of nucleating and growing crystals recorded what happened [URLs to come].

Morphing Minerals

The videos revealed that mineral growth took many pathways. Some crystals formed through a two-step process. For example, droplet-like particles of ACC formed, then crystals of aragonite or vaterite appeared on the surface of the droplets. As the new crystals formed, they consumed the calcium carbonate within the drop on which they nucleated.

Other crystals formed directly from the solution, appearing by themselves far away from any ACC particles. Multiple forms often nucleated in a single experiment — at least one calcite crystal formed on top of an aragonite crystal while vaterite crystals grew nearby.

What the team didn’t see in and among the many options, however, was calcite forming from ACC even though researchers widely expect it to happen. Whether that means it never does, De Yoreo can’t say for certain. But after looking at hundreds of nucleation events, he said it is a very unlikely event.

“This is the first time we have directly visualized the formation process,” said De Yoreo. “We observed many pathways happening simultaneously. And they happened randomly. We were never able to predict what was going to come up next. In order to control the process, we’d need to introduce some kind of template that can direct which crystal forms and where.”

In future work, De Yoreo and colleagues plan to investigate how living organisms control the nucleation process to build their shells and pearls. Biological organisms keep a store of mineral components in their cells and have evolved ways to make nucleation happen when and where needed. The team is curious to know how they use cellular molecules to achieve this control.

This work was supported by the Department of Energy Office of Science.

 

Journal Reference:

  1. Michael H. Nielsen, Shaul Aloni, and James J. De Yoreo. In Situ TEM Imaging of CaCO3 Nucleation Reveals Coexistence of Direct and Indirect Pathways. Science, September 5, 2014 DOI: 10.1126/science.1254051

World Meteorological Organization launches video series on climate change (Fapesp)

The goal is to raise awareness of the local impacts of global warming; the first episode forecasts the weather in Brazil in the year 2050

September 5, 2014

Agência FAPESP – The World Meteorological Organization (WMO) has released the first episodes of a video series of weather forecasts projected for the year 2050. The first edition presents the forecast for June 8 of that year in Brazil, delivered by Claudia Celli of RPC-TV, the TV Globo affiliate in Paraná.

The goal of the initiative is to raise public awareness of the local impacts of global climate change. The videos always feature television presenters well known in a given country, and the scenarios are consistent with those projected in the fifth report of the Intergovernmental Panel on Climate Change (IPCC).

For Brazil, the forecast calls for heavy rain in the south of the country and in the western Amazon. Monthly rainfall totals are expected to be exceeded in just a few days, increasing the risk of floods and landslides. For the Northeast and the eastern Amazon, the forecast is drought.

The WMO released the videos in support of the call by United Nations Secretary-General Ban Ki-moon for governments, business leaders and civil society leaders to agree to act on climate change at the UN climate summit scheduled for September 23, so as to keep the worst-case projections from coming true.

“Climate change is affecting the weather everywhere. It makes weather more extreme and disrupts established patterns. That means more disasters; more uncertainty,” says Ban Ki-moon in a message in the video.

The Brazilian edition also includes an interview by Celli with José Marengo, a researcher at the National Institute for Space Research (Inpe) and a member of the FAPESP Research Program on Global Climate Change (PFPMCG).

“In the tropical regions, in basically all of Brazil, temperature increases by the end of the century could exceed 4°C. In terms of rainfall, the pattern changes somewhat. The projections show rainfall decreasing in the eastern Amazon and the Northeast, and increasing in the western Amazon and the far south of Brazil,” says Marengo in the video.

“The response [to global climate change] has to be immediate. In the coming decades an international agreement, along the lines of the Kyoto Protocol, must be reached to reduce greenhouse gas emissions, because cutting those emissions is the only way to reduce the warming and reduce the impacts on the population,” the researcher adds.

The video with the forecast for Japan is already online. This Friday (September 5), the weather bulletin for Denmark will be released.

The other countries that will have videos with weather forecasts for 2050 are: Zambia, Burkina Faso, the United States, Bulgaria, the Philippines, Belgium, South Africa, Iceland, Germany and Tanzania.

The videos can be watched at www.youtube.com/user/wmovideomaster and at www.wmo.int/media/climatechangeimpact.html

Eunice Nodari, PhD in environmental history: ‘We cannot control the rain. Disasters, we can’ (O Globo)

The professor, a native of Rio Grande do Sul, was one of the speakers at the meeting that last month brought together researchers from the five BRICS countries

BY FÁTIMA FREITAS


Eunice Nodari attests that past environmental mistakes keep being repeated, points to paths for change and discusses the environmental history of different countries.
Photo: Fabio Seixo / Agência O Globo

“I was born in Sarandi, Rio Grande do Sul. My father was a small shopkeeper and wanted me to be ‘somebody in life’. Well, I managed to be the first in the family with a university degree… In the 1980s, I moved to Santa Catarina. I am 60 years old, have 3 children and 2 grandchildren, and am married to a professor of plant genetics”

Tell me something I don’t know.

Environmental history in Brazil is a new field. It began to gain strength in the 1990s, with strong influence from the United States. In 2001, I steered my career toward research in this area. We started with projects on the history of deforestation in the forests of southern Brazil and moved on to other pressing topics related to the environment. We soon managed to create a research line in Migrations and Environmental History in the graduate program of the Federal University of Santa Catarina (UFSC). It was pioneering work that has been yielding excellent results and also serves as a stimulus for other universities.

Besides UFSC, what are the main centers of environmental history in Brazil?

The highlight should go to the graduate program in Social History at UFRJ, along with UnB and UFMG. Together, these universities account for 64 doctoral theses. It is worth noting that my former advisees, now PhDs, are already professors at universities in different states. There, they too are building their own groups in the discipline, expanding the network.

You were a speaker at the symposium Dialogue in Environmental History: BRICS. What do the group’s member countries have in common on environmental issues?

The BRICS symposium brought together environmental researchers from the member countries with the goal of discussing ways to conduct joint research. It was a very important event, unprecedented in the field of history. Similarities and differences were debated. Floods, without a doubt, are recurring events in most of the five countries. In Brazil, Rio de Janeiro and Blumenau, for example, suffer from flooding. One shortcoming observed in the research carried out by me and by Lise Sedrez is that public policies invest very little in preventing the problems that storms bring every year. One thing is certain: we cannot control the rain, but disasters we can.

And in this case, what is the role of the environmental historian?

It is to analyze how environmental disasters, meaning those involving human intervention, are directly related to social, economic, cultural and even political problems, and to point to ways of preventing these processes from repeating themselves.

Are environmental mistakes of the past still frequent?

Unfortunately, the lessons inherited from the past are not being properly heeded, as the same mistakes continue to be made. Committing basic infractions, such as failing to respect riparian forest areas, which are important for containing floods and for water quality, shows a lack of respect not only for the environment but also for human life and for the planet’s other inhabitants.

Is environmental violence the result of a lack of legislation?

In my view, the most worrying forms of socio-environmental violence are the silent ones, those that happen every day and go unresolved. For example, the lack of basic sanitation for part of the population. We cannot blame unchecked degradation on a lack of legislation, since the 1988 Constitution itself includes rights related to the environment.

 

Your Brain on Metaphors (The Chronicle of Higher Education)

September 1, 2014

Neuroscientists test the theory that your body shapes your ideas


Chronicle Review illustration by Scott Seymour

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same.
The syntax is identical.
Does the brain notice, or care,
that the first is literal, the second
metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness.

Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are “visible” and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible, not just difficult, for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.

Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied—that human concepts are shaped by the physical features of human brains and bodies. “Our physiology provides the concepts for our philosophy,” Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: “The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience.”

Modern brain-scanning technologies make it possible to test such claims empirically. “That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand,” says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.

Neuroscientists agree on what happens with literal sentences like “The player kicked the ball.” The brain reacts as if it were carrying out the described actions. This is called “simulation.” Take the sentence “Harry picked up the glass.” “If you can’t imagine picking up a glass or seeing someone picking up a glass,” Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, “then you can’t understand that sentence.” Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.

But what about metaphorical sentences like “The patient kicked the habit”? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as “stopped”? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.

The evidence says it does. “When you read action-related metaphors,” says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, “you have activation of the motor area of the brain.” In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

But idioms are a major sticking point. If the brain understands language by simulating it, then it should do so even when sentences are not literal. Idioms are usually thought of as dead metaphors, that is, metaphors so familiar that they have become clichés. What does the brain do with “The villain kicked the bucket” (“The villain died”)? What about “The students toed the line” (“The students conformed to the rules”)? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine?

The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as “biting off more than you can chew” did not activate the motor cortex. Ana Raposo, then at the University of Cambridge, and her colleagues reported the same in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.

In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. “Imaging studies of embodiment in figurative language have not compared idioms and metaphors,” they wrote in a report. “Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors.” Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. “The field is new. The methods need to stabilize,” she says. “There are many different kinds of figurative language, and they may be importantly different from one another.”

Not only that, the nitty-gritty differences of procedure may be important. “All of these studies are carried out with different kinds of linguistic stimuli with different procedures,” Cuccio says. “So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference.”

To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: “When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief.”

But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. “They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics.”

That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take “kick the bucket.” Lakoff offers a theory of what it means using a scene from Young Frankenstein. “Mel Brooks is there and they’ve got the patient dying,” he says. “The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor—that you’re full of life, and life is a fluid. You kick the bucket, and it goes over.”

That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. “You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ ” Lakoff says. “Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything.”

Similarly, Lakoff says, when linguists ask people to write down the idiom “toe the line,” half of them write “tow the line.” That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex—in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process “toe the line.” In that case, averaging subjects together would be misleading.
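Lakoff’s averaging worry can be made concrete with a toy calculation. Suppose, purely for illustration, that half of the subjects mentally simulate “toe the line” with their feet and the other half, hearing “tow the line,” simulate it with their arms (the numbers and group sizes below are invented, not from any study):

```python
# Toy illustration of why averaging fMRI responses across subjects can mislead:
# two subgroups each show a strong, distinct activation, but the group
# average shows only a weak response everywhere.

foot_group = [1.0, 0.0]   # [foot-area activation, arm-area activation]
arm_group = [0.0, 1.0]    # "tow the line" readers simulate with their arms

n_foot, n_arm = 10, 10
subjects = [foot_group] * n_foot + [arm_group] * n_arm

avg_foot = sum(s[0] for s in subjects) / len(subjects)
avg_arm = sum(s[1] for s in subjects) / len(subjects)

print(avg_foot, avg_arm)  # 0.5 0.5 -- every individual shows full activation
                          # somewhere, yet the average is weak everywhere
```

The group average suggests a diffuse, half-strength response, even though every individual subject showed a full-strength response in one specific region.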

Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. “How many neurons are there in one pixel or one voxel?” he says. “About 125,000. They’re one point in the picture.” MRI lacks the necessary temporal resolution, too. “What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel.” What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.
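The resolution gap Lakoff describes is easy to quantify with the figures quoted above (125,000 neurons per voxel, a one-to-five-second fMRI time course, millisecond-scale neural firing); this back-of-the-envelope arithmetic is only an illustration of his argument, not a claim about any particular scanner:

```python
# Back-of-the-envelope version of Lakoff's resolution argument,
# using the figures quoted in the text.

neurons_per_voxel = 125_000  # Lakoff's estimate for one voxel
scan_ms = 1_000              # lower bound of the 1-5 second fMRI time course
spike_ms = 1                 # neural firing is roughly a thousand times faster

# How many spike-length time slices fit into one fMRI measurement?
slices_per_scan = scan_ms // spike_ms

# Distinct neuron-by-timeslice events folded into a single voxel reading:
events_per_reading = neurons_per_voxel * slices_per_scan

print(slices_per_scan)    # 1000
print(events_per_reading)  # 125000000
```

On these numbers, a single voxel reading compresses on the order of a hundred million neuron-by-millisecond events into one value, which is the sense in which “you don’t know what’s going on inside of that voxel.”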

Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.

However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, “the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened.” Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
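The mechanism Lakoff describes resembles Hebbian co-activation learning with an asymmetry favoring the node that fires first and most regularly. The sketch below is a toy illustration of that idea only; the learning rates, probabilities, and update rule are invented for this example and are not drawn from Narayanan’s actual models:

```python
# Toy sketch of an asymmetric Hebbian linkage between two "nodes":
# temperature (always being computed) and affection (computed only
# occasionally, e.g. while being cuddled).

import random

random.seed(0)

w_temp_to_affection = 0.0  # strengthened when the two co-activate
w_affection_to_temp = 0.0  # weakened because affection rarely fires alone first

LEARN = 0.01   # strengthening rate for the more regular firer
DECAY = 0.002  # weakening rate for the reverse direction

for step in range(10_000):
    temperature_active = True                  # the brain always computes temperature
    affection_active = random.random() < 0.1   # affection only occasionally

    if temperature_active and affection_active:
        # Co-activation: the regular firer (temperature) drives the link.
        w_temp_to_affection += LEARN
    elif temperature_active:
        # Temperature fires alone most of the time, weakening the reverse link.
        w_affection_to_temp = max(0.0, w_affection_to_temp - DECAY)

print(f"temperature -> affection: {w_temp_to_affection:.2f}")
print(f"affection -> temperature: {w_affection_to_temp:.2f}")
```

After many repetitions the forward link is strong and the reverse link is not, mirroring the claim that Affection is Warmth while Warmth is not Affection.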

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, “For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.”

But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. “If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system,” she says. “I don’t know why we have to replicate our physical limitations in other systems.”

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.

Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).