Tag archive: Matemática

New math and quantum mechanics: Fluid mechanics suggests alternative to quantum orthodoxy (Science Daily)

Date: September 12, 2014

Source: Massachusetts Institute of Technology

Summary: The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed. But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics. Now a professor of applied mathematics believes that pilot-wave theory deserves a second look.


Close-ups of an experiment conducted by John Bush and his student Daniel Harris, in which a bouncing droplet of fluid was propelled across a fluid bath by waves it generated. Credit: Dan Harris

The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed.

But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics.

John Bush, a professor of applied mathematics at MIT, believes that pilot-wave theory deserves a second look. That’s because Yves Couder, Emmanuel Fort, and colleagues at the University of Paris Diderot have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.

Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.

“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”

Tracking trajectories

Bush believes that the Copenhagen interpretation sidesteps the technical challenge of calculating particles’ trajectories by denying that they exist. “The key question is whether a real quantum dynamics, of the general form suggested by de Broglie and the walking drops, might underlie quantum statistics,” he says. “While undoubtedly complex, it would replace the philosophical vagaries of quantum mechanics with a concrete dynamical theory.”

Last year, Bush and one of his students — Jan Molacek, now at the Max Planck Institute for Dynamics and Self-Organization — did for their system what the quantum pioneers couldn’t do for theirs: They derived an equation relating the dynamics of the pilot waves to the particles’ trajectories.

In their work, Bush and Molacek had two advantages over the quantum pioneers, Bush says. First, in the fluidic system, both the bouncing droplet and its guiding wave are plainly visible. If the droplet passes through a slit in a barrier — as it does in the re-creation of a canonical quantum experiment — the researchers can accurately determine its location. By contrast, the only way to perform a measurement on an atomic-scale particle is to strike it with another particle, which changes its velocity.

The second advantage is the relatively recent development of chaos theory. Pioneered by MIT’s Edward Lorenz in the 1960s, chaos theory holds that many macroscopic physical systems are so sensitive to initial conditions that, even though they can be described by a deterministic theory, they evolve in unpredictable ways. A weather-system model, for instance, might yield entirely different results if the wind speed at a particular location at a particular time is 10.01 mph or 10.02 mph.
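
Lorenz's point is easy to reproduce numerically. Below is a minimal sketch (not a weather model) of his 1963 system: two runs whose starting points differ by one part in a million end up on entirely different trajectories. The constants and step size are the standard textbook choices.

```python
# A minimal sketch of sensitive dependence on initial conditions, using the
# classic Lorenz (1963) system rather than a full weather model. Two runs that
# start 1e-6 apart in one variable end up on completely different trajectories.
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def trajectory(x0, steps=50000, dt=0.001):
    state = np.array(x0, dtype=float)
    out = [state.copy()]
    for _ in range(steps):
        state = lorenz_step(state, dt)
        out.append(state.copy())
    return np.array(out)

a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0, 1.0, 1.0 + 1e-6])   # tiny perturbation, like 10.01 vs 10.02 mph

# The separation grows roughly exponentially before saturating at the attractor size.
for t_index in (0, 10000, 20000, 30000, 40000, 50000):
    print(t_index * 0.001, np.linalg.norm(a[t_index] - b[t_index]))
```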

The fluidic pilot-wave system is also chaotic. It’s impossible to measure a bouncing droplet’s position accurately enough to predict its trajectory very far into the future. But in a recent series of papers, Bush, MIT professor of applied mathematics Ruben Rosales, and graduate students Anand Oza and Dan Harris applied their pilot-wave theory to show how chaotic pilot-wave dynamics leads to the quantumlike statistics observed in their experiments.

What’s real?

In a review article appearing in the Annual Review of Fluid Mechanics, Bush explores the connection between Couder’s fluidic system and the quantum pilot-wave theories proposed by de Broglie and others.

The Copenhagen interpretation is essentially the assertion that in the quantum realm, there is no description deeper than the statistical one. When a measurement is made on a quantum particle, and the wave function collapses, the determinate state that the particle assumes is totally random. According to the Copenhagen interpretation, the statistics don’t just describe the reality; they are the reality.

But despite the ascendancy of the Copenhagen interpretation, the intuition that physical objects, no matter how small, can be in only one location at a time has been difficult for physicists to shake. Albert Einstein, who famously doubted that God plays dice with the universe, worked for a time on what he called a “ghost wave” theory of quantum mechanics, thought to be an elaboration of de Broglie’s theory. In his 1976 Nobel Prize lecture, Murray Gell-Mann declared that Niels Bohr, the chief exponent of the Copenhagen interpretation, “brainwashed an entire generation of physicists into believing that the problem had been solved.” John Bell, the Irish physicist whose famous theorem is often mistakenly taken to repudiate all “hidden-variable” accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. “It is a great mystery to me that it was so soundly ignored,” he said.

Then there’s David Griffiths, a physicist whose “Introduction to Quantum Mechanics” is standard in the field. In that book’s afterword, Griffiths says that the Copenhagen interpretation “has stood the test of time and emerged unscathed from every experimental challenge.” Nonetheless, he concludes, “It is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

“The work of Yves Couder and the related work of John Bush … provides the possibility of understanding previously incomprehensible quantum phenomena, involving ‘wave-particle duality,’ in purely classical terms,” says Keith Moffatt, a professor emeritus of mathematical physics at Cambridge University. “I think the work is brilliant, one of the most exciting developments in fluid mechanics of the current century.”

Journal Reference:

  1. John W.M. Bush. Pilot-Wave Hydrodynamics. Annual Review of Fluid Mechanics, 2014. DOI: 10.1146/annurev-fluid-010814-014506

More than formulas and operations, mathematics is art, researchers say (Agência Brasil)

Monday, September 1, 2014

That is the view of students at the Instituto Nacional de Matemática Pura e Aplicada

Mathematics is not only about applying formulas. Arranging numbers demands creativity. Showing primary and secondary school students the “true artistic beauty of mathematics” is the way to spark young people’s interest and improve the teaching of this much-feared subject. That is the view of students at the Instituto Nacional de Matemática Pura e Aplicada (Impa).

For Victor Bitarães, 19, from Contagem, Minas Gerais, one path is the Olimpíada Brasileira de Matemática das Escolas Públicas (Obmep), which earned him an honorable mention, a bronze medal and five golds, as well as a place in the scientific initiation programs (PICs), two international competitions and his current master’s scholarship, all before he even entered an undergraduate program.

“Mathematics teaching is quite poor, and that is not just the opinion of those of us who have been inside the classroom; it is confirmed by the international tests our country takes part in, where we sit in the last positions. I wouldn’t say the Obmep is the miracle cure for Brazilian mathematics education, but Impa is implementing some measures: there is the Obmep, there are the math clubs, there is a whole series of things.”

Victor sees the olympiad as a way to encourage study and to change mindsets. “Because it seems like a very difficult thing, but it isn’t at all; mathematics is the most natural thing in the world.”

Also doing a master’s at Impa while studying for an undergraduate mathematics degree at the Pontifícia Universidade Católica do Rio (PUC), Maria Clara Mendes Silva, 20, was discovered through the Obmep and the Olimpíada Brasileira de Matemática (OBM) in Pirajuba, Minas Gerais. She says she already liked school mathematics when she signed up for the Obmep for the first time, in sixth grade, and won an honorable mention.

“Teaching should focus more on things that encourage reasoning, instead of endlessly repeating calculations and formulas,” says Maria Clara.

After the honorable mention in her first Obmep, Maria Clara won gold in every other edition she entered. She also won a bronze, a silver and three golds at the OBM. She has taken part in two international olympiads as well, in 2011 and 2012, and says she fell in love with the beauty of mathematics. “I think the results are beautiful; it is knowing what one thing implies. I find that beautiful, that precision, that exactness.”

She intends to pursue a career as a researcher but has not yet settled on a field. As for how the subject is taught in Brazil, Maria Clara has several criticisms. In her view, what is learned at school is not mathematics.

“I think school mathematics is very mechanical, and that is terrible; it isn’t real mathematics. As you show what mathematics really is, which is thinking and deducing things, it naturally becomes more interesting. Even if someone doesn’t want to be a mathematician, they will like it more if it is well presented, even if only as a curiosity.”

Alan Anderson da Silva Pereira, 22, is already a doctoral student at Impa. Born in União dos Palmares, Alagoas, Alan began taking part in the Obmep in secondary school, at age 15. After one silver medal and two golds, he decided to study mathematics at the Universidade Federal de Alagoas (UFAL), but put the course on hold to do his master’s at Impa, which he has since completed. He says his teachers gave him a great deal of support and encouragement to keep studying.

“When I remember my primary school teacher, I feel inspired to keep studying,” he says.

A specialist in combinatorial probability, Alan says he would like to work as a professor and researcher at UFAL, “to give back everything my state has done for me, contribute to the university’s growth and improve conditions in Alagoas, which has such poor social indicators.”

Beyond analytical reasoning, Alan says he was drawn by the artistic beauty of mathematics. “There is even research showing that when a person looks at a painting a certain area of the brain is activated, and that same area is activated when a researcher solves a problem or sees a theorem they like; mathematics has an artistic side too,” he maintains.

“What keeps mathematics from being more popular is that, in general, only those who do this art enjoy it. I think that is because it requires a certain understanding that isn’t immediate, it takes effort, but once you grasp the workings you find it beautiful too,” Alan adds.

For him, interest in mathematics would be greater if teachers showed primary and secondary students the true beauty of the science. “One thing that could improve the culture of mathematics is having more people willing to present it in a more beautiful way. For example, if people with doctorates gave classes or talks, offering the researcher’s point of view, that might be very motivating. It is about showing it from above; from above it is beautiful, because if you only see it from the sides there may be some loose ends.”

(Akemi Nitahara/Agência Brasil)

How to choose? (Aeon)

When your reasons are worse than useless, sometimes the most rational choice is a random stab in the dark

by Michael Schulson

Illustration by Tim McDonagh

Michael Schulson is an American freelance writer. His work has appeared in Religion Dispatches, The Daily Beast, and Religion and Politics, among others. He lives in Durham, North Carolina.

We could start with birds, or we could start with Greeks. Each option has advantages.

Let’s flip a coin. Heads and it’s the Greeks, tails and it’s the birds.

Tails.

In the 1970s, a young American anthropologist named Michael Dove set out for Indonesia, intending to solve an ethnographic mystery. Then a graduate student at Stanford, Dove had been reading about the Kantu’, a group of subsistence farmers who live in the tropical forests of Borneo. The Kantu’ practise the kind of shifting agriculture known to anthropologists as swidden farming, and to everyone else as slash-and-burn. Swidden farmers usually grow crops in nutrient-poor soil. They use fire to clear their fields, which they abandon at the end of each growing season.

Like other swidden farmers, the Kantu’ would establish new farming sites every year in which to grow rice and other crops. Unlike most other swidden farmers, the Kantu’ choose where to place these fields through a ritualised form of birdwatching. They believe that certain species of bird – the Scarlet-rumped Trogon, the Rufous Piculet, and five others – are the sons-in-law of God. The appearances of these birds guide the affairs of human beings. So, in order to select a site for cultivation, a Kantu’ farmer would walk through the forest until he spotted the right combination of omen birds. And there he would clear a field and plant his crops.

Dove figured that the birds must be serving as some kind of ecological indicator. Perhaps they gravitated toward good soil, or smaller trees, or some other useful characteristic of a swidden site. After all, the Kantu’ had been using bird augury for generations, and they hadn’t starved yet. The birds, Dove assumed, had to be telling the Kantu’ something about the land. But neither he, nor any other anthropologist, had any notion of what that something was.

He followed Kantu’ augurers. He watched omen birds. He measured the size of each household’s harvest. And he became more and more confused. Kantu’ augury is so intricate, so dependent on slight alterations and is-the-bird-to-my-left-or-my-right contingencies that Dove soon found there was no discernible correlation at all between Piculets and Trogons and the success of a Kantu’ crop. The augurers he was shadowing, Dove told me, ‘looked more and more like people who were rolling dice’.

Stumped, he switched dissertation topics. But the augury nagged him. He kept thinking about it for ‘a decade or two’. And then one day he realised that he had been looking at the question the wrong way all the time. Dove had been asking whether Kantu’ augury imparted useful ecological information, as opposed to being random. But what if augury was useful precisely because it was random?

Tropical swidden agriculture is a fundamentally unpredictable enterprise. The success of a Kantu’ swidden depends on rainfall, pest outbreaks and river levels, among other factors. A patch of forest that might yield a good harvest in a rainy year could be unproductive in a drier year, or in a year when a certain pest spreads. And things such as pest outbreaks or the weather are pretty much impossible to predict weeks or months in the future, both for humans and for birds.

In the face of such uncertainty, though, the human tendency is to seek some kind of order – to come up with a systematic method for choosing a field site, and, in particular, to make decisions based on the conditions of the previous year.

Neither option is useful. Last year’s conditions have pretty much no bearing on events in the years ahead (a rainy July 2013 does not have any bearing on the wetness of July 2014). And systematic methods can be prey to all sorts of biases. If, for example, a Kantu’ farmer predicted that the water levels would be favourable one year, and so put all his fields next to the river, a single flood could wipe out his entire crop. For the Kantu’, the best option was one familiar to any investor when faced with an unpredictable market: they needed to diversify. And bird augury was an especially effective way to bring about that kind of diversification.
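
The logic can be made concrete with a toy simulation. The sketch below is not Dove's analysis; the flood probability, placement odds and yields are invented purely to illustrate why scattering fields at random protects against total loss in a way that a "reasoned" all-riverside strategy does not.

```python
# A toy Monte Carlo sketch (not Dove's analysis) of why randomised field
# placement acts like diversification. All numbers are made up for
# illustration: riverside plots yield more in good years but are wiped out by
# floods; scattered plots are individually worse but rarely all fail at once.
import random

N_FIELDS = 5
P_FLOOD = 0.2          # hypothetical chance that the river floods in a year
YEARS = 100000

def harvest_all_riverside():
    return 0.0 if random.random() < P_FLOOD else N_FIELDS * 1.0

def harvest_random_sites():
    flood = random.random() < P_FLOOD
    total = 0.0
    for _ in range(N_FIELDS):
        by_river = random.random() < 0.3        # random placement, some near river
        if by_river and flood:
            continue                            # that field is lost
        total += 1.0 if by_river else 0.8       # upland fields yield a bit less
    return total

riverside = [harvest_all_riverside() for _ in range(YEARS)]
scattered = [harvest_random_sites() for _ in range(YEARS)]

print("mean yields:", sum(riverside) / YEARS, sum(scattered) / YEARS)
print("years with total crop failure:",
      sum(h == 0 for h in riverside), sum(h == 0 for h in scattered))
```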

It makes sense that it should have taken Dove some 15 years to realise that randomness could be an asset. As moderns, we take it for granted that the best decisions stem from a process of empirical analysis and informed choice, with a clear goal in mind. That kind of decision-making, at least in theory, undergirds the ways that we choose political leaders, play the stock market, and select candidates for schools and jobs. It also shapes the way in which we critique the rituals and superstitions of others. But, as the Kantu’ illustrate, there are plenty of situations when random chance really is your best option. And those situations might be far more prevalent in our modern lives than we generally admit.

Over the millennia, cultures have expended a great deal of time, energy and ingenuity in order to introduce some element of chance into decision-making. Naskapi hunters in the Canadian province of Labrador would roast the scapula of a caribou in order to determine the direction of their next hunt, reading the cracks that formed on the surface of the bone like a map. In China, people have long sought guidance in the passages of the I Ching, using the intricate manipulation of 49 yarrow stalks to determine which section of the book they ought to consult. The Azande of central Africa, when faced with a difficult choice, would force a powdery poison down a chicken’s throat, finding the answer to their question in whether or not the chicken survived – a hard-to-predict, if not quite random, outcome. (‘I found this as satisfactory a way of running my home and affairs as any other I know of,’ wrote the British anthropologist E E Evans-Pritchard, who adopted some local customs during his time with the Azande in the 1920s).

The list goes on. It could – it does – fill books. As any blackjack dealer or tarot reader might tell you, we have a love for the flip of the card. Why shouldn’t we? Chance has some special properties. It is a swift, consistent, and (unless your chickens all die) relatively cheap decider. Devoid of any guiding mind, it is subject to neither blame nor regret. Inhuman, it can act as a blank surface on which to descry the churning of fate or the work of divine hands. Chance distributes resources and judges disputes with perfect equanimity.

Above all, chance makes its selection without any recourse to reasons. This quality is perhaps its greatest advantage, though of course it comes at a price. Peter Stone, a political theorist at Trinity College, Dublin, and the author of The Luck of the Draw: The Role of Lotteries in Decision Making (2011), has made a career of studying the conditions under which such reasonless-ness can be, well, reasonable.

‘What lotteries are very good for is for keeping bad reasons out of decisions,’ Stone told me. ‘Lotteries guarantee that when you are choosing at random, there will be no reasons at all for one option rather than another being selected.’ He calls this the sanitising effect of lotteries – they eliminate all reasons from a decision, scrubbing away any kind of unwanted influence. As Stone acknowledges, randomness eliminates good reasons from the running as well as bad ones. He doesn’t advocate using chance indiscriminately. ‘But, sometimes,’ he argues, ‘the danger of bad reasons is bigger than the loss of the possibility of good reasons.’

For an example, let’s return to the Kantu’. Beyond certain basic characteristics, there are no good reasons by which to choose one forest swidden site over another. You just don’t know what the weather and pests will look like. As a result, any reasons that a Kantu’ farmer uses will either be neutral, or actively harmful. The sanitising effect of augury cleans out those bad reasons. The Kantu’ also establish fields in swampland, where the characteristics of a good site are much more predictable – where, in other words, good reasons are abundant. In the swamps, as it happens, the Kantu’ don’t use augury to make their pick.

Thinking about choice and chance in this way has applications outside rural Borneo, too. In particular, it can call into question some of the basic mechanisms of our rationalist-meritocratic-democratic system – which is why, as you might imagine, a political theorist such as Stone is so interested in randomness in the first place.

Around the same time that Michael Dove was pondering his riddle in a Kantu’ longhouse, activists and political scientists were beginning to revive the idea of filling certain political positions by lottery, a process known as sortition.

The practice has a long history. Most public officials in democratic Athens were chosen by lottery, including the nine archons, who were drawn by lot from a significant segment of the population. The nobles of Renaissance Venice used to select their head of state, the doge, through a complicated, partially randomised process. Jean-Jacques Rousseau, in The Social Contract (1762), argued that lotteries would be the norm in an ideal democracy, giving every citizen an equal chance of participating in every part of the government (Rousseau added that such ideal democracies did not exist). Sortition survives today in the process of jury selection, and it crops up from time to time in unexpected places. Ontario and British Columbia, for example, have used randomly selected panels of Canadian citizens to propose election regulations.

Advocates of sortition suggest applying that principle more broadly, to congresses and parliaments, in order to create a legislature that closely reflects the actual composition of a state’s citizenship. They are not (just to be clear) advocating that legislators randomly choose policies. Few, moreover, would suggest that non-representative positions such as the US presidency be appointed by a lottery of all citizens. The idea is not to banish reason from politics altogether. But plenty of bad reasons can influence the election process – through bribery, intimidation, and fraud; through vote-purchasing; through discrimination and prejudices of all kinds. The question is whether these bad reasons outweigh the benefits of a system in which voters pick their favourite candidates.

By way of illustration: a handful of powerful families and influential cliques dominated Renaissance Venice. The use of sortition in selection of the doge, writes the historian Robert Finlay in Politics in Renaissance Venice (1980), was a means of ‘limiting the ability of any group to impose its will without an overwhelming majority or substantial good luck’. Americans who worry about unbridled campaign-spending by a wealthy few might relate to this idea.

Or consider this. In theory, liberal democracies want legislatures that accurately reflect their citizenship. And, presumably, the qualities of a good legislator (intelligence, integrity, experience) aren’t limited to wealthy, straight, white men. The relatively homogeneous composition of our legislatures suggests that less-than-ideal reasons are playing a substantial role in the electoral process. Typically, we just look at this process and wonder how to eliminate that bias. Advocates of sortition see conditions ripe for randomness.

It’s not only politics where the threat of bad reasons, or a lack of any good reasons, makes the luck of the draw seem attractive. Take college admissions. When Columbia University accepts just 2,291 of its roughly 33,000 applicants, as it did this year, it’s hard to imagine that the process was based strictly on good reasons. ‘College admissions are already random; let’s just admit it and begin developing a more effective system,’ wrote the education policy analyst Chad Aldeman on the US daily news site Inside Higher Ed back in 2009. He went on to describe the notion of collegiate meritocracy as ‘a pretension’ and remarked: ‘A lottery might be the answer.’

The Swarthmore College professor Barry Schwartz, writing in The Atlantic in 2012, came to a similar conclusion. He proposed that, once schools have narrowed down their applicant pools to a well-qualified subset, they could just draw names. Some schools in the Netherlands already use a similar system. ‘A lottery like this won’t correct the injustice that is inherent in a pyramidal system in which not everyone can rise to the top,’ wrote Schwartz. ‘But it will reveal the injustice by highlighting the role of contingency and luck.’ Once certain standards are met, no really good reasons remain to discriminate between applicant No 2,291 (who gets into Columbia) and applicant No 2,292 (who does not). And once all good reasons are eliminated, the most efficient, most fair and most honest option might be chance.
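
Mechanically, the scheme Schwartz and Aldeman describe is simple: apply a qualification bar, then draw names at random. A minimal sketch, with invented applicants and an invented cut-off:

```python
# A minimal sketch of the "set a bar, then draw names" admissions lottery
# described above. The applicant records and the cut-off are invented.
import random

applicants = [
    {"name": "A", "score": 93}, {"name": "B", "score": 88},
    {"name": "C", "score": 91}, {"name": "D", "score": 76},
    {"name": "E", "score": 95}, {"name": "F", "score": 89},
]

QUALIFIED_CUTOFF = 85   # hypothetical "well-qualified" threshold
SEATS = 3

pool = [a for a in applicants if a["score"] >= QUALIFIED_CUTOFF]
admitted = random.sample(pool, k=min(SEATS, len(pool)))
print([a["name"] for a in admitted])
```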

But perhaps not the most popular one. When randomness is added to a supposedly meritocratic system, it can inspire quite a backlash. In 2004, the International Skating Union (ISU) introduced a new judging system for figure-skating competitions. Under this system – which has since been tweaked – 12 judges evaluated each skater, but only nine of those votes, selected at random, actually counted towards the final tally (the ancient Athenians judged drama competitions in a similar way). Figure skating is a notoriously corrupt sport, with judges sometimes forming blocs that support each other’s favoured skaters. In theory, a randomised process makes it harder to form such alliances. A tit-for-tat arrangement, after all, doesn’t work as well if it’s unclear whether your partners will be able to reciprocate.

But the new ISU rules did more than simply remove a temptation to collude. As statisticians pointed out, random selection will change the outcome of some events. Backing their claims with competition data, they showed how other sets of randomly selected votes would have yielded different results, actually changing the line-up of the medal podium in at least one major competition. Even once all the skaters had performed, ultimate victory depended on the luck of the draw.
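
A quick simulation makes the statisticians' point. The sketch below uses invented scores and a deliberately simplified scoring rule (the real ISU aggregation trims and averages per element); it only shows that, for two skaters with near-identical marks, which nine of the twelve votes happen to be drawn can decide who wins.

```python
# A simplified simulation of the randomised-judging idea: 12 scores per skater,
# only 9 randomly chosen ones count. The scores are invented, and the real ISU
# aggregation is more involved; the point is only that different random draws
# can flip a close result.
import random

scores = {
    "Skater A": [5.8, 5.9, 5.7, 5.9, 5.8, 5.8, 5.9, 5.7, 5.8, 5.9, 5.6, 5.9],
    "Skater B": [5.9, 5.7, 5.8, 5.8, 5.9, 5.7, 5.9, 5.8, 5.7, 5.8, 5.9, 5.8],
}

def winner(rng):
    counted = {name: rng.sample(marks, 9) for name, marks in scores.items()}
    totals = {name: sum(marks) for name, marks in counted.items()}
    return max(totals, key=totals.get)

wins = {"Skater A": 0, "Skater B": 0}
for seed in range(10000):
    wins[winner(random.Random(seed))] += 1
print(wins)   # the same performances, yet each skater wins some fraction of draws
```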

There are two ways to look at this kind of situation. The first way – the path of outrage – condemns a system that seems fundamentally unfair. A second approach would be to recognise that the judging process is already subjective and always will be. Had a different panel of 12 judges been chosen for the competition, the result would have varied, too. The ISU system simply makes that subjectivity more apparent, even as it reduces the likelihood that certain obviously bad influences, such as corruption, will affect the final result.

Still, most commentators opted for righteous outrage. That isn’t surprising. The ISU system conflicts with two common modern assumptions: that it is always desirable (and usually possible) to eliminate uncertainty and chance from a situation; and that achievement is perfectly reflective of effort and talent. Sortition, college admission lotteries, and randomised judging run against the grain of both of these premises. They embrace uncertainty as a useful part of their processes, and they fail to guarantee that the better citizen or student or skater, no matter how much she drives herself to success, will be declared the winner.

Let me suggest that, in the fraught and unpredictable world in which we live, both of those ideals – total certainty and perfect reward – are delusional. That’s not to say that we shouldn’t try to increase knowledge and reward success. It’s just that, until we reach that utopia, we might want to come to terms with the reality of our situation, which is that our lives are dominated by uncertainty, biases, subjective judgments and the vagaries of chance.

In the novel The Man in the High Castle (1962), the American sci-fi maestro Philip K Dick imagines an alternative history in which Germany and Japan win the Second World War. Most of the novel’s action takes place in Japanese-occupied San Francisco, where characters, both Japanese and American, regularly use the I Ching to guide difficult decisions in their business lives and personal affairs.

As an American with no family history of divination, I’ll admit to being enchanted by Dick’s vision of a sci-fi world where people yield some of their decision-making power to the movements of dried yarrow stems. There’s something liberating, maybe, in being able to acknowledge that the reasons we have are often inadequate, or downright poor. Without needing to impose any supernatural system, it’s not hard to picture a society in which chance plays a more explicit, more accepted role in the ways in which we distribute goods, determine admissions to colleges, give out jobs to equally matched applicants, pick our elected leaders, and make personal decisions in our own lives.

Such a society is not a rationalist’s nightmare. Instead, in an uncertain world where bad reasons do determine so much of what we decide, it’s a way to become more aware of what factors shape the choices we make. As Peter Stone told me, paraphrasing Immanuel Kant, ‘the first task of reason is to recognise its own limitations’. Nor is such a society more riddled with chanciness than our own. Something, somewhere, is always playing dice. The roles of coloniser and colonised, wealthy and poor, powerful and weak, victor and vanquished, are rarely as predestined as we imagine them to be.

Dick seems to have understood this. Certainly, he embraced chance in a way that few other novelists ever have. Years after he wrote The Man in the High Castle, Dick explained to an interviewer that, quite apart from planning and the novelist’s foresight, he had settled key details of the book’s plot by flipping coins and consulting the I Ching.

14 July 2014

So now butter is good for you and meat is bad? (Jornal da Ciência)

JC e-mail 4973, June 16, 2014

Article by Luís Maurício Trambaioli for Jornal da Ciência

A recent study by Harvard researchers has been widely reported in the media. Based on a questionnaire given to nurses in 1991, it inferred that women eating one extra serving of red meat would have a 22% higher relative risk of breast cancer than women who eat less.

However, relative risk is not absolute risk, which can be calculated from the original data. The chance of developing the disease would be on the order of 1 in every 100,000 women, not 22 in every 100 women, as has been reported under the false impression that “relative risk” gives us. Moreover, this incidence appears precisely in the groups of women who smoke the most.
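
A worked example of the distinction being drawn (the 22% figure is the one reported for the study; the baseline risk below is hypothetical, chosen only to make the arithmetic concrete):

```python
# Worked example of relative vs. absolute risk. RR = 1.22 comes from the
# reported "22% higher risk"; the baseline risk below is hypothetical, used
# only to show the scale of the absolute change.
baseline_risk = 0.005            # hypothetical: 5 cases per 1,000 women
relative_risk = 1.22             # "22% higher risk" in the extra-serving group

exposed_risk = baseline_risk * relative_risk
absolute_increase = exposed_risk - baseline_risk

print(f"risk without extra serving : {baseline_risk:.4f}")
print(f"risk with extra serving    : {exposed_risk:.4f}")
print(f"absolute increase          : {absolute_increase:.4f} "
      f"(~{absolute_increase * 1000:.1f} extra cases per 1,000 women)")
# A 22% *relative* increase is nothing like "22 in every 100 women".
```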

Care is needed in how news of epidemiological studies carried out by a single group is reported. It would be better to obtain the opinion of specialists in the field and, preferably, results from further studies by other researchers, thereby avoiding bias in science. Otherwise we risk rash accusations like those of the 1980s, which demonized saturated fat exactly 30 years ago without scientific evidence to support the idea. That pushed humanity into the desperate consumption of fat-free foods, compensated for by eating more “complex carbohydrates” (starch) that are low in micronutrients. The result was the epidemic of diabetes and obesity (known abroad as “diabesity”), cardiovascular disease and cancer, among other conditions.

And now, what should we cut from the bacon: the fat or the meat?

Luís Maurício Trambaioli is an associate professor at the School of Pharmacy of UFRJ and an associate researcher at INMETRO

References:

BMJ – “Dietary protein sources in early adulthood and breast cancer incidence: prospective cohort study” – http://dx.doi.org/10.1136/bmj.g3437

Responses to the study: http://www.bmj.com/content/348/bmj.g3437?tab=responses

Time Magazine, March 26, 1984 – And Now the Bad News –
http://content.time.com/time/specials/2007/article/0,28804,1704183_1704257_1704499,00.html

Time Magazine, June 23, 2014 – Ending the War on Fat – http://time.com/2863227/ending-the-war-on-fat/
http://oglobo.globo.com/sociedade/saude/carne-vermelha-pode-aumentar-risco-de-cancer-de-mama-diz-estudo-de-harvard-12803653

We Have a Weather Forecast For Every World Cup Match, Even the Ones a Month Away (Five Thirty Eight)

It’s the moment every soccer fan’s been waiting for. The teams are out on the field and the match is about to begin. Then comes the rain. And then the thunder. And then the lightning. Enough of it that the match is delayed.

With the World Cup taking place in a country comprising several different ecosystems — a rain forest among them — you’re going to be hearing a lot about the weather in Brazil over the next month.

But we don’t have to wait until the day of — or even five days before — any given match to get a sense of what the weather will be. We already know the broad outlines of the next month of weather in Brazil — June and July have happened before, after all, and somebody kept track of whether it rained.

I did something like this for the Super Bowl in New York, when I provided a climatological forecast based on years’ worth of historical data. This isn’t the most accurate way to predict the weather — seven days before a match there will be far better forecasts — but it is a solid way to do it many weeks in advance.

I collected past weather data for the World Cup’s timespan (mid-June through mid-July) from WeatherSpark and Weather Underground for the observation stations closest to the 12 different World Cup sites. Keep in mind, the data for the different areas of Brazil hasn’t been collected for as long as it has in the United States. In some cases, we only have records since the late 1990s, which is about half as many years as I’d like to make the best climatological assessment. Still, history can give us an idea of the variability of the weather in Brazil.

You can see what high temperatures have looked like for the 12 World Cup sites in the table below. I’ve taken the average, as well as the 10th, 25th, 75th and 90th percentile for past high temperatures. This gives us a better idea of the range of what could occur than just the average. Remember, 20 percent of high temperatures have fallen out of this range.  (For games starting in the early evening, knock off a few degrees to get the expected average.)

[Table: average and 10th/25th/75th/90th-percentile historical high temperatures for the 12 World Cup host sites]
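
For readers who want to reproduce this kind of climatological range, here is a minimal sketch; the daily highs below are invented stand-ins for the station records pulled from WeatherSpark or Weather Underground.

```python
# A sketch of the climatological summary described above: take the historical
# daily high temperatures for one site during the tournament window and report
# the mean plus the 10th/25th/75th/90th percentiles. The sample data here are
# invented; real inputs would come from station records.
import numpy as np

# hypothetical mid-June-to-mid-July daily highs (deg F) for one host city
daily_highs = np.array([82, 85, 88, 79, 84, 90, 86, 83, 81, 87,
                        89, 84, 80, 85, 91, 78, 83, 86, 88, 84])

summary = {
    "mean": daily_highs.mean(),
    "p10": np.percentile(daily_highs, 10),
    "p25": np.percentile(daily_highs, 25),
    "p75": np.percentile(daily_highs, 75),
    "p90": np.percentile(daily_highs, 90),
}
print(summary)   # 80% of past highs fall between p10 and p90
```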

What we see is that the weather can be quite comfortable or hot, depending on the site. In the southern coastal region, we see high temperatures that average below 70 degrees Fahrenheit in the cities of Curitiba and Porto Alegre. (I’ve presented all temperatures in Fahrenheit.) It may seem odd to you that southern areas are actually coolest, but remember that this is the southern hemisphere, so everything’s topsy-turvy for a Northerner. It’s winter in Brazil, and climatology suggests that we shouldn’t be surprised if the high temperature is below 60 degrees at one of these sites.

Host sites for the 2014 World Cup. Credit: Wikimedia Commons

But most of the country is not like these two sites. Belo Horizonte and Brasilia reach the mid- to high 70s usually, but don’t go too much higher because of their elevation (2,720 feet for the former and 3,500 feet for the latter). From Rio de Janeiro northward, temperatures average 80 degrees or greater, but winds from the ocean will often keep them from getting out of hand.

The site tied for the highest median temperature is Manaus, which is also surrounded by the Amazonian rainforest, making it the most interesting site climatologically. There’s a 15 percent chance that it will rain in Manaus on any given day during the tournament. In small quantities, rain can help a passing game by making the grass slick, but if there’s too much precipitation, it can slow the ball significantly as the pitch gets waterlogged. And that doesn’t even get to the threat of lightning, which can halt a game completely.

But Manaus isn’t the site with the highest chance of rain. (Just the highest chance of thunderstorms.) To figure out what is, I looked at the average rainfall and thunderstorm tallies during the 1 p.m. to 6 p.m. hours during June and July in past years. From there I estimated the chance of rain during two-hour stretches in the afternoon and early evening, rather than for the entire day.

So here are approximations for each site on rain and thunderstorms during the games:

[Table: estimated chances of rain and thunderstorms at each host site during match hours]

It probably won’t rain during any given match, but if it does it’s likely to be in the sites closest to the tropics in the north and the humid subtropical climate in the south. Recife, for example, has the best chance of rain of any site in the country, in part because it’s right where a lot of different air masses combine, which makes the weather there somewhat more unpredictable.

Thunderstorms, on the other hand, rarely occur anywhere besides Manaus, where the chance of a thunderstorm in a given afternoon hour is in the double digits. Manaus is also where the United States will be playing against Portugal in its second match; climatology suggests it should be a muggy game.

The Americans’ other games are likely to be hot but dry. The United States’ first match, against Ghana, is in Natal on Monday, a city that normally is expected to offer a high temperature around 84 degrees, with a slightly cooler temperature by the evening game time. The current forecasts (based on meteorological data, rather than climatology) are calling for something around normal with around a 15 percent chance of rain, as we’d expect. The weather for the U.S. team’s third match, on the coast in Recife, should be about the same. Thunderstorms probably won’t interrupt the game, but rain is possible.

Most likely, though, the weather will hold up just fine. The optimistic U.S. fan can safely engage in blue-sky thinking — for the team’s chances, and for the skies above it, even if our coach is finding another way to rain on the parade.

Ants are more efficient at searching than Google, study finds (O Globo)

JC e-mail 4960, May 27, 2014

The study showed that the insects develop complex information systems to find food

We all learn from an early age that ants are prudent: while the cicada sings and plays guitar all summer, these small insects work to gather enough food for the whole winter. But according to a study published in the journal Proceedings of the National Academy of Sciences, they are not only far-sighted but also “much more efficient than Google itself”.

To reach this unusual conclusion, Chinese and German scientists used mathematical algorithms that try to find order in an apparently chaotic scenario by building complex information networks. Through formulas and equations, they found that ants develop ingenious paths when searching for food, dividing themselves into groups of “scouts” and “gatherers”.

That solitary ant you find wandering around the house in an apparently random pattern is, in fact, a scout, which releases pheromones along the way so that the gatherers can later follow the route in greater numbers. Building on that first trip, new, shorter and more efficient routes are refined. If the effort is repeated persistently, the distance between the insects and the food is drastically reduced.
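
The explore-and-reinforce loop described here is essentially the mechanism behind ant-colony optimization. The sketch below is not the model used in the PNAS study; the route lengths, evaporation rate and number of ants are invented, and it only illustrates how pheromone reinforcement plus evaporation concentrates traffic on the shorter path.

```python
# A minimal ant-colony-style sketch of the explore-and-reinforce loop the
# article describes (not the study's actual model). Two candidate routes to
# food; shorter routes get reinforced faster because pheromone is deposited in
# proportion to 1/length and evaporates each round.
import random

lengths = {"route_A": 10.0, "route_B": 6.0}   # hypothetical path lengths
pheromone = {"route_A": 1.0, "route_B": 1.0}
EVAPORATION = 0.1
N_ANTS = 50

for _ in range(100):                      # 100 foraging rounds
    for _ in range(N_ANTS):
        total = sum(pheromone.values())
        r = random.uniform(0, total)      # pick a route with probability
        route = "route_A" if r < pheromone["route_A"] else "route_B"
        pheromone[route] += 1.0 / lengths[route]   # reinforce the chosen path
    for route in pheromone:
        pheromone[route] *= (1.0 - EVAPORATION)    # evaporation forgets old info

print(pheromone)   # most of the pheromone ends up on the shorter route_B
```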

“While single ants seem to wander chaotically, they quickly become a line of ants crossing the floor in search of food,” study co-author Professor Jurgen Kurths told The Independent.

That is why, according to Kurths, the food-search process carried out by the insects is “much more efficient” than Google’s search engine.

The study’s mathematical models can equally be applied to other collective movements of animals, including humans. The tool could be useful, for example, for understanding people’s behaviour on social networks and even in crowded public transport.

(O Globo, with news agencies)
http://oglobo.globo.com/sociedade/ciencia/formigas-sao-mais-eficientes-em-busca-do-que-google-diz-pesquisa-12614920#ixzz32vCQx2oB

Important and complex systems, from the global financial market to groups of friends, may be highly controllable (Science Daily)

Date: March 20, 2014

Source: McGill University

Summary: Scientists have discovered that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

All complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled, researchers say. Credit: © Artur Marciniec / Fotolia

We don’t often think of them in these terms, but our brains, global financial markets and groups of friends are all examples of different kinds of complex networks or systems. And unlike the kind of system in your car, which has been intentionally engineered for humans to use, these systems are convoluted, and it is not obvious how to control them. Economic collapse, disease, and miserable dinner parties may result from a breakdown in such systems, which is why researchers have recently been putting so much energy into trying to discover how best to control these large and important systems.

But now two brothers, Profs. Justin and Derek Ruths, from Singapore University of Technology and Design and McGill University respectively, have suggested, in an article published in Science, that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

They reached this conclusion by surveying the inputs and outputs and the critical control points in a wide range of systems that appear to function in completely different ways. (The critical control points are the parts of a system that you have to control in order to make it do whatever you want — not dissimilar to the strings you use to control a puppet).

“When controlling a cell in the body, for example, these control points might correspond to proteins that we can regulate using specific drugs,” said Justin Ruths. “But in the case of a national or international economic system, the critical control points could be certain companies whose financial activity needs to be directly regulated.”
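
The paper's control-profile analysis is not reproduced here, but the idea of "critical control points" has a standard formal counterpart: the minimum set of driver nodes of a directed network, computable by maximum matching (Liu, Slotine and Barabási, 2011). The sketch below uses that related method, with networkx assumed and two toy networks, only to make the notion concrete.

```python
# A sketch of one standard way to find which nodes you must steer to control a
# directed network: the maximum-matching result of Liu, Slotine & Barabasi
# (2011). This is related to, but not the same as, the control-profile analysis
# in the Ruths & Ruths paper; it is included only to make "critical control
# points" concrete. Requires networkx.
import networkx as nx
from networkx.algorithms import bipartite

def minimum_driver_nodes(g: nx.DiGraph):
    """Nodes that need independent inputs under structural controllability."""
    # Bipartite copy: an out-side and an in-side node per original node,
    # with one bipartite edge per directed link.
    b = nx.Graph()
    out_side = {f"{v}_out" for v in g.nodes}
    b.add_nodes_from(out_side, bipartite=0)
    b.add_nodes_from((f"{v}_in" for v in g.nodes), bipartite=1)
    b.add_edges_from((f"{u}_out", f"{v}_in") for u, v in g.edges)

    matching = bipartite.hopcroft_karp_matching(b, top_nodes=out_side)
    matched_in = {m for m in matching if m.endswith("_in")}
    drivers = [v for v in g.nodes if f"{v}_in" not in matched_in]
    return drivers or [next(iter(g.nodes))]   # a fully matched network still needs one input

chain = nx.DiGraph([(1, 2), (2, 3), (3, 4)])        # a simple directed chain
star = nx.DiGraph([(0, 1), (0, 2), (0, 3), (0, 4)]) # one hub driving four nodes
print(minimum_driver_nodes(chain))   # [1]  -> steering the head controls the chain
print(minimum_driver_nodes(star))    # e.g. [0, 2, 3, 4] -> one hub cannot steer all leaves independently
```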

One grouping, for example, put organizational hierarchies, gene regulation, and human purchasing behaviour together, in part because in each, it is hard to control individual parts of the system in isolation. Another grouping includes social networks such as groups of friends (whether virtual or real), and neural networks (in the brain), where the systems allow for relatively independent behaviour. The final group includes things like food systems, electrical circuits and the internet, all of which function basically as closed systems where resources circulate internally.

Referring to these groupings, Derek Ruths commented, “While our framework does provide insights into the nature of control in these systems, we’re also intrigued by what these groupings tell us about how very different parts of the world share deep and fundamental attributes in common — which may help unify our understanding of complexity and of control.”

“What we really want people to take away from the research at this point is that we can control these complex and important systems in the same way that we can control a car,” says Justin Ruths. “And that our work is giving us insight into which parts of the system we need to control and why. Ultimately, at this point we have developed some new theory that helps to advance the field in important ways, but it may still be another five to ten years before we see how this will play out in concrete terms.”

Journal Reference:

  1. Justin Ruths and Derek Ruths. Control Profiles of Complex Networks. Science, 2014. DOI: 10.1126/science.1242063

Speech about dreams may help in the diagnosis of mental illness (Fapesp)

Brazilian researchers develop a technique for the mathematical analysis of dream reports that can help identify symptoms of schizophrenia and bipolar disorder (image: publicity)

March 17, 2014

By Elton Alisson

Agência FAPESP – The clue left by Sigmund Freud (1856-1939) in his 1899 book “The Interpretation of Dreams”, that “dreams are the royal road to the unconscious”, a cornerstone of psychoanalysis, may also be useful in psychiatry, in the clinical diagnosis of mental disorders such as schizophrenia and bipolar disorder, among others.

The finding comes from a group of researchers at the Brain Institute of the Universidade Federal do Rio Grande do Norte (UFRN), in collaboration with colleagues from the Physics Department of the Universidade Federal de Pernambuco (UFPE) and from the Research, Innovation and Dissemination Center for Neuromathematics (Neuromat), one of the CEPIDs funded by FAPESP.

They developed a technique for the mathematical analysis of dream reports that could, in the future, assist in the diagnosis of psychoses.

The technique was described in an article published in January in Scientific Reports, an open-access journal from the Nature group.

“The idea is for this relatively simple and inexpensive technique to be used as a tool to help psychiatrists make more precise clinical diagnoses of patients with mental disorders,” Mauro Copelli, a professor at UFPE and one of the study’s authors, told Agência FAPESP.

According to Copelli, who completed his master’s and doctorate partly with FAPESP scholarships, despite century-old efforts to make the classification of mental disorders more precise, the current method for diagnosing psychoses has been harshly criticized.

That is because it still suffers from a lack of objectivity and from the fact that most mental disorders have no biomarkers (biometric indicators) that could help psychiatrists diagnose them more accurately.

In addition, patients with schizophrenia or bipolar disorder often present common psychotic symptoms, such as hallucinations, delusions, hyperactivity and aggressive behavior, which can compromise the accuracy of the diagnosis.

“The diagnosis of psychotic symptoms is highly subjective,” said Copelli. “That is precisely why the latest version of the Diagnostic and Statistical Manual of Mental Disorders [published by the American Psychiatric Association in 2013] came under such heavy attack.”

To develop a quantitative method for assessing psychiatric symptoms, the researchers recorded, with the participants’ consent, the dream reports of 60 volunteer patients seen at the psychiatric outpatient clinic of a public hospital in Natal, Rio Grande do Norte.

Some of the patients had already been diagnosed with schizophrenia, others with bipolar disorder, and the rest, who formed the control group, showed no symptoms of mental disorders.

The patients’ dream reports, given to psychiatrist Natália Bezerra Mota, a doctoral student at UFRN and the study’s first author, were transcribed.

The sentences in the patients’ speech were converted, by software developed by researchers at the Brain Institute, into graphs: mathematical structures similar to diagrams in which each word spoken by the patient is represented by a point, or node, much like a stitch in a line of crochet.

When they analyzed the graphs of the dream reports from the three groups of patients, the researchers observed very clear differences among them.

The size of the graphs, in terms of the number of edges or links, and the connectivity (relationships) between their nodes varied among patients diagnosed with schizophrenia, patients with bipolar disorder and patients without mental disorders, the researchers reported.

“Patients with schizophrenia, for example, give reports that, when represented as graphs, have fewer connections than those of the other groups of patients,” said Mota.
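
In outline, the representation described above can be sketched as follows: each word becomes a node and each pair of consecutive words a directed edge, after which size and connectivity measures can be read off the graph. This follows the general speech-graph idea reported by Mota and colleagues but is a simplification, not their software; networkx is assumed.

```python
# A simplified sketch of the word-graph idea described above: each word in a
# transcribed report becomes a node and each pair of consecutive words a
# directed edge; the graph's size and connectivity can then be measured.
# This is not the study's software, only an illustration. Requires networkx.
import networkx as nx

def speech_graph(report: str) -> nx.DiGraph:
    words = report.lower().split()
    g = nx.DiGraph()
    g.add_nodes_from(words)
    g.add_edges_from(zip(words, words[1:]))   # consecutive-word links
    return g

report = ("i was walking in a garden and the garden turned into my old school "
          "and in the school i was walking again looking for a door")
g = speech_graph(report)

print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("largest (weakly) connected component:",
      len(max(nx.weakly_connected_components(g), key=len)))
print("largest strongly connected component:",
      len(max(nx.strongly_connected_components(g), key=len)))
```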

Differences in speech

According to the researchers, differentiating patients through graph analysis of their dream reports was possible because their speech characteristics are also quite distinct.

Schizophrenic patients tend to speak laconically and with little digression (straying from the subject), which explains why the connectivity and the number of edges in the graphs of their reports are lower than those of bipolar patients.

Patients with bipolar disorder, in turn, tend to show the opposite of digression, a symptom called logorrhea or verbosity, stringing together muddled, disconnected sentences, known in psychiatry as “flight of ideas”.

“We found an important correlation between these measures obtained through graph analysis and the negative and cognitive symptoms measured by the psychometric scales used in clinical psychiatric practice,” said Mota.

By turning these distinctive speech characteristics into graphs, it is possible to build a computational classifier capable of helping psychiatrists diagnose mental disorders, Copelli noted.

“Everything in the speech of patients with mental disorders that has an apparently geometric meaning in the graph can be quantified mathematically and help classify whether a patient is schizophrenic or bipolar, with a success rate comparable to or even better than the subjective psychiatric scales used for this purpose,” he said.
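
As an illustration of how such graph measures could feed a classifier (this is not the study's actual pipeline; the feature values, labels and choice of classifier are invented, and scikit-learn is assumed):

```python
# An illustrative sketch (not the study's pipeline) of how graph measures such
# as edge count and connected-component sizes could feed a standard classifier.
# The feature values and labels below are invented; scikit-learn is assumed.
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# rows: [n_edges, largest_connected_component, largest_strongly_connected_component]
features = [
    [14, 12, 2], [16, 13, 3], [15, 12, 2],   # invented "schizophrenia-like" reports
    [34, 25, 9], [31, 24, 8], [36, 27, 10],  # invented "bipolar-like" reports
    [27, 21, 6], [25, 20, 5], [28, 22, 6],   # invented "control" reports
]
labels = ["S", "S", "S", "B", "B", "B", "C", "C", "C"]

scores = cross_val_score(GaussianNB(), features, labels, cv=3)
print("cross-validated accuracy:", scores.mean())
```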

The researchers’ goal is to assess a larger number of patients and calibrate the algorithm (sequence of commands) of the software that turns dream reports into graphs, so that it can be used on a large scale in clinical psychiatric practice.

Although initially used for the diagnosis of psychoses, the technique could be extended to several other purposes, Mota said.

“It could be used, for example, to seek more information about language structure in the analysis of reports from people not only with psychotic symptoms, but also in different situations of cognitive decline, such as dementia, or of cognitive growth, as during learning and the development of speech and writing,” the researcher said.

The role of dreams

During the study, the researchers also built and analyzed graphs of reports about the activities the volunteer patients had carried out on the day before the dream.

These everyday accounts, called “waking reports”, produced graphs that were not as indicative of the type of mental disorder as the dream reports, Copelli said.

“We were able to distinguish schizophrenic patients from the other groups using graph analysis of the waking reports, but we could not distinguish bipolar patients well from the control group that way,” he said.

The researchers still do not know why graphs of dream reports are more informative about psychosis than graphs of waking reports.

Some hypotheses examined in Mota’s doctoral research relate to the physiological mechanisms of memory formation.

“We believe that, because they are more transient memories, dreams may be more cognitively demanding and have a greater emotional impact than memories of everyday life, and this may make their reports more complex,” the researcher said.

“Another hypothesis is that a dream is an event experienced exclusively by one person, without being shared with others, and for that reason it may be more complex to explain than an everyday activity,” she said.

To test these hypotheses, the researchers plan to expand data collection by applying questionnaires to patients with a recorded first psychotic episode, in order to clarify whether other types of reports, such as old memories, can match dreams in terms of psychiatric information. They also want to determine whether the method can be used to identify signs or clusters of symptoms (prodromes) and to track the effects of medication.

“We intend to investigate in the laboratory, with high-density electroencephalography and various techniques for measuring semantic distances and analyzing graph structure, how stimuli received immediately before sleep influence the dream reports produced upon waking,” said Sidarta Ribeiro, a researcher at the UFRN Brain Institute.

“We are particularly interested in the distinct effects of images with emotional value,” said Ribeiro, who is also an associate researcher at Neuromat.

The article “Graph analysis of dream reports is especially informative about psychosis” (doi: 10.1038/srep03691), by Mota and others, can be read in Scientific Reports at www.nature.com/srep/2014/140115/srep03691/full/srep03691.html.

Luciana Vanni Gatti: On the trail of carbon (Fapesp)

MARCOS PIVETTA and RICARDO ZORZETTO | Issue 217 – March 2014

© LÉO RAMOS


Emoldurado por um nascer do sol no município acreano de Senador Guiomard, um castanheiro-do-pará ocupou o primeiro plano da capa de 6 de fevereiro da revista científica inglesa Nature, uma das mais prestigiadas do mundo. A árvore tropical simbolizava a Amazônia, tema central de um artigo que teve como autor principal Luciana Vanni Gatti, 53 anos, coordenadora do Laboratório de Química Atmosférica do Instituto de Pesquisas Energéticas e Nucleares (Ipen). Luciana e os coautores do trabalho calcularam o chamado balanço de carbono da floresta amazônica  que é uma comparação entre a quantidade de carbono na forma de dióxido de carbono (CO2) emitida e a absorvida pela bacia Amazônica – em dois anos consecutivos que apresentaram temperaturas acima da média dos últimos 30 anos, mas uma variação significativa no regime de chuvas.

O ano de 2010 foi marcado por uma estiagem extrema e o de 2011 por chuvas acima da média. “Vimos que a Amazônia se comportou como uma fonte de carbono no ano seco quando também levamos em conta as queimadas”, diz Luciana, que dividiu a coautoria do artigo com Emanuel Gloor, da Universidade de Leeds, na Inglaterra, e John Miller, da Universidade do Colorado, em Boulder, nos Estados Unidos. “Mas, no ano úmido, seu balanço de carbono foi próximo a neutro, a quantidade emitida e a absorvida foram mais ou menos equivalentes.” Os dados do estudo sobre gases atmosféricos foram obtidos por uma iniciativa comandada desde 2010 pela brasileira, cujos esforços de pesquisa fazem parte do Amazonica (Amazon Integrated Carbon Analysis), um grande projeto internacional coordenado por Gloor. A cada duas semanas, pequenos aviões alçam voo de quatro localidades amazônicas (Santarém, Alta Floresta, Rio Branco e Tabatinga) e coletam amostras de ar ao longo de um perfil vertical descendente, entre 4,4 quilômetros de altitude e 200 ou 300 metros do solo. As amostras são enviadas para o laboratório de Luciana no Ipen onde são quantificados gases de efeito estufa, entre outros. No trabalho foram estudados o CO2, o monóxido de carbono (CO) e o hexafluoreto de enxofre (SF6).

The results were seen as worrying because they suggest that the Amazon's capacity to absorb CO2, the main greenhouse gas, from the atmosphere appears to be tied to the amount of rainfall. In dry years such as 2010, more fires occur both in forested areas and in already deforested ones, releasing large amounts of CO, and water stress apparently reduces the plants' rates of photosynthesis, making them draw less CO2 from the atmosphere. In this interview, Gatti discusses the results and implications of her study and tells a bit of her career story.

Did you expect the paper to end up on the cover of Nature?
More because of the importance of the topic than the quality of the work, I did expect it to be published, but I never imagined it would make the cover. I go to many conferences and meet people from all over the world talking about the Amazon. These people have no idea what the region is like. They have never been here, yet they keep running models, extrapolating local data as if it were representative of the whole region. I see widely varying modeling results, showing the Amazon as anything from a large absorber to a large emitter of CO2. The Amazon matters in the global carbon balance. That is why finding out its weight in that balance is tremendously important. What do people talk about most today? Climate change. The planet is becoming hostile to human beings. But initially we intended to publish in Science.

Why?
That was my goal because [Simon] Lewis [a researcher at the University of Leeds] published a paper in Science in 2010 with conclusions we wanted to contest. He said that the Amazon had emitted, that year, the equivalent of the fossil-fuel burning of the entire United States. It was a modeling study and had arrived at a greatly exaggerated result. I wanted to publish in Science to respond to Lewis. We even submitted a version of our article to the journal, at the time with only the 2010 data, a very dry year. It was a study that determined the carbon balance for that year. Science said it was a relevant study but too technical in scope, outside its editorial line. They did not even send the article out to referees and suggested we submit it to a more specialized journal. But when we analyzed the 2011 data, we found a situation completely different from that of 2010. Understanding why the effects on the carbon balance were so different in 2010 and 2011 was what made Nature like the paper. That is why I favor long-term studies. If I had run a single campaign in 2010, I would have concluded that the Amazon behaves that way every year.

In an editorial, Nature said the article's results are bad news. Do you agree with that assessment?
I do. It is very sad news. We did not expect the Amazon to show such low carbon absorption. No one had ever measured this the way we have now. There are several studies that extrapolate a regional average from a single local data point. But is taking an average valid? We already know there is a great deal of variation within the Amazon.

What was the conventional wisdom about the region's carbon balance?
That the Amazon absorbed around half a petagram of carbon per year; that was the estimate. Everyone thinks the Amazon is a great carbon sink. But in 2010, because of water stress, the plants photosynthesized less and their mortality increased. So, on average, the forest absorbed only 0.03 petagram of carbon. Very little. That is equivalent to 30 million metric tons of carbon, a value equal to the study's margin of error. Because of deliberate burning and forest fires, the Amazon emitted 0.51 petagram of carbon (510 million metric tons). So, in the carbon balance, emission far exceeded absorption. It is terrible news. In 2011, which was wetter, the balance was practically neutral [the forest emitted 0.30 petagram of carbon but absorbed 0.25 petagram, eight times more than in the previous year].
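As a rough sanity check of the figures quoted above, a few lines of Python reproduce the net balance for each year (values in petagrams of carbon, Pg C, as given in the interview; the sign convention, positive meaning a net source to the atmosphere, is an assumption made here for illustration).

```python
# Net carbon balance of the Amazon basin, using the figures quoted in the interview.
# Positive numbers mean carbon released to the atmosphere (net source).
emissions = {"2010": 0.51, "2011": 0.30}   # Pg C emitted (fires included)
absorption = {"2010": 0.03, "2011": 0.25}  # Pg C absorbed by the forest

for year in ("2010", "2011"):
    net = emissions[year] - absorption[year]
    print(f"{year}: net balance = {net:+.2f} Pg C "
          f"({net * 1000:.0f} million tonnes)")

# Prints a net source of about +0.48 Pg C for 2010 and +0.05 Pg C for 2011,
# i.e. a strong source in the dry year and a near-neutral balance in the wet year.
```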

Is the amount of rainfall the main factor for understanding the Amazon's carbon balance?
Not exactly. Our study shows that water availability is a more important factor than temperature. It is a matter of weight. But that does not mean temperature is unimportant. The big difference between 2010 and 2011 was water, but water is also linked to temperature variation. It is hard to give a definitive answer. This finding indicates that you cannot build a climate prediction model that takes only the rise in temperature into account. You have to include all the consequences of that temperature rise. An overly simplistic model will end up far from what will actually happen in the future.

Were the 2010 drought and the 2011 rains abnormal for the Amazon?
We cannot say that the 2011 rainfall was extreme, because it did not exceed the historical maximum. It was a rainy year, above average, but not unusual; it is a matter of definition. There have been other years with similar precipitation levels. The 2010 drought was extreme, unusual, below the historical minimum. Even so, I cannot say that the absorption capacity in 2011 represents the average of a rainy year. In 2010 the forest had suffered greatly from the drought, and in the following year the vegetation could still have been feeling the impact of that enormous stress. One year's history may influence the next. It may be that, after a rainy year, carbon sequestration is larger if it is followed by a second rainy year.

So data from a single year should not be analyzed in isolation.
Exactly! That is why we have to run long-term studies. When I took part in field campaigns and saw that there was this year-to-year variability, I gave up on that kind of study. I do see the advantage of bringing together [in campaigns] many researchers from various fields, with one group's studies complementing another's. Advances in some aspects of knowledge are considerable in that kind of setting, but not in the sense of obtaining a meaningful value that represents the whole Amazon. In that respect there is a great deal of variability. You cannot study one month in the dry season and another in the rainy season, assume those periods represent everything that happens in the dry and wet periods, and extend the result to the entire year. That number could be double or half the real value, for example. During our 10-year study in Santarém I saw that great variability. I am a perfectionist. If I know my number may be badly wrong, that does not satisfy me.

With data from only two years, is it safe to draw any conclusion about the carbon balance in the Amazon?
Since 2010 was so different from 2011, we concluded that not even four or five years, which was our original plan, will give us a conclusive average. We are now looking for funding to keep this project going for a decade. Is a 10-year average enough? Yes; studies of the carbon cycle are more conclusive when they are decadal. But it is important to understand that the Amazon is being altered, both by humans and by the climate, which humans are also altering. So whatever average result we find may differ from what happened in the previous decade and the one before that. We are going to submit a project to continue this study. But, besides funding for the measurements, we also need funding for a team to run the project. I am the only Ipen employee working on it; everyone else is paid by the projects involved in this study. Without such a well-tuned team, this incredible project would not exist. It is a huge effort by many people.

Some studies suggest that the increase in greenhouse gases may lead some plants to photosynthesize more. Couldn't that change the Amazon's carbon balance in the long run?
That is not the whole story. It is true that more CO2 in the atmosphere stimulates plants to photosynthesize more. But there are other mechanisms. Under water stress, the roots absorb less water. The plant slows its metabolism and therefore absorbs less carbon. What we know for sure is that the forest's capacity to absorb carbon decreases as water availability decreases.

How can air collected at four points in the Amazon represent the atmosphere of that entire, enormous region?
At any of the points, the samples collected during the flights represent an air mass that has passed over several parts of the Amazon, from the Brazilian coast to the collection point and, in the case of Santarém, even over stretches of the Northeast. If it took seven days to reach the collection point, it represents a week, not just the moment at which it was sampled. It carries the whole history of the path it traveled within the Amazon over those seven days, of all the emissions and absorptions that occurred along the way. So we are not collecting an air sample that refers to a single hour. We are collecting the history of a column of air that traveled all the way from the Brazilian coast. We calculate the path each air mass took before being collected at each sampled altitude.
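A minimal sketch of the idea behind such a column budget, assuming the simple picture described above and nothing more: the CO2 enhancement of the sampled column relative to the background air entering from the coast, divided by the travel time of the air mass, gives an average surface flux along the path. Every number and the unit bookkeeping below are invented for illustration; this is not the project's actual processing.

```python
# Hypothetical column mass-balance illustration; all values are invented.
column_co2_site = 102.5        # column-average CO2 over the site (ppm, illustrative)
column_co2_background = 101.8  # background column entering from the Atlantic coast (ppm)
column_air_mass = 3.0e4        # assumed moles of air per m^2 in the sampled column
travel_time_s = 7 * 24 * 3600  # seven days of travel from the coast to the site

# Enhancement (ppm) -> mole fraction -> moles of CO2 per m^2 added along the path
delta_ppm = column_co2_site - column_co2_background
delta_mol_per_m2 = delta_ppm * 1e-6 * column_air_mass

flux = delta_mol_per_m2 / travel_time_s  # average flux along the trajectory
print(f"average surface flux along the path: {flux:.2e} mol CO2 m^-2 s^-1")
```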

Doesn't this method have any limitations?
The big limitation is that we only sampled up to 4.4 kilometers of altitude. What happens above that is outside our measurement range. A convective cloud can carry the air that was below upward, and vice versa. That can push part of our air column above our flight ceiling, and in that case we lose information. This is the largest source of error in our method. Ideally we would fly up to 8 or 12 kilometers. We began doing that at the beginning of 2013 at the study site near Rio Branco, and the results are very encouraging. In that altitude range, over one year, we have not observed variation significant enough to indicate a large error. That is very encouraging.

Do the four air-sampling sites behave the same way?
The site near Santarém is unlike anything else in terms of results. Think about its area of influence. Every coastline has a high population density, and this is the part of the Amazon region with the most people relative to its area. The data we collect there are influenced by urban, anthropogenic and fossil-fuel sources that do not show up as much at the other Amazonian sites. There may even be influence from pollution coming from cities in the Northeast. Sometimes, in the rainy season, we observe carbon emission at that site, while at the other three sites we monitor there is absorption.

What explains that difference?
It may be the human activities in areas near Santarém. The Guianas lie above the equator: when it is the rainy season in Brazil, it is the dry season there. There are also the fires and the human activities in the cities near our coast. People tell me to stop the measurements at Santarém, saying it does not represent the Amazon. But I have a 14-year historical series. Brazil has no other historical series of measurements of this kind. If we stop measuring at Santarém… I am in a dilemma.

But doesn't Santarém represent an important part of the eastern Amazon?
In Santarém's area of influence, 40% is forest. If you consider the area of the entire Amazon forest, Santarém covers a “little slice” of 20% (in quotation marks, because it is huge). We only discovered this when we began studying the other side of the Amazon. The samples taken aboard the aircraft are the result of the whole history of what happened before the air got there. With the exception of carbon monoxide, which comes from forest burning, there is no way to know the contribution of each carbon source. In the case of Santarém, that approach does not work very well. We believe part of the carbon monoxide comes from other kinds of biomass burning, perhaps from fossil fuels, and not essentially from the burning of forest vegetation.

How did your work in the Amazon begin?
I took part in the LBA [Large-Scale Biosphere-Atmosphere Experiment in Amazonia] from the start, in 1998. I did field campaigns. Ten years ago I began the systematic measurements at Santarém. Until then, the air-profile samples went to the United States to be analyzed at a laboratory of NOAA [National Oceanic and Atmospheric Administration]. In 2004 I set up my laboratory at Ipen and the samples stopped going to the United States. My laboratory is a replica of the NOAA laboratory, the best greenhouse-gas laboratory on the planet. I did everything exactly the same; we imported a replica of their laboratory. We put everything in boxes and brought it here. Everything, absolutely everything, from the mouse to the shelving, the whole system crated up. We can measure CO2, CH4, N2O, CO, SF6 and H2. The laboratory was paid for by NASA and we used it in the LBA. Everything I learned in this field I learned from the team of NOAA's Global Monitoring Division. I spent three stints there. We are always together; they have access to Magic, which is our analysis system. Everything is done in partnership with them. It has been 10 years working with those guys. I am like a daughter to them. After 2004 we managed a higher measurement frequency at Santarém. That year, we flew in the dry season and the rainy season for the first time. We also tried to take measurements in Manaus, but out of three years of sampling we had flight-authorization problems in one year and problems with the CO2 analyses the next. Then the money ran out. Since I had only one full year of data, I ended up never publishing that information. That is a gap I still need to fix. We stayed with Santarém alone until 2009, when we received funding from FAPESP and from NERC [the UK research funding agency] and began measuring at three more sites.

When the studies were restricted to Santarém, was it possible to reach any conclusions?
We observed that there is very large variability in the carbon balance during the rainy season in the Amazon. We published those data in a paper in 2010. We saw that a one- or two-year study is not enough; it has to span many years. That was the first important lesson I learned from this study.

What is the next step of the work in the Amazon?
To calculate the carbon balance for 2012 and 2013. We already have the data. The year 2012, for example, sits halfway between 2010 and 2011. It rained enormously in the northwestern part, and the rest was drier than in 2010. That is why I like aircraft data, which let us calculate the net result [of carbon emissions and absorptions]. If I worked with a single flux tower and it stood on the dry side, I would conclude one thing; if it stood on the rainy side, I would conclude another. With the kind of data we use, we can take everything into account and see what predominated: calculate everything and take the average. And, on average, 2012 was dry across the whole basin. In the number of fire hotspots, it fell right between 2010 and 2011.

You began your career outside atmospheric chemistry. How did it start?
I did my undergraduate research and my master's in electrochemistry. But I felt enormously frustrated and kept asking myself what it was for. Then came the first environmental chemistry meeting in Brazil, which Wilson Jardim [a professor at Unicamp] organized in Campinas in 1989. I went and fell in love. That was what I wanted to do with my life.

Where are you from?
I am from Birigui, but I left the city at age 3. I lived most of the time in Cafelândia, which then had 11,000 inhabitants. There, everyone knows everyone. That is why I am the way I am: I talk to everybody. I also talk a lot with my hands; my students say that if you tied my hands I could not teach. The gate staff at Ipen do not even ask for my badge. It is a small-town thing. A São Paulo native can be alone in the middle of a crowd. From Cafelândia my family moved to Ribeirão Preto, because my father did not want his children leaving home to study. He chose a city with many colleges and moved the whole family there. He was a representative of Mobil Oil of Brazil; for him it made no difference whether he was in Cafelândia or Ribeirão. The funny thing is that my sister went to Campinas, I went to the Federal University of São Carlos, my brother went to FEI in São Bernardo, and the only one who stayed home was the third sister. I had a health problem and had to return to my parents' house before graduating. I transferred to USP in Ribeirão, but there the chemistry program had almost twice as many credits as the one at the federal university in São Carlos. It took me another three and a half years to complete what would have taken only a year and a half at the federal university; everything had prerequisites. But it turned out well, because in São Carlos I only really studied during the first year. After that I became a militant in a semi-clandestine party, a student leader. I did more politics than studying. We were forbidden to attend classes. When exam time came, I photocopied my friends' notebooks, stayed up all night studying and, in the morning, without having slept, went in, took the exam and passed. But imagine what actually stayed in my head. It is a good thing I practically had to redo my undergraduate degree. What kind of professional would I be if I had not had to do it again and learn to study hard?

What were classes like at USP?
I left the student movement, which had deeply disappointed me. I wanted a fairer world. But I had a boyfriend who was on the national leadership of the revolutionary party. I broke up with him, and he took revenge on me using his power. I came to understand that the problem was not the mode of production, communist or socialist, but the evolutionary level of human beings. So I decided that the only person I could change was myself. I became zen and spiritual and began my inner revolution. I understood that I could not change the world, but I could become a better person. Then I started my career practically from zero, because USP in Ribeirão Preto is very demanding: if you do not study, you do not pass. I did undergraduate research, won a FAPESP scholarship, and gradually immersed myself in, and fell in love with, environmental chemistry.

What was your doctorate like?
It was what I could manage. When I began the doctorate with [Antonio Aparecido] Mozeto, it was supposed to be on reduced sulfur compounds, already in the field of gases. At that time no one worked with gases. There was only one person in Brazil, Antonio Horácio Miguel, who worked in the area, but he had gone abroad. I had to do everything. I had, for example, to develop a standard using a permeation tube. I had to build the tube, buy the liquid to permeate, and everything else. When I got it all working and put the tube inside the chromatograph, the instrument died. The professor had bought a chromatograph for measuring sulfur compounds that a professor at the University of Colorado had decided to build. The design was all wrong: it had a stainless-steel cross fitting with a flame that, as it burns, produces hydrogen and water. The flame heated the cross, water leaked onto the photomultiplier and burned out the instrument. It lasted one day. The problem was that I had already spent two years on this and needed a new instrument to carry out the doctorate. Wilson Jardim then asked me why I thought no one in the country worked with gases. “This stuff is hard! The only one who did left Brazil. Drop this topic and move on to something else,” he told me. But by then I had already lost two years and was the only faculty member at the Federal University of São Carlos without a doctorate. A professor then told me I was still on probation and that, if I did not do a quick one-year doctorate to obtain the title, they would not approve my probationary period. I rushed to find a topic that was feasible and that I would not be ashamed of having done. I analyzed bottom sediments from lagoons near the Mogi-Guaçu River. I applied analyses used in aerosol studies to determine the origin of the sediments and also to date them. That way you can trace the history of the occupation of the river basin. I finished the doctorate at the Federal University of São Carlos and moved into atmospheric chemistry, which was what I wanted, an area lacking researchers in Brazil.

Better way to make sense of ‘Big Data?’ (Science Daily)

Date:  February 19, 2014

Source: Society for Industrial and Applied Mathematics

Summary: Vast amounts of data related to climate change are being compiled by researchers worldwide with varying climate projections. This requires combining information across data sets to arrive at a consensus regarding future climate estimates. Scientists propose a statistical hierarchical Bayesian model that consolidates climate change information from observation-based data sets and climate models.

Regional analysis for climate change assessment. Credit: Melissa Bukovsky, National Center for Atmospheric Research (NCAR/IMAGe)

Vast amounts of data related to climate change are being compiled by research groups all over the world. Data from these many and varied sources result in different climate projections; hence, the need arises to combine information across data sets to arrive at a consensus regarding future climate estimates.

In a paper published last December in the SIAM Journal on Uncertainty Quantification, authors Matthew Heaton, Tamara Greasby, and Stephan Sain propose a statistical hierarchical Bayesian model that consolidates climate change information from observation-based data sets and climate models. “The vast array of climate data — from reconstructions of historic temperatures and modern observational temperature measurements to climate model projections of future climate — seems to agree that global temperatures are changing,” says author Matthew Heaton. “Where these data sources disagree, however, is by how much temperatures have changed and are expected to change in the future. Our research seeks to combine many different sources of climate data, in a statistically rigorous way, to determine a consensus on how much temperatures are changing.”

Using a hierarchical model, the authors combine information from these various sources to obtain an ensemble estimate of current and future climate along with an associated measure of uncertainty. “Each climate data source provides us with an estimate of how much temperatures are changing. But each data source also has a degree of uncertainty in its climate projection,” says Heaton. “Statistical modeling is a tool to not only get a consensus estimate of temperature change but also an estimate of our uncertainty about this temperature change.”

The approach proposed in the paper combines information from observation-based data, general circulation models (GCMs) and regional climate models (RCMs). Observation-based data sets, which focus mainly on local and regional climate, are obtained by taking raw climate measurements from weather stations and mapping them onto a grid defined over the globe. This allows the final data product to provide an aggregate measure of climate rather than be restricted to individual weather data sets. Such data sets are restricted to current and historical time periods. A related source of information is reanalysis data sets, in which numerical model forecasts and weather station observations are combined into a single gridded reconstruction of climate over the globe.

GCMs are computer models that capture the physical processes governing the atmosphere and oceans to simulate the response of temperature, precipitation, and other meteorological variables under different scenarios. While a GCM portrayal of temperature would not be accurate for a given day, these models give fairly good estimates of long-term average temperatures, such as 30-year periods, which closely match observed data. A big advantage of GCMs over observed and reanalyzed data is that GCMs can simulate climate systems into the future. RCMs are used to simulate climate over a specific region, as opposed to the global simulations created by GCMs. Since climate in a specific region is affected by the rest of the Earth, atmospheric conditions such as temperature and moisture at the region's boundary are estimated using other sources such as GCMs or reanalysis data.

By combining information from multiple observation-based data sets, GCMs and RCMs, the model obtains an estimate and measure of uncertainty for the average temperature, the temporal trend, and the variability of seasonal average temperatures. The model was used to analyze average summer and winter temperatures for the Pacific Southwest, Prairie and North Atlantic regions (seen in the image above), which represent three distinct climates. The assumption was that climate models would behave differently for each of these regions, so data from each region were considered individually and the model was fit to each region separately.

“Our understanding of how much temperatures are changing is reflected in all the data available to us,” says Heaton. “For example, one data source might suggest that temperatures are increasing by 2 degrees Celsius while another source suggests temperatures are increasing by 4 degrees. So, do we believe a 2-degree increase or a 4-degree increase? The answer is probably ‘neither,’ because combining data sources together suggests that increases would likely be somewhere between 2 and 4 degrees. The point is that no single data source has all the answers. Only by combining many different sources of climate data are we really able to quantify how much we think temperatures are changing.”

While most previous work of this kind focuses on mean or average values, the authors acknowledge that climate in the broader sense encompasses variations between years, trends, averages and extreme events. Hence, the hierarchical Bayesian model used here simultaneously considers the average, the linear trend and the interannual variability (variation between years). Many previous models also assume independence between climate models, whereas this paper accounts for commonalities shared by various models — such as physical equations or fluid dynamics — and for correlations between data sets.

“While our work is a good first step in combining many different sources of climate information, we still fall short in that we leave out many viable sources of climate information,” says Heaton. “Furthermore, our work focuses on increases/decreases in temperatures, but similar analyses are needed to estimate consensus changes in other meteorological variables such as precipitation. Finally, we hope to expand our analysis from regional temperatures (say, over just a portion of the U.S.) to global temperatures.”
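The paper's actual model is considerably richer (it treats trends, interannual variability and cross-model correlation jointly). As a minimal sketch of the core idea of a Bayesian consensus across unequally reliable sources, assuming a flat prior on the consensus trend and a known between-source spread, something like the following could be written; all numbers are invented.

```python
import numpy as np

# Toy consensus: each "data source" i reports a temperature-trend estimate y[i]
# (degrees C per decade, invented) with its own standard error s[i].
# Simplified model: y[i] ~ Normal(mu, s[i]^2 + tau^2), flat prior on mu,
# tau = assumed between-source spread.
y = np.array([0.20, 0.40, 0.25, 0.35])   # e.g. observations, reanalysis, GCM, RCM
s = np.array([0.05, 0.10, 0.08, 0.12])
tau = 0.05

w = 1.0 / (s**2 + tau**2)            # precision weights
mu_post = np.sum(w * y) / np.sum(w)  # posterior mean of the consensus trend
sd_post = np.sqrt(1.0 / np.sum(w))   # posterior standard deviation

print(f"consensus trend: {mu_post:.2f} +/- {sd_post:.2f} C per decade")
```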
 
Journal Reference:

  1. Matthew J. Heaton, Tamara A. Greasby, Stephan R. Sain. Modeling Uncertainty in Climate Using Ensembles of Regional and Global Climate Models and Multiple Observation-Based Data Sets. SIAM/ASA Journal on Uncertainty Quantification, 2013; 1 (1): 535. DOI: 10.1137/12088505X

Physics of complex systems can predict the impacts of environmental change (Fapesp)

The assessment comes from Jan-Michael Rost, a researcher at the Max Planck Institute (photo: Nina Wagner/DWIH-SP)

February 19, 2014

Elton Alisson

Agência FAPESP – Besides applications in areas such as Engineering and Information and Communication Technologies (ICTs), the physics of complex systems – in which each element contributes individually to the emergence of properties observed only collectively – can be useful for assessing the impacts of environmental changes on the planet, such as deforestation.

The assessment was made by Jan-Michael Rost, a researcher at the Max Planck Institute for the Physics of Complex Systems, during a round table on complex systems and sustainability held on February 14 at the Hotel Pergamon in São Paulo.

The meeting was organized by the German Centre for Research and Innovation São Paulo (DWIH-SP) and the Max Planck Society, in partnership with FAPESP and the German Academic Exchange Service (DAAD), and was part of the program of activities accompanying the Max Planck Science Tunnel exhibition.

“Complex systems, such as life on Earth, sit at the threshold between order and disorder and take a certain amount of time to adapt to changes,” said Rost.

“If these systems undergo large alterations, such as unchecked deforestation, over a short period of time, and the threshold between order and disorder is crossed, the changes may be irreversible, putting at risk the preservation of complexity and the possibility of species evolving,” the researcher said.

According to Rost, complex systems began to attract scientists' attention in the 1950s. To study them, however, it was not possible to use the two great theories that revolutionized physics in the 20th century: relativity, established by Albert Einstein (1879-1955), and quantum mechanics, developed by the German physicist Werner Heisenberg (1901-1976) and other scientists.

That is because those theories apply only to closed systems, such as engines, which suffer no interference from the external environment and in which the equilibrium reactions occurring inside them are reversible, Rost said.

For that reason, he said, those theories are not sufficient to study open systems, such as machines endowed with artificial intelligence and the living species on Earth, which interact with the environment, are adaptive, and whose reactions can be irreversible. They have therefore given way to theories related to the physics of complex systems, such as chaos theory and nonlinear dynamics, which are better suited to this purpose.

“These latter theories have undergone spectacular development in recent decades, in parallel with classical mechanics,” Rost said.

“Today it is recognized that systems are not closed; they interact with the outside and can react disproportionately to the action they have undergone. That is what engineering currently relies on to develop products and equipment,” he said.

Categories of complex systems

According to Rost, complex systems can be divided into four categories, distinguished by how long they take to react to a given action. The first is that of static complex systems, which react instantly to an action.

The second is that of adaptive systems, exemplified by the scent-tracking ability of dogs. When set on a trail of tracks left by a person lost in the woods, for example, sniffer dogs move in a zigzag pattern.

That is because, according to Rost, these animals have an adaptive scenting system. That is, on sensing a given smell in one place, the animal's olfactory sensitivity to that odor drops sharply and it loses the ability to identify it.

On leaving the trail it was following, the animal quickly recovers its olfactory sensitivity to the odor and is able to identify it at the next footprint. “The olfactory perception threshold of these animals is constantly adjusted,” said Rost.

The third category of complex systems is that of autonomous systems, which use evolution as an adaptation mechanism and whose reaction to a given change is impossible to predict.

The last category is that of evolutionary or transgenerational systems, which include human beings and the other living species on Earth, and in which the reaction to a given alteration in their living conditions takes a very long time to occur, Rost said.

“Transgenerational systems receive stimuli throughout their lives, and the reaction of a given generation is not comparable to that of the previous one,” said the researcher.

“Trying to predict how long a given transgenerational system, such as humanity, takes to react to an action, such as environmental change, can be useful for ensuring the planet's sustainability,” Rost said.

New kinds of maths skills needed in the future – and new educational practices (Science Daily)

Date: February 5, 2014

Source: Suomen Akatemia (Academy of Finland)

Summary: The nature of the mathematical skills required from competent citizens is changing. Gone are the days of inertly applying and performing standard calculations. The mathematical minds of the future will need to understand how different economic, social, technological and work-related processes can be mathematically represented or modeled. A project is exploring new pedagogical practices and technological environments to prepare students for the flexible use of their math skills in future environments.

The nature of the mathematical skills required from competent citizens is changing. Gone are the days of inertly applying and performing standard calculations. The mathematical minds of the future will need to understand how different economic, social, technological and work-related processes can be mathematically represented or modeled. A project included in the Academy of Finland’s research program The Future of Learning, Knowledge and Skills (TULOS) is exploring new pedagogical practices and technological environments that can prepare students for the flexible and adaptive use of their mathematical skills in future activity environments.

“Our goal is to have students be able to use their mathematical skills in a highly adaptive and flexible way. We want to promote mathematical thinking so that future minds can recognize the mathematical aspects in their environments,” says Academy Professor Erno Lehtinen from the University of Turku, the project’s principal investigator. The new pedagogical methods, digital games and other applications promoting an active awareness and mathematical reading of the surrounding environment are aimed at sparking an interest in mathematical mind games.

The educational games developed in the research project are designed to support a creative application of flexible mathematical strategies for novel situations. The idea is also to help students view natural numbers as an interlinked system and understand mathematical contents, such as equations.

Inspired by everyday phenomena

The research project will also investigate how students understand fractions and decimals and how they flexibly apply their skills in interpreting various practical phenomena. “In our previous breakthrough studies, we’ve established the role of spontaneous quantitative focusing tendencies in the development of mathematical thinking. Now, we’re trying to develop new pedagogical practices and technological environments that will inspire students to observe quantitative relationships in their everyday surroundings,” Academy Professor Lehtinen explains.

The idea is to get students to use fraction-based thinking even before they actually are taught fractions at school. According to Lehtinen, the premise is that the ability to perceive quantitative and later fraction-based relationships in varying practical situations can help students manage the difficult conceptual transition from natural numbers to fractions and bridge school learning with students’ everyday activities.

The project will make use of both basic research and applied school research. The mathematical phenomena under study will be investigated in laboratory settings using precise experiments, observations and even brain-imaging methods. On the other hand, the research methods will also involve longitudinal studies in normal school environments. The plan is to test the new pedagogical methods and games in comprehensive teaching pilots using wide-ranging national-level data.

Up the Financier: Studying the California Carbon Market (AAA, Anthropology and Environment Society Blog)

Posted on January 26, 2014

ENGAGEMENT co-editor Chris Hebdon catches up with University of Kentucky geographer Patrick Bigger.

Patrick Bigger at the Chicago Board of Trade


How would you explain your dissertation research on the California carbon market?

At the broadest level, my research is about understanding how a brand new commodity market tied to environmental improvement is brought into the world, and then how it functions once it is in existence. Taking as a starting point Polanyi’s (1944) observation that markets are inherently social institutions, my work sorts through the social, geographical, and ideological relationships that are being mobilized in California and brought from across the world to build the world’s second largest carbon market. And those constitutive processes and practices are no small undertaking.

Making a multi-billion dollar market from scratch is a process that entails the recruitment and hiring of a small army of bureaucrats and lawyers, the creation of new trading and technology firms, the involvement of offset developers and exchange operators who had been active in other environmental commodities markets, and learning from more than fifty years of environmental economics and the intellectual work of think tanks and NGOs. There are literally tens of thousands of hours of people’s time embodied in the rule-making process, which result in texts (in the form of regulatory documents) that profoundly influence how California’s economy is performed every day. These performances range from rice farmers considering how much acreage to sow in the Sacramento Delta to former Enron power traders building new trading strategies based on intertemporal price differences of carbon futures for different compliance periods in California’s carbon market.

My work uses ethnographic methods such as participant-observation in public rule-making workshops and semi-structured interviews with regulators, industry groups, polluters, NGOs, and academics to try to recreate the key socio-geographical relationships that have had the most impact on market design and function. It’s about how regulatory and financial performances are intertwined, as events in the market (and in other financial markets, most notably the deregulated electric power market in California) are brought back to bear on rule-making, and then how rule-making impacts how the market and the associated regulated industrial processes are enacted. And the key thing is that there isn’t some isolated cabal of carbon’s ‘masters of the universe’ pulling the strings––it’s bureaucrats in cubicles, academics writing books, and offset developers planting trees out there making a market. And they’re people you can go observe and talk with.


Who are buying and selling these carbon credits?

That’s a trickier question than it seems. Most of the credits (aka allowances) are effectively created out of thin air by the California Air Resources Board, which then distributes them either through free allocation or by auction to anyone who requests authorization to bid. A significant proportion of those are given away directly to regulated industries to ease their transition to paying for their carbon output. Another way the auction works is that electric utilities are given almost all the credits they need to fulfill their obligation, but they are required to sell (consign) those permits in the auction, while they are typically also buyers. This is to prevent windfall profits for the electric utilities, like what happened in the EU. The utilities must return to ratepayers the value of what they make selling their permits at auction, which they have done to the tune of $1.5 billion so far.

More to the spirit of the question though, it’s a pretty big world. Literally anyone can buy California Carbon on the Intercontinental Exchange (ICE), based in Chicago. From what I’ve been told, a lot of allowances pass through Houston because there is a major agglomeration of energy traders there, and carbon is often bundled into transactions like power purchase agreements that are traded over-the-counter (OTC). There’s an interesting division in who buys their credits where––companies that must comply with climate regulations tend to buy through the auction, while people trading for presumably speculative purposes tend to buy on the exchange. This isn’t even getting into who produces, sells, and buys carbon offsets, which is another market entirely unto itself. To attempt to be succinct, I’d say there is a ‘carbon industry’ in the same sense that Leigh Johnson (2010) talks about a ‘risk industry’; a constellation of brokers, lawyers, traders, insurers, and industrial concerns, and the size of these institutional actors range from highly specialized carbon traders to the commodities desk at transnational investment banks.


Would you be able to outline some ways your research could affect public policy? And how is it in dialogue with environmental justice literature and engaged scholarship?

There are a number of ways that my work could be taken up by policy makers, though to be clear I did not set out to write a dissertation that would become a how-to-build-a-carbon-market manual. Just being around regulators and market interlocutors has provided insights into the most challenging aspects to market creation and maintenance, like what sorts of expertise a bureaucracy needs, how regulators can encourage public participation in seemingly esoteric matters, or the order which regulator decisions need to be made. Beyond the nuts-and-bolts, there’s a fairly substantial literature on ‘fast policy transfer’ in geography that critiques the ways certain kinds of policy become wildly popular and are then plopped down anywhere regardless of geographical and political-economic context; I am interested in contributing to that literature because California’s carbon market was specifically designed to ‘travel’ through linkages with other sub-national carbon markets. I would also note that there are aspects of what I’m thinking about that problematize the entire concept of the marketization of nature in ways that would also be applicable to the broader ecosystem service literature and the NGOs and regulators who are trying to push back against that paradigm.

As far as the EJ literature is concerned, I’ll admit to having a somewhat fraught relationship. I set out to do a project on the economic geography of environmental finance, not to explicitly document the kinds of injustices that environmental finance has produced, or has the potential to produce. As a result some critics have accused me of being insufficiently justice-y. I’d respond by noting that my work is normative, even if it isn’t framed in the language of environmental justice; it certainly isn’t Kuhnian normal science. But EJ arguments, if they are any good, do depend on empirical grounding and I would hope that my work provides that.

At the Chicago Board of Trade.

“I’d be really happy if scholars of other markets could find parallels to my work that demonstrated that all markets, not just environmental ones, were as much about the state as they are about finance.”

Your advisor Morgan Robertson has written about “oppositional research,” and research “behind enemy lines,” drawing on his experience working inside the Environmental Protection Agency. What has oppositional research meant for you?

I think about it as using ethnographic methods to poke and prod at the logics and practices that go into building a carbon market. I think for Morgan it was more about the specific problems and opportunities of being fully embedded in an institution whose policies you want to challenge. That position of being fully ‘inside’ isn’t where I’m at right now, and it’s a difficult position to get into either because you just don’t have access, because the researcher doesn’t want to or isn’t comfortable becoming a full-fledged insider, or because academics often just don’t have time to do that sort of research. It’s also contingent on what sort of conversational ethnographic tack you want to take––when you’re fully embedded you lose the option of performing the research space as a neophyte, which can be a very productive strategy. One thing that I will mention is that oppositional research is based on trust. You must have established some rapport with your research participants before you challenge them head-on, or they may just walk away and then you’ve done nothing to challenge their practices or world view, you’ve potentially sown ill will with future research participants, and you won’t get any of the interesting information that you might have otherwise.


How about the method of “studying up”?

For starters, the logistics of ‘studying up’ (Nader 1969) are substantially different from those of other kinds of fieldwork. There’s lots of downtime (unless you’re in a situation where you’ve got 100% access to whatever you’re studying, e.g. having a job as a banker or regulator) because there aren’t hearings or rule-making workshops every day, or even every week, and the people making the market are busy white-collar people with schedules. I feel like I’ve had a really productive week if I can get 3 interviews done.

Beyond the logistics, one of the most challenging parts of studying a regulatory or financial process you’re not fully onboard with is walking the line between asking tough questions of your research participants and yet not alienating them. It has been easy for me to go in the other direction as well––even though I think carbon markets are deeply problematic and emblematic of really pernicious global trends toward the marketization of everything, I really like most of my research participants. They’re giving me their time, they tell me fascinating stories, and they’ve really bent over backward to help me connect with other people or institutions it never would have occurred to me to investigate. And that can make it tough to want to challenge them during interviews. After a while, it’s also possible to start feeling you’re on the inside of the process, at least as far as sharing a language and being part of a very small community. There aren’t many people in the world that I can have a coffee with and make jokes about one company’s consistently bizarre font choices in public comments documents. So even though the market feels almost overwhelmingly big in one sense, it’s also very intimate in another. I’m still working out how to write a trenchant political-economic critique with a much more sympathetic account of regulatory/market performance. Even many guys in the oil-refining sector are deeply concerned about climate change.


Would you ever take a job in a carbon trading firm?

Absolutely. There’s a rich literature developing in the social studies of finance/cultural economics that gets into the nuts and bolts of many aspects of finance, including carbon trading, and that overlaps with scholarship in critical accounting and even work coming out of some business schools. Some of those folks, like Ekaterina Svetlova (see especially 2012), have worked or done extended participant observation in the financial institutions being unpacked in the broader literature on performative economics, and they have provided useful critiques and correctives that are helping this literature mature.

However, much of this work is subject to the same pitfalls as other work in the social studies of finance, especially the sense that scholars ‘fall in love’ with the complexity of their research topic and the ingenuity of their research participants qua coworkers and ultimately fail to link them back to meaningful critiques of the broader world. All that said, I’m not sure I’ve got the chops to work in finance. I’d be more interested in, and comfortable with, working in the environmental and economic governance realm where I could see, on a daily basis, how the logics of traders meet the logics of regulation and science.


What advice would you give to scholars who may do research on carbon markets in the future?

Get familiar with the language and logics of neoclassical economics. Really familiar. Take some classes. If you’re studying neoliberal environmental policy, it shouldn’t come as a surprise that regulation is shot through with the logics of market triumphalism at a level that just reading David Harvey (2003, 2005) probably wouldn’t prepare you for. A little engineering, or at least familiarity with engineers, wouldn’t be amiss either.

On a really pragmatic level, if you can get access, get familiar with being in an office setting if you haven’t spent much time in one. Being in a new kind of space can be really stressful and if you’re not comfortable in your surroundings you might not be getting the most out of your interviews.

If you’re studying a carbon market specifically, take the time to understand how the electricity grid works. I lost a lot of time sitting through workshops that were well over my head dealing with how the electric power industry would count its carbon emissions. I would have gotten much more out of them if I’d had even a cursory understanding of how the electricity gets from the out-of-state coal-fired power plant to my toaster.

Don’t expect to just pop in-and-out of fieldwork. Make yourself at home. Take some time to figure out what the points of tension are. That’s not to say you must do an ‘E’thnography, but taking the time at the beginning to understand the playing field will make it easier to understand the maneuvering later.

Read the specialist and general press every single day. Set up some news aggregator service for whatever market or regulation you’re looking at. It’s what your participants will be reading, and if they aren’t then you’ll really look like you know what you’re doing.


What are broad implications of your research?

I think starting to come to grips on the creation, from nothing, of a commodity market worth more than a billion dollars could have all sorts of impacts I can’t even imagine. I’d be really happy if scholars of other markets could find parallels to my work that demonstrated that all markets, not just environmental ones, were as much about the state as they are about finance, and not just in the way that Polanyi wrote about them. I’d also like to help people think through the relationship between the economic structures that people build, and then how they inhabit them through economic ideology, the performance of that ideology and their modern representation, the economic model. In some ways this is reopening the structure-agency debates that have been simmering for a long time. I also want to provide more grist for the mill in terms of unpacking variegated neoliberalisms––there are quite a few examples I’ve run across in my work where discourses about the efficiencies of markets run up against either the realpolitik of institutional inertia or perceived risks to the broader economy (which can be read as social reproduction).

In terms of policy, I hope that regulatory readers of my work will think about the relative return on investment (if I can appropriate a financial concept) in deploying market-based environmental policy as opposed to direct regulation, particularly around climate change. We’re in a situation that demands urgency to curb the worst impacts of carbon pollution, so it is of the utmost importance that the state take dramatic action, and soon. That said, wouldn’t it be interesting if this carbon market ended up accomplishing its goals? If it does, then I hope my work would take on different kinds of significance.

* * *

Harvey, David. 2003. The New Imperialism. New York: Oxford University Press.

Harvey, David. 2005. A Brief History of Neoliberalism. New York: Oxford University Press.

Johnson, Leigh. 2010. Climate Change and the Risk Industry: The Multiplication of Fear and Value. Richard Peet, Paul Robbins and Michael Watts, eds. Global Political Ecology. London: Routledge.

Nader, Laura. 1969. Up the Anthropologist: Perspectives Gained from Studying Up. Dell Hymes, ed. Reinventing Anthropology. New York: Random House.

Polanyi, Karl. 1944. The Great Transformation. Boston: Beacon.

Svetlova, Ekaterina. 2012. On the Performative Power of Financial Models. Economy and Society 41(3): 418-434.

Soap Bubbles for Predicting Cyclone Intensity? (Science Daily)

Jan. 8, 2014 — Could soap bubbles be used to predict the strength of hurricanes and typhoons? However unexpected it may sound, this question prompted physicists at the Laboratoire Ondes et Matière d’Aquitaine (CNRS/université de Bordeaux) to perform a highly novel experiment: they used soap bubbles to model atmospheric flow. A detailed study of the rotation rates of the bubble vortices enabled the scientists to obtain a relationship that accurately describes the evolution of their intensity, and propose a simple model to predict that of tropical cyclones.

Vortices in a soap bubble. (Credit: © Hamid Kellay)

The work, carried out in collaboration with researchers from the Institut de Mathématiques de Bordeaux (CNRS/université de Bordeaux/Institut Polytechnique de Bordeaux) and a team from Université de la Réunion, has just been published in the journal Nature Scientific Reports.

Predicting wind intensity or strength in tropical cyclones, typhoons and hurricanes is a key objective in meteorology: the lives of hundreds of thousands of people may depend on it. However, despite recent progress, such forecasts remain difficult since they involve many factors related to the complexity of these giant vortices and their interaction with the environment. A new research avenue has now been opened up by physicists at the Laboratoire Ondes et Matière d’Aquitaine (CNRS/Université Bordeaux 1), who have performed a highly novel experiment using, of all things, soap bubbles.

The researchers carried out simulations of flow on soap bubbles, reproducing the curvature of the atmosphere and approximating as closely as possible a simple model of atmospheric flow. The experiment allowed them to obtain vortices that resemble tropical cyclones and whose rotation rate and intensity exhibit astonishing dynamics: weak initially, just after the birth of the vortex, and increasing significantly over time. Following this intensification phase, the vortex attains its maximum intensity before entering a phase of decline.

A detailed study of the rotation rate of the vortices enabled the researchers to obtain a simple relationship that accurately describes the evolution of their intensity. For instance, the relationship can be used to determine the maximum intensity of the vortex and the time it takes to reach it, on the basis of its initial evolution. This prediction can begin around fifty hours after the formation of the vortex, a period corresponding to approximately one quarter of its lifetime and during which wind speeds intensify. The team then set out to verify that these results could be applied to real tropical cyclones. By applying the same analysis to approximately 150 tropical cyclones in the Pacific and Atlantic oceans, they showed that the relationship held true for such low-pressure systems. This study therefore provides a simple model that could help meteorologists to better predict the strength of tropical cyclones in the future.

Journal Reference:

  1. T. Meuel, Y. L. Xiong, P. Fischer, C. H. Bruneau, M. Bessafi, H. Kellay. Intensity of vortices: from soap bubbles to hurricanes. Scientific Reports, 2013; 3. DOI: 10.1038/srep03455

Walking the Walk: What Sharks, Honeybees and Humans Have in Common (Science Daily)

Dec. 23, 2013 — A research team led by UA anthropologist David Raichlen has found that the Hadza tribe’s movements while foraging can be described by a mathematical pattern called a Lévy walk — a pattern that also is found in the movements of many other animals.

The Hadza people of Tanzania wore wristwatches with GPS trackers that followed their movements while hunting or foraging. Data showed that humans join a variety of other species including sharks and honeybees in using a Lévy walk pattern while foraging. (Credit: Photo by Brian Wood/Yale University)

A mathematical pattern of movement called a Lévy walk describes the foraging behavior of animals from sharks to honey bees, and now for the first time has been shown to describe human hunter-gatherer movement as well. The study, led by University of Arizona anthropologist David Raichlen, was published today in the Proceedings of the National Academy of Sciences.

The Lévy walk pattern appears to be ubiquitous in animals, similar to the golden ratio, phi, a mathematical ratio that has been found to describe proportions in plants and animals throughout nature.

“Scientists have been interested in characterizing how animals search for a long time,” said Raichlen, an associate professor in the UA School of Anthropology, “so we decided to look at whether human hunter-gatherers use similar patterns.”

Funded by a National Science Foundation grant awarded to study co-author Herman Pontzer, Raichlen and his colleagues worked with the Hadza people of Tanzania.

The Hadza are one of the last big-game hunters in Africa, and one of the last groups on Earth to still forage on foot with traditional methods. “If you want to understand human hunter-gatherer movement, you have to work with a group like the Hadza,” Raichlen said.

Members of the tribe wore wristwatches with GPS units that tracked their movement while on hunting or foraging bouts. The GPS data showed that while the Hadza use other movement patterns, the dominant theme of their foraging movements is a Lévy walk — the same pattern used by many other animals when hunting or foraging.

“Detecting this pattern among the Hadza, as has been found in several other species, tells us that such patterns are likely the result of general foraging strategies that many species adopt, across a wide variety of contexts,” said study co-author Brian Wood, an anthropologist at Yale University who has worked with the Hadza people since 2004.

“This movement pattern seems to occur across species and across environments in humans, from East Africa to urban areas,” said Adam Gordon, study co-author and a physical anthropologist at the University at Albany, State University of New York. “It shows up all across the world in different species and links the way that we move around in the natural world. This suggests that it’s a fundamental pattern likely present in our evolutionary history.”

The Lévy walk, which involves a series of short movements in one area and then a longer trek to another area, is not limited to searching for food. Studies have shown that humans sometimes follow a Lévy walk while ambling around an amusement park. The pattern also can be used as a predictor for urban development.
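For readers who want to see what such a trajectory looks like, here is a minimal simulation sketch (not the study’s analysis pipeline): step lengths are drawn from a heavy-tailed power-law distribution and headings are chosen uniformly at random, which produces the characteristic clusters of short moves punctuated by occasional long relocations.

    import numpy as np

    rng = np.random.default_rng(0)

    def levy_walk(n_steps, mu=2.0, min_step=1.0):
        # Step lengths follow a power law P(l) ~ l**(-mu) for l >= min_step,
        # sampled by inverse transform; headings are uniform on the circle.
        u = rng.random(n_steps)
        lengths = min_step * u ** (-1.0 / (mu - 1.0))
        angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
        steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
        return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

    path = levy_walk(1000)
    print("net displacement after 1000 steps:", np.linalg.norm(path[-1]))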

“Think about your life,” Raichlen said. “What do you do on a normal day? Go to work and come back, walk short distances around your house? Then every once in a while you take these long steps, on foot, bike, in a car or on a plane. We tend to take short steps in one area and then take longer strides to get to another area.”

Following a Lévy walk pattern does not mean that humans don’t consciously decide where they are going, Raichlen said. “We definitely use memories and cues from the environment as we search,” he explained, “but this pattern seems to emerge in the process.”

In future studies, Raichlen and his colleagues hope to understand the reasons for using a Lévy walk and whether the pattern is determined by the distribution of resources in the environment.

“We’re very interested in studying why the Hadza use this pattern, what’s driving their hunting strategies and when they use this pattern versus another pattern,” said Pontzer, a member of the research team and an anthropologist at Hunter College in New York.

“We’d really like to know how and why specific environmental conditions or individual traits influence movement patterns,” added Wood.

Describing human movement patterns could also help anthropologists to understand how humans transported raw materials in the past, how our home ranges expanded and how we interact with our environment today, Raichlen noted.

“We can characterize these movement patterns across different human environments, and that means we can use this movement pattern to understand past mobility,” Raichlen said. “Also, finding patterns in nature is always fun.”

Journal Reference:

  1. D. A. Raichlen, B. M. Wood, A. D. Gordon, A. Z. P. Mabulla, F. W. Marlowe, H. Pontzer. Evidence of Levy walk foraging patterns in human hunter-gatherers. Proceedings of the National Academy of Sciences, 2013; DOI: 10.1073/pnas.1318616111

Democracy Pays (Science Daily)

Dec. 23, 2013 — In relatively large communities, individuals do not always obey the rules and often exploit the willingness of others to cooperate. Institutions such as the police are there to provide protection from misconduct such as tax fraud. But such institutions do not simply arise spontaneously, because they cost money that each individual must contribute.

An interdisciplinary team of researchers led by Manfred Milinski from the Max Planck Institute for Evolutionary Biology in Plön has now used an experimental game to investigate the conditions under which institutions of this kind can nevertheless arise. The study shows that a group of players does particularly well if it has first used its own “tax money” to set up a central institution which punishes both free riders and tax evaders. However, the groups only set up institutions to penalize tax evasion if they have decided to do so by a democratic majority decision. Democracy thus enables the creation of rules and institutions which, while demanding individual sacrifice, are best for the group. The chances of agreeing on common climate protection measures around the globe are thus greater under democratic conditions.

In most modern states, central institutions are funded by public taxation. This means, however, that tax evaders must also be punished. Once such a system has been established, it is also good for the community: it makes co-existence easier and it helps maintain common standards. However, such advantageous institutions do not come about by themselves. The community must first agree that such a common punishment authority makes sense and decide what powers it should be given. Climate protection is a case in point, demonstrating that this cannot always be achieved. But how can a community agree on sensible institutions and self-limitations?

The Max Planck researchers had participants in a modified public goods game decide whether to use part of their starting capital to pay taxes towards a policing institution. They were additionally able to pay money into a common pot. The total paid in was then tripled and distributed to all participants. If taxes had been paid beforehand, free riders who did not contribute to the group pot were punished by the police. Without taxation, however, there were no police, and the group ran the risk that no one would pay into the common pot.
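As a rough illustration of these mechanics, one round of such a game might be simulated as below; the endowment, tax, contribution, multiplier and fine values are invented for the sketch and are not the study’s actual parameters.

    import numpy as np

    def play_round(pays_tax, contributes, punish_evaders=False,
                   endowment=20.0, tax=2.0, contribution=10.0,
                   multiplier=3.0, fine=8.0):
        # One round of a toy "public goods with policing" game.
        # pays_tax, contributes: one boolean per player.
        pays_tax = np.asarray(pays_tax)
        contributes = np.asarray(contributes)
        n = len(pays_tax)

        payoff = np.full(n, endowment)
        payoff -= tax * pays_tax                   # taxes fund the institution
        payoff -= contribution * contributes       # contributions to the pot
        pot = contribution * contributes.sum()
        payoff += multiplier * pot / n             # tripled pot shared equally

        if pays_tax.any():                         # police exist only if funded
            payoff -= fine * (~contributes)        # free riders are punished
            if punish_evaders:
                payoff -= fine * (~pays_tax)       # second treatment: evaders too
        return payoff

    # Four players: two pay tax and contribute, two free ride on both counts.
    print(play_round([True, True, False, False], [True, True, False, False]))
    print(play_round([True, True, False, False], [True, True, False, False],
                     punish_evaders=True))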

Police punishment of both free riders and tax evaders quickly established cooperative behavior in the experiment. If, however, tax evaders were not punished, the opposite happened and the participants avoided paying taxes. Without policing, there was no longer any incentive to pay into the group pot, so reducing the profits for the group members. Ultimately, each individual thus benefits if tax evaders are punished.

But can participants foresee this development? To find out, the scientists gave the participants a choice: they were now able to choose individually whether they joined a group in which the police also punish tax evaders. Alternatively, they could choose a group in which only those participants who did not pay into the common pot were penalized. Faced with this choice, the majority preferred a community without punishment for tax evaders — with the result that virtually no taxes were paid and, subsequently, that contributions to the group pot also fell.

In a second experimental scenario, the players were instead able to decide by democratic vote whether, for all subsequent rounds, the police should be authorized to punish tax evaders as well as free riders or only free riders. In this case, the players clearly voted for institutions in which tax evaders were also punished. “People are often prepared to impose rules on themselves, but only if they know that these rules apply to everyone,” summarizes Christian Hilbe, the lead author of the study. A majority decision ensures that all participants are equally affected by the outcome of the vote. This makes it easier to introduce rules and institutions which, while demanding individual sacrifice, are best for the group.

The participants’ profits also demonstrate that majority decisions are better: those groups which were able to choose democratically were more cooperative and so also made greater profits. “Democracy pays — in the truest sense of the word,” says Manfred Milinski. “More democracy would certainly not go amiss when it comes to the problem of global warming.”

Two in Three Students in Brazil Cannot Handle Fractions and Percentages (O Globo)

JC e-mail 4869, December 5, 2013

Results from PISA 2012 show that 15-year-old Brazilian students also struggle to understand simple graphs

Two out of every three 15-year-old students in Brazil cannot handle simple mathematical operations such as fractions, percentages and proportional relationships. That is one of the findings of PISA 2012, released on Tuesday (the 4th) by the Organisation for Economic Co-operation and Development (OECD). Although Brazil is one of the countries that has advanced most in the subject over the past decade, it still ranks 57th among the 65 nations assessed, with 391 points. At the top of the table is the Chinese municipality of Shanghai, with 613 points.

To compare the performance of different education systems around the world, the OECD defined common assessment criteria, balanced according to participants’ socio-economic background. The mathematics section, the focus of the PISA 2012 test, aimed to measure students’ ability to formulate, employ and interpret mathematics in a variety of everyday problem-solving contexts. For the assessors, it is therefore not enough for a student to know how to add and divide, for example; the student must be able to put those operations into practice.

Based on test scores, the OECD established six proficiency levels, with level six being the highest. In Brazil, more than two thirds of the nearly 20,000 students who took the test fell into the range up to level two, well down the table. At that level they can interpret and recognize situations in contexts that require only direct inference, and can interpret results literally. But it stops there. More than 65% of Brazilian students cannot analyze a graph or an everyday situation and translate the problem into mathematical models.

As an example of its methodology, the OECD showed a question from the mathematics test in which the student is presented with a graph on a Cartesian plane showing four rock bands and the number of CDs each sold per month. The student was then asked to identify the month in which a given band sold more records than another, a task that proved difficult for many of the Brazilian students.

For Cláudio Landim, deputy director of the Instituto de Matemática Pura e Aplicada (Impa), Brazil has something to celebrate, especially at the bottom of the table: the worst-performing 10% improved by 100 points between 2003 and 2012. Even so, Landim acknowledged that the country is still in a “precarious” situation:

– Not knowing how to use fractions or percentages is increasingly serious, because we live in a technological world where mastering these operations is ever more essential – argues the Impa director.

Landim also stressed that much of the students’ difficulty in understanding simple graphs or tables stems from deficiencies in reading, another area assessed by PISA:

– When we take part in the Olimpíada Brasileira de Matemática, we see that the students’ difficulty is understanding what is being asked. It is reading comprehension, purely that. A large share cannot answer because they do not understand what they are reading. Reading comprehension has a significant impact on mathematics.

With 391 points in mathematics, Brazil ranked below neighbors such as Chile and below other emerging economies such as Turkey, 44th in the ranking with 448 points. Since, for the OECD, 41 points on the scale correspond to roughly one year of formal schooling, Brazilian students would need to study over a year more to reach the level of their Turkish peers, and almost three more years to approach the level of Vietnam, which placed 17th with 511 points. Compared with the students of Shanghai, the leaders on the test, Brazil would have to make up more than five years.
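At the OECD’s rate of 41 points per year of formal schooling, converting the score gaps cited above into years is simple arithmetic; a quick check, using the scores quoted in the article:

    points_per_year = 41
    brazil = 391
    for name, score in [("Turkey", 448), ("Vietnam", 511), ("Shanghai", 613)]:
        print(name, round((score - brazil) / points_per_year, 1), "years")
    # Turkey 1.4 years, Vietnam 2.9 years, Shanghai 5.4 years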

Concern in industry
Brazil’s performance was considered worrying by Luiz Caruso, executive manager of Studies and Foresight at the Confederação Nacional da Indústria (CNI). According to him, these young people’s educational shortfalls may affect the development of industry in the country.
– When they enter the job market, they will have more difficulty absorbing and working with new technologies, which directly affects the country’s productivity and competitiveness. Without a good basic education, you have neither a citizen nor a good worker – says Caruso.

According to Caruso, the situation also deserves attention in light of the opportunities in vocational education, which has been receiving government investment:

– It is hard to work with these young people, because vocational education demands a higher degree of complexity. And since they lag behind in basic subjects such as mathematics and Portuguese, those gaps end up having to be remedied during secondary school, taking up class time that could be devoted to more specific purposes – he says.

(Leonardo Vieira/O Globo)
http://oglobo.globo.com/educacao/dois-em-tres-alunos-no-brasil-nao-sabem-fracoes-porcentagens-10968622#ixzz2mbsK9vAX

Simple Mathematical Formula Describes Human Struggles (Science Daily)

Dec. 12, 2013 — Would you believe that a broad range of human struggles can be understood by using a mathematical formula? From child-parent struggles to cyber-attacks and civil unrest, they can all be explained with a simple mathematical expression called a “power-law.”

The manner in which a baby’s cries escalate against its parent is comparable to the way riots in Poland escalated in the lead-up to the collapse of the Soviet Union. (Credit: © erllre / Fotolia)

In a sort of unified theory of human conflict, scientists have found a way to mathematically describe the severity and timing of human confrontations that affect us personally and as a society.

For example, the manner in which a baby’s cries escalate against its parent is comparable to the way riots in Poland escalated in the lead-up to the collapse of the Soviet Union. It comes down to the fact that the perpetrator in both cases (e.g. baby, rioters) adapts quickly enough to escalate its attacks against the larger, but more sluggish entity (e.g. parent, government), who is unable, or unwilling, to respond quickly enough to satisfy the perpetrator, according to a new study published in Nature‘s Scientific Reports.

“By picking out a specific baby (and parent), and studying what actions of the parent make the child escalate or de-escalate its cries, we can understand better how to counteract cyber-attacks against a particular sector of U.S. cyber infrastructure, or how an outbreak of civil unrest in a given location (e.g. Syria) will play out, following particular government interventions,” says Neil Johnson, professor of physics and the head of the interdisciplinary research group in Complexity, at the College of Arts and Sciences at the University of Miami (UM) and corresponding author of the study.

Specifically, the study finds some remarkable similarities between seemingly disconnected confrontations. For instance:

  • The escalation of violent attacks in Magdalena, Colombia — though completely cut off from the rest of the world — is actually representative of all modern wars. Meanwhile, the conflict in Sierra Leone, Africa, has exactly the same dynamics as the narco-guerilla war in Antioquia, Colombia.
  • The pattern of attacks by predatory traders against General Electric (GE) stock is equivalent to the pattern of cyber-attacks against the U.S. hi-tech electronics sector by foreign groups, which in turn mimics specific infants and parents.
  • New insight into the controversial ‘Bloody Sunday’ attack by the British security forces, against civilians, on January 30, 1972, reveals that Bloody Sunday appears to be the culmination of escalating Provisional Irish Republican Army attacks, not their trigger, hence raising new questions about its strategic importance.

The findings show that this mathematical formula, of the form AB^(-C), is a valuable tool that can be applied to make quantitative predictions concerning future attacks in a given confrontation. It can also be used to create an intervention strategy against the perpetrators and, more broadly, as a quantitative starting point for cross-disciplinary theorizing about human aggression, at the individual and group level, in both real and online worlds.
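Reading the formula AB^(-C) as a progress curve for the interval between successive attacks, with B standing for the attack number n (an interpretation assumed here, not spelled out in the article), a minimal sketch of fitting it to the first few inter-attack intervals and extrapolating the next one might look like this:

    import numpy as np

    # Hypothetical intervals (days) between the first five attacks in some conflict.
    intervals = np.array([38.0, 22.0, 16.0, 13.0, 11.0])
    n = np.arange(1, len(intervals) + 1)

    # Fit log(tau_n) = log(A) - C*log(n), i.e. tau_n = A * n^(-C).
    slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
    C, A = -slope, np.exp(intercept)

    next_n = len(intervals) + 1
    print(f"escalation exponent C ~ {C:.2f}; "
          f"predicted next interval ~ {A * next_n ** -C:.1f} days")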

Journal Reference:

  1. Neil F. Johnson, Pablo Medina, Guannan Zhao, Daniel S. Messinger, John Horgan, Paul Gill, Juan Camilo Bohorquez, Whitney Mattson, Devon Gangi, Hong Qi, Pedro Manrique, Nicolas Velasquez, Ana Morgenstern, Elvira Restrepo, Nicholas Johnson, Michael Spagat, Roberto Zarama. Simple mathematical law benchmarks human confrontations. Scientific Reports, 2013; 3. DOI: 10.1038/srep03463

One Percent of Population Responsible for 63% of Violent Crime, Swedish Study Reveals (Science Daily)

Dec. 6, 2013 — The majority of all violent crime in Sweden is committed by a small number of people. They are almost all male (92%), develop violent criminality early in life, have substance abuse problems, are often diagnosed with personality disorders, and also commit large numbers of non-violent crimes. These are the findings of researchers at the Sahlgrenska Academy, who examined 2.5 million people in Swedish criminal and population registers.

In this study, the Gothenburg researchers matched all convictions for violent crime in Sweden between 1973 and 2004 with the nationwide population register for those born between 1958 and 1980 (2.5 million people).

Of the 2.5 million individuals included in the study, 4 percent were convicted of at least one violent crime, 93,642 individuals in total. Of those convicted at least once, 26 percent were convicted of violent crimes three or more times, with the result that 1 percent of the population (23,342 individuals) accounted for 63 percent of all violent crime convictions during the study period.

“Our results show that 4 percent of those who have three or more violent crime convictions have psychotic disorders, such as schizophrenia and bipolar disorder. Psychotic disorders are twice as common among repeat offenders as in the general population, but despite this fact they constitute a very small proportion of the repeat offenders,” says Örjan Falk, researcher at Sahlgrenska Academy.

One finding the Gothenburg researchers present is that “acts of insanity” that receive a great deal of mass media coverage, committed by someone with a severe psychiatric disorder, are not responsible for the majority of violent crimes.

According to the researchers, the study’s results are important to crime prevention efforts.

“This helps us identify which individuals and groups are in need of special attention and extra resources for intervention. A discussion on the efficacy of punishment (prison sentences) for this group is needed as well, and we would like to initiate a debate on what kind of criminological and medical action could be meaningful to invest in,” says Örjan Falk.

Studies like this one are often used as arguments for more stringent sentences and US principles like “three strikes and you’re out.” What are your views on this?

“Just locking those who commit three or more violent crimes away for life is of course a compelling idea from a societal protective point of view, but could result in some undesirable consequences such as an escalation of serious violence in connection with police intervention and stronger motives for perpetrators of repeat violence to threaten and attack witnesses to avoid life sentences. It is also a fact that a large number of violent crimes are committed inside the penal system.”

“And from a moral standpoint it would mean that we give up on these, in many ways, broken individuals who most likely would be helped by intensive psychiatric treatments or other kind of interventions. There are also other plausible alternatives to prison for those who persistently relapse into violent crime, such as highly intensive monitoring, electronic monitoring and of course the continuing development of specially targeted treatment programs. This would initially entail a higher cost to society, but over a longer period of time would reduce the total number of violent crimes and thereby reduce a large part of the suffering and costs that result from violent crimes,” says Örjan Falk.

“I first and foremost advocate a greater focus on children and adolescents who exhibit signs of developing violent behavior and who are at the risk of later becoming repeat offenders of violent crime.”

Journal Reference:

  1. Örjan Falk, Märta Wallinius, Sebastian Lundström, Thomas Frisell, Henrik Anckarsäter, Nóra Kerekes. The 1 % of the population accountable for 63 % of all violent crime convictions. Social Psychiatry and Psychiatric Epidemiology, 2013; DOI: 10.1007/s00127-013-0783-y

The Oracle of the T Cell (Science Daily)

Dec. 5, 2013 — A platform that simulates how the body defends itself: The T cells of the immune system decide whether to trigger an immune response against foreign substances.

The virtual T cell allows an online simulation of the response of this immune cell to external signals. (Credit: University of Freiburg)

Since December 2013, scientists from around the world have been able to use the “virtual T cell” to test for themselves what happens in the blood cell when receptor proteins are activated on its surface. Prof. Dr. Wolfgang Schamel from the Institute of Biology III, Faculty of Biology, the Cluster of Excellence BIOSS Centre for Biological Signalling Studies and the Center of Chronic Immunodeficiency of the University of Freiburg is coordinating the European Union-funded project SYBILLA, “Systems Biology of T-Cell Activation in Health and Disease.” This consortium of 17 partners from science and industry has been working since 2008 to understand the T cell as a system. Now the findings of the project are available to the public on an interactive platform. Simulating the signaling pathways in the cell enables researchers to develop new therapeutic approaches for cancer, autoimmune diseases, and infectious diseases.

The T cell is activated by vaccines, allergens, bacteria, or viruses. The T cell receptor identifies these foreign substances and sets off intracellular signaling cascades. This response is then modified by many further receptors. In the end, the network of signaling proteins results in cell division, growth, or the release of messengers that guide other cells of the immune system. The network initiates the attack on the foreign substances. Sometimes, however, the process of activation goes awry: The T cells mistakenly attack the body’s own cells, as in autoimmune diseases, or they ignore harmful cells like cancer cells.

The online platform, developed by Dr. Utz-Uwe Haus and Prof. Dr. Robert Weismantel from the Department of Mathematics of ETH Zurich in collaboration with Dr. Jonathan Lindquist and Prof. Dr. Burkhart Schraven from the Institute of Molecular and Clinical Immunology of the University of Magdeburg and the Helmholtz Center for Infection Research in Braunschweig, allows researchers to click through the signaling network of the T cell: users can switch on twelve receptors, including the T cell receptor, that identify signals on the surface of other cells or bind messengers.

The mathematical model then calculates the behavior of the network from the 403 elements in the system. The result is a pattern of activity across 52 proteins that predicts what will happen to the cell: these proteins change the way the DNA is read and thus what the cell produces. Researchers can now look for weak points for active substances that could be used to treat immune diseases or cancer by switching particular signals in the model on and off. Every protein and every interaction between proteins is described in detail in the network, backed up with references to publications. In addition, users can even extend the model themselves to include further signaling proteins.
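The article does not publish the model’s equations, but logical network models of this kind are typically simulated by updating each node from the state of its inputs. The toy sketch below uses invented node names and is far smaller than the 403-element model described above; it only illustrates the general idea.

    # Toy Boolean signalling network (node names and rules are illustrative only).
    rules = {
        "TCR":        lambda s: s["antigen"],                    # receptor sees the stimulus
        "kinase":     lambda s: s["TCR"] and not s["inhibitor"],
        "messenger":  lambda s: s["kinase"],
        "activation": lambda s: s["messenger"] and s["costim"],
    }

    def simulate(inputs, n_steps=10):
        # Synchronously update all nodes until a fixed point (or n_steps).
        state = {node: False for node in rules}
        state.update(inputs)
        for _ in range(n_steps):
            new = dict(state)
            new.update({node: rule(state) for node, rule in rules.items()})
            if new == state:
                break
            state = new
        return state

    # Antigen plus co-stimulation and no inhibitor: the toy cell activates.
    print(simulate({"antigen": True, "costim": True, "inhibitor": False}))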

No Qualms About Quantum Theory (Science Daily)

Nov. 26, 2013 — A colloquium paper published in The European Physical Journal D looks into the alleged issues associated with quantum theory. Berthold-Georg Englert from the National University of Singapore reviews a selection of the potential problems of the theory. In particular, he discusses cases when mathematical tools are confused with the actual observed sub-atomic scale phenomena they are describing. Such tools are essential to provide an interpretation of the observations, but cannot be confused with the actual object of study.

The author sets out to demystify a selected set of objections targeted against quantum theory in the literature. He takes the example of Schrödinger’s infamous cat, whose vital state serves as the indicator of the occurrence of radioactive decay, whereby the decay triggers a hammer mechanism designed to release a lethal substance. The term ‘Schrödinger’s cat state’ is routinely applied to superpositions of quantum states of a particle. However, this imagined superposition of a dead and live cat has no reality. Indeed, it confuses a physical object with its description. Something as abstract as the wave function (a mathematical tool describing the quantum state) cannot be considered a material entity embodied by a cat, regardless of whether it is dead or alive.

In debunking other myths, the paper argues that quantum theory is well defined, has a clear interpretation, is a local theory, is not reversible, and does not feature any instantaneous action at a distance. It also argues that there is no measurement problem, even though measurement is commonly known to disturb the system being measured. Hence, since the establishment of quantum theory in the 1920s, its concepts have become clearer, but its foundations remain unchanged.

Journal Reference:

  1. Berthold-Georg Englert. On quantum theory. The European Physical Journal D, 2013; 67 (11). DOI: 10.1140/epjd/e2013-40486-5

Selecting Mathematical Models With Greatest Predictive Power: Finding Occam’s Razor in an Era of Information Overload (Science Daily)

Nov. 20, 2013 — How can the actions and reactions of proteins so small or stars so distant they are invisible to the human eye be accurately predicted? How can blurry images be brought into focus and reconstructed?

A new study led by physicist Steve Pressé, Ph.D., of the School of Science at Indiana University-Purdue University Indianapolis, shows that there may be a preferred strategy for selecting mathematical models with the greatest predictive power. Picking the best model is about sticking to the simplest line of reasoning, according to Pressé. His paper explaining his theory is published online this month in Physical Review Letters.

“Building mathematical models from observation is challenging, especially when there is, as is quite common, a ton of noisy data available,” said Pressé, an assistant professor of physics who specializes in statistical physics. “There are many models out there that may fit the data we do have. How do you pick the most effective model to ensure accurate predictions? Our study guides us towards a specific mathematical statement of Occam’s razor.”

Occam’s razor is an oft-cited 14th-century adage that “plurality should not be posited without necessity,” sometimes translated as “entities should not be multiplied unnecessarily.” Today it is interpreted as meaning that, all things being equal, the simpler theory is more likely to be correct.

A principle for picking the simplest model to answer complex questions of science and nature, originally postulated in the 19th century by the Austrian physicist Ludwig Boltzmann, had been embraced by the physics community throughout the world. Then, in 1988, an alternative strategy for picking models was developed by the Brazilian physicist Constantino Tsallis. This strategy has been widely used in business (such as in option pricing and for modeling stock swings) as well as in scientific applications (such as evaluating population distributions). The new study finds that Boltzmann’s strategy, not the 20th-century alternative, ensures that the models picked are the simplest and most consistent with the data.
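The two strategies correspond to maximizing two different entropy functionals; the formulas below are the standard textbook definitions, not expressions quoted in the article. The Boltzmann-Gibbs-Shannon entropy is S = -sum_i p_i ln p_i, while the nonadditive Tsallis entropy is S_q = (1 - sum_i p_i^q) / (q - 1), which recovers the former in the limit q -> 1. A quick numerical check of that limit:

    import numpy as np

    def shannon(p):
        # Boltzmann-Gibbs-Shannon entropy: S = -sum p ln p.
        p = np.asarray(p, dtype=float)
        return -np.sum(p * np.log(p))

    def tsallis(p, q):
        # Nonadditive Tsallis entropy: S_q = (1 - sum p^q) / (q - 1).
        p = np.asarray(p, dtype=float)
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    p = [0.5, 0.3, 0.2]
    print("Shannon:          ", shannon(p))
    print("Tsallis, q=1.001: ", tsallis(p, 1.001))  # approaches the Shannon value
    print("Tsallis, q=2:     ", tsallis(p, 2.0))    # a genuinely different functional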

“For almost three decades in physics we have had two main competing strategies for picking the best model. We needed some resolution,” Pressé said. “Even as simple an experiment as flipping a coin or as complex an enterprise as understanding functions of proteins or groups of proteins in human disease need a model to describe them. Simply put, we need one Occam’s razor, not two, when selecting models.”

In addition to Pressé, co-authors of “Nonadditive entropies yield probability distributions with biases not warranted by the data” are Kingshuk Ghosh of the University of Denver, Julian Lee of Soongsil University, and Ken A. Dill of Stony Brook University.

Pressé is also the first author of a companion paper, “Principles of maximum entropy and maximum caliber in statistical physics” published in the July-September issue of the Reviews of Modern Physics.

Will U.S. Hurricane Forecasting Models Catch Up to Europe’s? (National Geographic)

A satellite view of Hurricane Sandy on October 28, 2012. (Photograph by Robert Simmon, NASA Earth Observatory, and the NASA/NOAA GOES Project Science team)

Willie Drye

for National Geographic

Published October 27, 2013

If there was a bright spot amid Hurricane Sandy’s massive devastation, including 148 deaths, at least $68 billion in damages, and the destruction of thousands of homes, it was the accuracy of the forecasts predicting where the storm would go.

Six days before Sandy came ashore one year ago this week—while the storm was still building in the Bahamas—forecasters predicted it would make landfall somewhere between New Jersey and New York City on October 29.

They were right.

Sandy, which had by then weakened from a Category 2 hurricane to an unusually potent Category 1, came ashore just south of Atlantic City, a few miles from where forecasters said it would, on the third to last day of October.

“They were really, really excellent forecasts,” said University of Miami meteorologist Brian McNoldy. “We knew a week ahead of time that something awful was going to happen around New York and New Jersey.”

That knowledge gave emergency management officials in the Northeast plenty of time to prepare, issuing evacuation orders for hundreds of thousands of residents in New Jersey and New York.

Even those who ignored the order used the forecasts to make preparations, boarding up buildings, stocking up on food and water, and buying gasoline-powered generators.

But there’s an important qualification about the excellent forecasts that anticipated Sandy’s course: The best came from a European hurricane prediction program.

The six-day-out landfall forecast came courtesy of the model run by the European Centre for Medium-Range Weather Forecasts (ECMWF), which is based in England.

Most of the other models in use at the National Hurricane Center in Miami, including the U.S. Global Forecast System (GFS), didn’t start forecasting a U.S. landfall until four days before the storm came ashore. At the six-day-out mark, that model and others at the National Hurricane Center had Sandy veering away from the Atlantic Coast, staying far out at sea.

“The European model just outperformed the American model on Sandy,” says Kerry Emanuel, a meteorologist at Massachusetts Institute of Technology.

Now, U.S. weather forecasting programmers are working to close the gap between the U.S. Global Forecast System and the European model.

There’s more at stake than simple pride. “It’s to our advantage to have two excellent models instead of just one,” says McNoldy. “The more skilled models you have running, the more you know about the possibilities for a hurricane’s track.”

And, of course, the more lives you can save.

Data, Data, Data

The computer programs that meteorologists rely on to predict the courses of storms draw on lots of data.

U.S. forecasting computers and their European counterparts rely on radar that provides information on cloud formations and the rotation of a storm, on orbiting satellites that show precisely where a storm is, and on hurricane-hunter aircraft that fly into storms to collect wind speeds, barometric pressure readings, and water temperatures.

Hundreds of buoys deployed along the Atlantic and Gulf coasts, meanwhile, relay information about the heights of waves being produced by the storm.

All this data is fed into computers at the National Centers for Environmental Prediction at Camp Springs, Maryland, which use it to run the forecast models. Those computers, linked to others at the National Hurricane Center, translate the computer models into official forecasts.

The forecasters use data from all computer models—including the ECMWF—to make their forecasts four times daily.

Forecasts produced by various models often diverge, leaving plenty of room for interpretation by human forecasters.

“Usually, it’s kind of a subjective process as far as making a human forecast out of all the different computer runs,” says McNoldy. “The art is in the interpretation of all of the computer models’ outputs.”

There are two big reasons why the European model is usually more accurate than the U.S. models. First, the ECMWF model is a more sophisticated program that incorporates more data.

Second, the European computers that run the program are more powerful than their U.S. counterparts and are able to do more calculations more quickly.

“They don’t have any top-secret things,” McNoldy said. “Because of their (computer) hardware, they can implement more sophisticated code.”

A consortium of European nations began developing the ECMWF in 1976, and the model has been fueled by a series of progressively more powerful supercomputers in England. It got a boost when the European Union was formed in 1993 and member states started contributing taxes for more improvements.

The ECMWF and the GFS are the two primary models that most forecasters look at, said Michael Laca, producer of TropMet, a website that focuses on hurricanes and other severe weather events.

Laca said that forecasts and other data from the ECMWF are provided to forecasters in the U.S. and elsewhere who pay for the information.

“The GFS, on the other hand, is freely available to everyone, and is funded—or defunded—solely through (U.S.) government appropriations,” Laca said.

And since funding for U.S. research and development is subject to funding debates in Congress, U.S. forecasters are “in a hard position to keep pace with the ECMWF from a research and hardware perspective,” Laca said.

Hurricane Sandy wasn’t the first or last hurricane for which the ECMWF was the most accurate forecast model. It has consistently outperformed the GFS and four other U.S. and Canadian forecasting models.

Greg Nordstrom, who teaches meteorology at Mississippi State University in Starkville, said the European model provided much more accurate forecasts for Hurricane Isaac in August 2012 and for Tropical Storm Karen earlier this year.

“This doesn’t mean the GFS doesn’t beat the Euro from time to time,” he says.  “But, overall, the Euro is king of the global models.”

McNoldy says the European Union’s generous funding of research and development of their model has put it ahead of the American version. “Basically, it’s a matter of resources,” he says. “If we want to catch up, we will. It’s important that we have the best forecasting in the world.”

European developers who work on forecasting software have also benefited from better cooperation between government and academic researchers, says MIT’s Emanuel.

“If you talk to (the National Oceanic and Atmospheric Administration), they would deny that, but there’s no real spirit of cooperation (in the U.S.),” he says. “It’s a cultural problem that will not get fixed by throwing more money at the problem.”

Catching Up Amid Chaos

American computer models’ accuracy in forecasting hurricane tracks has improved dramatically since the 1970s. The average margin of error for a three-day forecast of a hurricane’s track has dropped from 500 miles in 1972 to 115 miles in 2012.

And NOAA is in the middle of a ten-year program intended to dramatically improve the forecasting of hurricanes’ tracks and their likelihood to intensify, or become stronger before landfall.

One of the project’s centerpieces is the Hurricane Weather Research and Forecasting model, or HWRF. In development since 2007, it’s similar to the ECMWF in that it will incorporate more data into its forecasting, including data from the GFS model.

Predicting the likelihood that a hurricane will intensify is difficult. For a hurricane to gain strength, it needs humid air, seawater heated to at least 80ºF, and no atmospheric winds to disrupt its circulation.

In 2005, Hurricane Wilma encountered those perfect conditions and in just 30 hours strengthened from a tropical storm with peak winds of about 70 miles per hour to the most powerful Atlantic hurricane on record, with winds exceeding 175 miles per hour.

But hurricanes are as delicate as they are powerful. Seemingly small environmental changes, like passing over water that’s slightly cooler than 80ºF or ingesting drier air, can rapidly weaken a storm. And the environment is constantly changing.

“Over the next five years, there may be some big breakthrough to help improve intensification forecasting,” McNoldy said. “But we’re still working against the basic chaos in the atmosphere.”

He thinks it will take at least five to ten years for the U.S. to catch up with the European model.

MIT’s Emanuel says three factors will determine whether more accurate intensification forecasting is in the offing: the development of more powerful computers that can accommodate more data, a better understanding of hurricane intensity, and whether researchers reach a point at which no further improvements to intensification forecasting are possible.

Emanuel calls that point the “prediction horizon” and says it may have already been reached: “Our level of ignorance is still too high to know.”

Predictions and Responses

Assuming we’ve not yet hit that point, better predictions could dramatically improve our ability to weather hurricanes.

The more advance warning, the more time those who choose to heed evacuation orders have to leave. Earlier forecasting would also allow emergency management officials more time to provide transportation for poor, elderly, and disabled people unable to flee on their own.

More accurate forecasts would also reduce evacuation expenses.

Estimates of the cost of evacuating coastal areas before a hurricane vary considerably, but it has been calculated that it costs $1 million for every mile of coastline evacuated. That includes lost commerce, the wages and salaries forgone by those who leave, and the costs of the evacuation itself, such as travel and shelter.

Better forecasts could reduce the size of evacuation areas and save money.

They would also allow officials to get a jump on hurricane response. The Federal Emergency Management Agency (FEMA) tries to stockpile relief supplies far enough away from an expected hurricane landfall to avoid damage from the storm but near enough so that the supplies can quickly be moved to affected areas afterwards.

More reliable landfall forecasts would help FEMA position recovery supplies closer to where they’ll be.

Whatever improvements are made, McNoldy warns that forecasting will never be foolproof. However dependable, he said, “Models will always be imperfect.”

How Scott Collis Is Harnessing New Data To Improve Climate Models (Popular Science)

The former ski bum built open-access tools that convert raw data from radar databases into formats that climate modelers can use to better predict climate change.

By Veronique Greenwood and Valerie Ross

Posted 10.16.2013 at 3:00 pm

Scott Collis (by Joel Kimmel)

Each year, Popular Science seeks out the brightest young scientists and engineers and names them the Brilliant Ten. Like the 110 honorees before them, the members of this year’s class are dramatically reshaping their fields–and the future. Some are tackling pragmatic questions, like how to secure the Internet, while others are attacking more abstract ones, like determining the weather on distant exoplanets. The common thread between them is brilliance, of course, but also impact. If the Brilliant Ten are the faces of things to come, the world will be a safer, smarter, and brighter place.–The Editors

Scott Collis

Argonne National Laboratory

Achievement

Harnessing new data to improve climate models

Clouds are one of the great challenges for climate scientists. They play a complex role in the atmosphere and in any potential climate-change scenario. But rudimentary data has simplified their role in simulations, leading to variability among climate models. Scott Collis discovered a way to add accuracy to forecasts of future climate—by tapping new sources of cloud data.

Collis has extensive experience watching clouds, first as a ski bum during grad school in Australia and then as a professional meteorologist. But when he took a job at the Centre for Australian Weather and Climate Research, he realized there was an immense source of cloud data that climate modelers weren’t using: the information collected for weather forecasts. So Collis took on the gargantuan task of building open-access tools that convert the raw data from radar databases into formats that climate modelers can use. In one stroke, Collis unlocked years of weather data. “We were able to build such robust algorithms that they could work over thousands of radar volumes without human intervention,” says Collis.
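The article does not name Collis’s tools or file formats, so the sketch below is only a schematic illustration of the general task: taking a raw radar sweep stored as plain arrays and writing it out as a self-describing, CF-style NetCDF file that analysis and modeling codes can ingest without human intervention. The variable names and units are assumptions for the example.

    import numpy as np
    from netCDF4 import Dataset

    # Hypothetical raw sweep: reflectivity on a (rays x gates) polar grid.
    n_rays, n_gates = 360, 500
    azimuth = np.arange(n_rays, dtype="f4")                       # degrees
    gate_range = 50.0 + 100.0 * np.arange(n_gates, dtype="f4")    # metres from the radar
    reflectivity = np.random.default_rng(0).normal(10, 5, (n_rays, n_gates)).astype("f4")

    with Dataset("sweep_converted.nc", "w") as nc:
        nc.Conventions = "CF-1.6"            # advertise the metadata convention
        nc.createDimension("azimuth", n_rays)
        nc.createDimension("range", n_gates)

        az = nc.createVariable("azimuth", "f4", ("azimuth",))
        az.units = "degrees"
        az[:] = azimuth

        rg = nc.createVariable("range", "f4", ("range",))
        rg.units = "meters"
        rg[:] = gate_range

        z = nc.createVariable("reflectivity", "f4", ("azimuth", "range"))
        z.units = "dBZ"
        z.long_name = "equivalent_reflectivity_factor"
        z[:] = reflectivity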

When the U.S. Department of Energy caught wind of his project, it recruited him to work with a new radar network designed to collect high-quality cloud data from all over the globe. The network, the largest of its kind, isn’t complete yet, but already the data that Collis and his collaborators have collected is improving next-generation climate models.

Click here to see more from our annual celebration of young researchers whose innovations will change the world. This article originally appeared in the October 2013 issue of Popular Science.