Tag archive: Mathematics

The future of science lies in collaboration (Valor Econômico)

JC e-mail 4376, November 1, 2011.

Text by Michael Nielsen, published in The Wall Street Journal and reprinted by Valor Econômico.

In January 2009, a University of Cambridge mathematician named Tim Gowers decided to use his blog to run an unusual social experiment. He chose a difficult mathematical problem and tried to solve it in the open, using the blog to post his ideas and his progress. He invited everyone to contribute ideas, in the hope that many minds working together would be more powerful than one. He called the experiment the Polymath Project.

Fifteen minutes after Gowers opened his blog for discussion, a Hungarian-Canadian mathematician posted a comment. Fifteen minutes later, a high-school mathematics teacher in the United States joined the conversation. Three minutes after that, the mathematician Terence Tao of the University of California, Los Angeles, commented as well. The discussion caught fire, and in just six weeks the problem was solved.

Although other challenges arose and the network's collaborators did not always find every solution, they succeeded in creating a new approach to solving problems. Their work is one example of the experiments in collaborative science now under way in fields ranging from galaxies to dinosaurs.

These projects use the internet as a cognitive tool to amplify collective intelligence. Such tools are a means of connecting the right people to the right problems at the right time, activating knowledge that would otherwise remain latent.

Networked collaboration has the potential to dramatically accelerate the pace of discovery across science as a whole. We are likely to see a more fundamental change in scientific research over the next few decades than occurred in the past three centuries.

But there are big obstacles to reaching that goal. Although it might seem natural for scientists to embrace these new tools of discovery, they have in fact shown a surprising reluctance. Initiatives like the Polymath Project remain the exception, not the rule.

Consider the simple idea of sharing scientific data online. The best example is the Human Genome Project, whose data can be downloaded by anyone. When you read in the news that a certain gene has been linked to some disease, it is almost certainly a discovery made possible by the project's open-data policy.

Despite the enormous value of openly releasing data, most laboratories make no systematic effort to share their information with other scientists. As one biologist told me, he had been "sitting on the genome" of an entire new species for more than a year. An entire species! Imagine the crucial discoveries other scientists might have made had that genome been uploaded to an open database.

Why don't scientists like to share?

If you are a scientist seeking a job or research funding, the biggest factor determining your success will be the number of scientific papers you have published. If your record is brilliant, you will do well. If it is not, you will have problems. So you devote your working days to producing articles for academic journals.

Even if you personally believe it would be far better for science as a whole if you organized and shared your data online, doing so takes time away from the "real" work of writing papers. Sharing data is not something your colleagues will give you credit for, except in a few fields.

There are other areas where scientists still lag in their use of online tools. One example is the wikis created by brave pioneers on subjects such as quantum computing, string theory and genetics (a wiki allows collaborative sharing and editing of a set of interlinked information; Wikipedia is the best-known example).

Specialized wikis can serve as up-to-date reference works on a field's latest research, like textbooks that evolve at high speed. They can include descriptions of important unsolved scientific problems and can serve as tools for finding solutions.

But most of these wikis have failed. They suffer from the same problem as data sharing: even if scientists believe in the value of collaboration, they know that writing a single mediocre paper will do far more for their careers. The incentives are completely wrong.

For networked science to reach its potential, scientists must embrace and reward the open sharing of all scientific knowledge, not just what is published in traditional academic journals. Networked science needs to be open.

Michael Nielsen is a pioneer of quantum computing and the author of "Reinventing Discovery: The New Era of Networked Science", from which this text was adapted.

NSF seeks cyber infrastructure to make sense of scientific data (Federal Computer Week)

By Camille Tuutti, Oct 04, 2011

The National Science Foundation has tapped a research team at the University of North Carolina-Chapel Hill to develop a national data infrastructure that would help future scientists and researchers manage the data deluge, share information and fuel innovation in the scientific community.

The UNC group will lead the DataNet Federation Consortium, which includes seven universities. The infrastructure the consortium aims to create would support collaborative, multidisciplinary research and “democratize access to information among researchers and citizen scientists alike,” said Rob Pennington, program director in NSF’s Office of Cyberinfrastructure.

“It means researchers on the cutting edge have access to new, more extensive, multidisciplinary datasets that will enable breakthroughs and the creation of new fields of science and engineering,” he added.

The effort would be a “significant step in the right direction” in solving some of the key problems researchers run into, said Stan Ahalt, director at the Renaissance Computing Institute at UNC-Chapel Hill, which federates the consortium’s data repositories to enable cross-disciplinary research. One of the issues researchers today grapple with is how to best manage data in a way that maximizes its utility to the scientific community, he said. Storing massive quantities of data and the lack of well-designed methods that allow researchers to use unstructured and structured data simultaneously are additional obstacles for researchers, Ahalt added.

The national data infrastructure may not solve everything immediately, he said, “but it will give us a platform for starting to work meticulously on more long-term, robust solutions.”

DFC will use iRODS, the integrated Rule Oriented Data System, to implement a data management infrastructure. Multiple federal agencies are already using the technology: the NASA Center for Climate Simulation, for example, imported a Moderate Resolution Imaging Spectroradiometer satellite image dataset onto the environment so academic researchers would have access, said Reagan Moore, principal investigator for the Data Intensive Cyber Environments research group at UNC-Chapel Hill that leads the consortium.

It’s very typical for a scientific community to develop a set of practices around a particular methodology of collecting data, Ahalt explained. For example, hydrologists know where their sensors are and what they mean from a geographical perspective. Those hydrologists put their data in a certain format that may not be obvious to someone who is, for example, doing atmospheric studies, he said.

“The long-term goal of this effort is to improve the ability to do research,” Moore said. “If I’m a researcher in any given area, I’d like to be able to access data from other people working in the same area, collaborate with them, and then build a new collection that represents the new research results that are found. To do that, I need access to the old research results, to the observational data, to simulations or analyze what happens using computers, etc. These environments then greatly minimize the effort required to manage and distribute a collection and make it available to research.”

For science research as a whole, Ahalt said the infrastructure could mean a lot more than just managing the data deluge or sharing information within the different research communities.

“Data is the currency of the knowledge economy,” he said. “Right now, a lot of what we do collectively and globally from an economic standpoint is highly dependent on our ability to manipulate and analyze data. Data is also the currency of science; it’s our ability to have a national infrastructure that will allow us to share those scientific assets.”

The bottom line: “We’ll be more efficient at producing new science, new innovation and new innovation knowledge,” he said.

About the Author

Camille Tuutti is a staff writer covering the federal workforce.

Economics has met the enemy, and it is economics (Globe and Mail)

Adam Smith is considered the founding father of modern economics.

Ira Basen

From Saturday’s Globe and Mail
Published Saturday, Oct. 15, 2011 6:00AM EDT
Last updated Tuesday, Oct. 18, 2011 8:41AM EDT

After Thomas Sargent learned on Monday morning that he and colleague Christopher Sims had been awarded the Nobel Prize in Economics for 2011, the 68-year-old New York University professor struck an aw-shucks tone with an interviewer from the official Nobel website: “We’re just bookish types that look at numbers and try to figure out what’s going on.”

But no one who’d followed Prof. Sargent’s long, distinguished career would have been fooled by his attempt at modesty. He’d won for his part in developing one of economists’ main models of cause and effect: How can we expect people to respond to changes in prices, for example, or interest rates? According to the laureates’ theories, they’ll do whatever’s most beneficial to them, and they’ll do it every time. They don’t need governments to instruct them; they figure it out for themselves. Economists call this the “rational expectations” model. And it’s not just an abstraction: Bankers and policy-makers apply these formulae in the real world, so bad models lead to bad policy.

Which is perhaps why, by the end of that interview on Monday, Prof. Sargent was adopting a more realistic tone: “We experiment with our models,” he explained, “before we wreck the world.”

Rational-expectations theory and its corollary, the efficient-market hypothesis, have been central to mainstream economics for more than 40 years. And while they may not have “wrecked the world,” some critics argue these models have blinded economists to reality: Certain the universe was unfolding as it should, they failed both to anticipate the financial crisis of 2008 and to chart an effective path to recovery.

The economic crisis has produced a crisis in the study of economics – a growing realization that if the field is going to offer meaningful solutions, greater attention must be paid to what is happening in university lecture halls and seminar rooms.

While the protesters occupying Wall Street are not carrying signs denouncing rational-expectations and efficient-market modelling, perhaps they should be.

They wouldn’t be the first young dissenters to call economics to account. In June of 2000, a small group of elite graduate students at some of France’s most prestigious universities declared war on the economic establishment. This was an unlikely group of student radicals, whose degrees could be expected to lead them to lucrative careers in finance, business or government if they didn’t rock the boat. Instead, they protested – not about tuition or workloads, but that too much of what they studied bore no relation to what was happening outside the classroom walls.

They launched an online petition demanding greater realism in economics teaching, less reliance on mathematics “as an end in itself” and more space for approaches beyond the dominant neoclassical model, including input from other disciplines, such as psychology, history and sociology. Their conclusion was that economics had become an “autistic science,” lost in “imaginary worlds.” They called their movement Autisme-economie.

The students’ timing is notable: It was the spring of 2000, when the world was still basking in the glow of “the Great Moderation,” when for most of a decade Western economies had been enjoying a prolonged period of moderate but fairly steady growth.

Some economists were daring to think the unthinkable – that their understanding of how advanced capitalist economies worked had become so sophisticated that they might finally have succeeded in smoothing out the destructive gyrations of capitalism’s boom-and-bust cycle. (“The central problem of depression prevention has been solved,” declared another Nobel laureate, Robert Lucas of the University of Chicago, in 2003 – five years before the greatest economic collapse in more than half a century.)

The students’ petition sparked a lively debate. The French minister of education established a committee on economic education. Economics students across Europe and North America began meeting and circulating petitions of their own, even as defenders of the status quo denounced the movement as a Trotskyite conspiracy. By September, the first issue of the Post-Autistic Economic Newsletter was published in Britain.

As The Independent summarized the students’ message: “If there is a daily prayer for the global economy, it should be, ‘Deliver us from abstraction.’”

It seems that entreaty went unheard through most of the discipline before the economic crisis, not to mention in the offices of hedge funds and the Stockholm Nobel selection committee. But is it ringing louder now? And how did economics become so abstract in the first place?

The great classical economists of the late 18th and early 19th centuries had no problem connecting to the real world – the Industrial Revolution had unleashed profound social and economic changes, and they were trying to make sense of what they were seeing. Yet Adam Smith, who is considered the founding father of modern economics, would have had trouble understanding the meaning of the word “economist.”

What is today known as economics arose out of two larger intellectual traditions that have since been largely abandoned. One is political economy, which is based on the simple idea that economic outcomes are often determined largely by political factors (as well as vice versa). But when political-economy courses first started appearing in Canadian universities in the 1870s, it was still viewed as a small offshoot of a far more important topic: moral philosophy.

In The Wealth of Nations (1776), Adam Smith famously argued that the pursuit of enlightened self-interest by individuals and companies could benefit society as a whole. His notion of the market’s “invisible hand” laid the groundwork for much of modern neoclassical and neo-liberal, laissez-faire economics. But unlike today’s free marketers, Smith didn’t believe that the morality of the market was appropriate for society at large. Honesty, discipline, thrift and co-operation, not consumption and unbridled self-interest, were the keys to happiness and social cohesion. Smith’s vision was a capitalist economy in a society governed by non-capitalist morality.

But by the end of the 19th century, the new field of economics no longer concerned itself with moral philosophy, and less and less with political economy. What was coming to dominate was a conviction that markets could be trusted to produce the most efficient allocation of scarce resources, that individuals would always seek to maximize their utility in an economically rational way, and that all of this would ultimately lead to some kind of overall equilibrium of prices, wages, supply and demand.

Political economy was less vital because government intervention disrupted the path to equilibrium and should therefore be avoided except in exceptional circumstances. And as for morality, economics would concern itself with the behaviour of rational, self-interested, utility-maximizing Homo economicus. What he did outside the confines of the marketplace would be someone else’s field of study.

As those notions took hold, a new idea emerged that would have surprised and probably horrified Adam Smith – that economics, divorced from the study of morality and politics, could be considered a science. By the beginning of the 20th century, economists were looking for theorems and models that could help to explain the universe. One historian described them as suffering from “physics envy.” Although they were dealing with the behaviour of humans, not atoms and particles, they came to believe they could accurately predict the trajectory of human decision-making in the marketplace.

In their desire to have their field be recognized as a science, economists increasingly decided to speak the language of science. From Smith’s innovations through John Maynard Keynes’s work in the 1930s, economics was argued in words. Now, it would go by the numbers.

The turning point came in 1947, when Paul Samuelson’s classic book Foundations of Economic Analysis for the first time presented economics as a branch of applied mathematics. Without “the invigorating kiss of mathematical method,” Samuelson maintained, economists had been practising “mental gymnastics of a particularly depraved type,” like “highly trained athletes who never run a race.” After Samuelson, no economist could ever afford to make that mistake.

And that may have been the greatest mistake of all: In a post-crisis, 2009 essay in The New York Times Magazine, Princeton economist and Nobel laureate Paul Krugman wrote, “The central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that gave economists a chance to show off their mathematical prowess.”

Of course, nothing says science like a Nobel Prize. Prizes in chemistry, physics and medicine were first awarded in 1901, long before anyone would have thought that economics could or should be included. But by the late 1960s, the central bank of Sweden was determined to change that, and when the Nobel family objected, the bank agreed to put up the money itself, making it the only one of the prizes to be funded by taxpayers.

Officially, then, it is known as the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel – but that title is rarely used. On Monday morning, Prof. Sargent and Princeton University Prof. Sims were widely reported to have won the Nobel Prize in Economics.

The confusion is understandable, and deliberate, according to Philip Mirowski, an economic historian at the University of Notre Dame. “It’s part of the PR trick,” Prof. Mirowski argues. Awarding the economics prize immediately after the prizes for physics, chemistry and medicine helps to place economics on the same level as those other natural sciences.

The prize also has helped to transform one particular ideology into economic orthodoxy. Prof. Mirowski, who is co-writing a book on the history of the economics prize, notes that throughout the 1970s and 1980s, economists whose work supported neoclassical, pro-market, laissez-faire ideas won a disproportionate number of those honours, as well as support from the increasing numbers of well-funded think tanks and foundations that cleaved to the same lines. People who rejected those ideas, or were skeptical of the natural sciences model, were quickly marginalized, and their road to academic advancement often blocked.

The result was a homogenization of economic thought that Prof. Mirowski believes “has been pretty deleterious for economics on the whole.”

The road to hell is paved with good intentions, rational expectations and efficient markets

Many critics of neo-classical economics argue that it has a powerful pro-market bias that’s provided an intellectual justification for politicians ideologically disposed to reduce government involvement in the economy.

The rational-expectations model, for example, assumes that consumers and producers all inform themselves with all available data, understand how the world around them operates and will therefore respond to the same stimulus in essentially the same way. That allows economists to mathematically forecast how these “representative” consumers and producers would behave.

During a recession, say, a well-meaning government might want to enhance benefits for the unemployed. Prof. Sargent, for one, would caution against that, because a “rational” unemployed worker might then calculate that it’s better to reject a lower-paying job. He’s blamed much of the chronically high unemployment in some European countries on the presence of an army of voluntarily unemployed workers, and spoken out against the Obama administration’s recent efforts to extend unemployment benefits.

Indeed, under the rational-expectations model, most market interventions by governments and central banks wind up looking counterproductive.

Meanwhile, the efficient-markets hypothesis, developed by University of Chicago economist Eugene Fama in the 1970s, has dominated thinking about financial markets. It posits that the prices of stocks and other financial assets are always “efficient” because they accurately reflect all the available information about economic fundamentals.

By this reasoning, there can be no speculative price bubbles or busts in the stock or housing markets, and speculators with evil intentions cannot successfully manipulate markets. Conveniently, since markets are self-stabilizing, there’s no need for government regulation of them.

Critics point out that both these theories tend to ignore what John Maynard Keynes called the “animal spirits” – playing down human irrationality, inefficiency, venality and ignorance. Those are qualities that are hard to plug into a mathematical equation that purports to model human behaviour.

These models also have failed to take into account the profound changes wrought by globalization, and the growing importance of banks, hedge funds and other financial institutions. Yet they have successfully provided a “scientific” cover for an anti-regulatory political agenda that is popular on Wall Street and in some Washington political circles.

Inside jobs: Pay no attention to that banker behind the curtain

The Great Depression of the 1930s led many economists of the day to question some of their discipline’s most fundamental assumptions and produced a decades-long heyday for Keynesian economics. So far, the Great Recession has led to less of a fundamental shift.

Notre Dame’s Prof. Mirowski believes that more rethinking is necessary. “Everyone thought the banks would have to change their behaviour, but they got bailed out and nothing changed. The economics profession has also been bailed out because it is so highly interlinked with the financial profession, so of course they don’t change. Why would they change?”

Indeed, economics may be the dismal science, but there is nothing dismal about the payoffs for those at the top of the heap serving as advisers and consultants and sitting on various boards. Unlike some disciplines, economics has no guidelines governing conflict of interest and disclosure.

In 2010, the Academy Award-winning documentary Inside Job exposed several disturbing examples of academic economists calling for deregulation while working for financial-services companies. And in a study of 19 prominent financial economists, published last year by the Political Economy Research Institute at the University of Massachusetts Amherst, 13 were found to own stock or sit on the boards of private financial institutions, but in only four cases were those affiliations revealed when they testified or wrote op-eds concerning financial regulation.

This year, the American Economics Association agreed to set up a committee to investigate whether economists should develop ethical guidelines similar to those already in place for sociologists, psychologists, statisticians and anthropologists.

But there appears to be little enthusiasm for the idea among mainstream economists. Prof. Lucas of the University of Chicago, in an interview with The New York Times, objected: “What disciplines economics, like any science, is whether your work can be replicated. It either stands up or it doesn’t. Your motivations and whatnot are secondary.”

Several billion pennies for their thoughts

The critics, however, are more numerous and considerably better financed than the French students a decade ago. In October, 2009, billionaire financier George Soros said that “the current paradigm has failed.” He resolved to help save economics from itself. He pledged $50-million toward the establishment of the New York-based Institute for New Economic Thinking (INET), with a mandate to promote changes in economic theory and practice through conferences, grants and campaigns for graduate and undergraduate education reforms.

Perry Mehrling, a professor of economics at New York’s Columbia University, is the chair of the curriculum task force at INET. He says his graduate students at Columbia are growing increasingly frustrated by the tendency to define the discipline by its tools instead of its subject matter – like the students in Paris a decade ago, they find little relationship between the mathematical models in class and the world outside the door.

Prof. Mehrling believes that economics education has become far too insular. Never mind cross-disciplinary study – even courses in economic history and the history of economic thought have all but disappeared, so students spend almost no time reading Smith, Keynes or other past masters.

“It’s not just that we’re not listening to sociologists,” Prof. Mehrling laments. “We’re not even listening to economists.”

He says he has no problem with teaching efficient-markets and rational-expectations theories, but as hypotheses, not catechism. “I object to the idea that these are articles of faith and if you don’t accept them, you are not a member of the tribe. These things need to be questioned and we need a broader conversation.”

The challenge, as Columbia University economist Joseph Stiglitz said at the opening conference of INET, is that “we need better theories of persistent deviations from rationality.”

Some of those theories are coming from the rapidly growing field of behavioural economics, which borrows insights about human motivation from cognitive psychology: A paper titled The Hubris Hypothesis of Corporate Takeovers, for example, examines how the egos of ambitious chief executive officers can lead them to pursue takeovers, even when all available evidence suggests that the move could be a disaster.

It is not yet clear how such new approaches can evolve into workable models, but they hint at what a post-autistic economics might look like.

Prof. Mehrling is cautiously optimistic. “There’s a recognition that things we thought were true aren’t necessarily true,” he argues, “and the world is more complicated and interesting than we thought – so all bets are off, and that’s exciting intellectually.”

Change comes slowly in academia. The few jobs that are available don’t generally go to people who challenge orthodoxy. But over the next decade, as the post-crash crop of economics students makes its impact felt in government, business and schools, the lessons learned may well seep into the mainstream.

Theories based on assumptions of rationality, efficiency and equilibrium in the marketplace are likely to be treated with a great deal more skepticism. Homo economicus is a lot more anxious, irrational, unpredictable and complex than most economists believed. And, as Adam Smith recognized, he has a moral and ethical dimension that should not be ignored.

Today, the Post-Autistic Economic Network continues to publish its newsletter, now known as the Real-World Economic Review. It remains a thorn in the side of mainstream economics. In an editorial in January, 2010, the editors called for major economics organizations to censure those economists who “through their teachings, pronouncements and policy recommendations facilitated the global financial collapse” and pointed to the “continuing moral crisis within the economics profession.”

It is unlikely that Prof. Sargent will acknowledge any of this when he travels to Stockholm to accept his (sort of) Nobel Prize in December. Nor is he likely to speak about what role, if any, his models really might have played in “wrecking the world.”

But he did make one concession in his interview with the Nobel website this week: “Many of the practical problems are ahead of where the models are,” he admitted. “That’s life.”

Ira Basen is a radio producer, journalist and educator based in Toronto.

More than half of students cannot solve basic mathematical operations (JC, O Globo)

JC e-mail 4331, August 26, 2011.

The Prova ABC assessed the performance of newly literate students; in reading, the results were better: 56.1% showed command of the language.

Results of a test given to six thousand students in every state capital and the Federal District show that 57.2% of students in the 3rd year of primary school (the former 2nd grade) cannot solve basic mathematics problems, such as addition or subtraction. The first of its kind in Brazil, the Prova ABC also assessed reading and writing. "The difficulty comes when they have to 'carry the one,'" Professor Rubem Klein of the Cesgranrio Foundation explained yesterday in São Paulo, at the release of the results, referring to the addition of numbers greater than ten.
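The "carry the one" step Prof. Klein describes is the familiar column-addition algorithm; as a minimal illustration (the function below is ours, not part of the Prova ABC), it can be written out digit by digit:

```python
def add_with_carry(a: str, b: str) -> str:
    """Add two non-negative integers digit by digit, carrying the one
    the way it is taught in early primary school."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)        # pad with leading zeros
    carry = 0
    digits = []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))            # write down the units digit
        carry = total // 10                       # "carry the one" leftward
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("17", "25"))  # 42: 7+5 writes down 2, carries 1 into the tens
```

The carry step in the loop is exactly the move the test found most students could not perform.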

The Prova ABC, or Avaliação Brasileira do Final do Ciclo de Alfabetização (Brazilian Assessment at the End of the Literacy Cycle), was conducted by the Todos Pela Educação movement in partnership with the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep), the Cesgranrio Foundation and the Instituto Paulo Montenegro/Ibope. The test was administered early this year to students from 250 schools, in proportion to each school network (private, state and municipal).

In mathematics, the national average of third-year (2nd-grade) students who had learned what was expected was 42.8%, meaning that 57.2% do not know the minimum appropriate for this stage of learning. Private schools averaged 74.3% and public schools only 32.6%, a difference of 41.7 percentage points.

"We have to take into account that the teachers of these early grades are trained in education programs, which attract people from lower social classes who did not receive a good grounding in mathematics. It is a vicious cycle that needs to be broken," said Professor Paulo Horta of Inep.

Reading test: 43.9% did not learn enough – The national average on the reading test was 56.1%, which means that the remainder, 43.9% of students, had not learned enough. The share who learned what was expected reached 79% in private schools, versus 48.6% in public schools.

In writing, the national share of students who learned what was expected fell to 53.4%; in other words, 46.6% did not learn adequately. In private schools the rate was 82.4%; in public schools, 43.9%. "Even with their better rate, private schools, which are what this new middle class wants for its children, did not reach 100%. And 100% means only what children are expected to have learned. Overall, the Prova ABC showed that children attending the first three years of school are not being guaranteed their basic right to learning," noted Priscila Cruz, executive director of Todos Pela Educação.

As with other Brazilian education indicators, students from the South and Southeast outperformed children from the North and Northeast on the Prova ABC, which follows the model of the Sistema de Avaliação da Educação Básica (Saeb). The national average score in mathematics was 171.07 (the target was 175), but in the South it reached 185.64 and in the Southeast 179.06. It was considerably lower in the North (152.62) and in the Northeast (158.19).

Regionally, the results of the reading test, on the Saeb scale with its 175-point benchmark, followed the same pattern. In the South the average was 197.93, while in the Northeast it was 167.37, a difference of more than 30 points. In the Central-West the score was 196.57; in the Southeast, 193.57; and in the North, 172.78.

On the writing test – where the average score for learning considered successful is 75, on a scale of 0 to 100 – the Southeast averaged 77.2, a difference of 27 points from the Northeast’s 50.2.

Average in private schools was 211; in public schools, 158 – The Prova ABC methodology uses the same Saeb scale that feeds into the Índice de Desenvolvimento da Educação Básica (Ideb), the country’s main indicator of educational quality. On this test, as on the Saeb, students needed a score of 175 points for their learning to be considered sufficient for the third year (second grade). On the writing test, however, which departs from the Saeb standard, the average score considered good performance was 75.

By score, the national mathematics average was 171.1 (211.2 for private schools and 158 for public ones). On the reading test, the national average was 185.8 (216.7 for private schools and 175.8 for public ones). On a different scale, the writing test’s national average was 68.1 (86.2 in the private network and 62.3 in the public one).
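The private-versus-public gaps implicit in the figures above can be computed directly. A small sketch in Python (the numbers are taken from the article; the dictionary layout is my own):

```python
# National, private-school and public-school averages reported for the Prova ABC.
scores = {
    "mathematics": {"national": 171.1, "private": 211.2, "public": 158.0},
    "reading":     {"national": 185.8, "private": 216.7, "public": 175.8},
    "writing":     {"national": 68.1,  "private": 86.2,  "public": 62.3},
}

# Gap between the private and public networks, per subject.
for subject, s in scores.items():
    gap = s["private"] - s["public"]
    print(f"{subject}: private-public gap = {gap:.1f} points")
```

Note that mathematics and reading use the Saeb scale (target 175), while writing uses its own 0-to-100 scale, so the gaps are comparable only within each subject.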

In mathematics, to reach the 175 points, children had to demonstrate mastery of addition and subtraction by solving problems involving, for example, bills and coins. On the reading test, students had to identify the theme of a narrative, identify characteristics of characters in texts such as legends, fables and comic strips, and perceive cause-and-effect relations within narratives. In writing, three competencies were required: suitability to the theme and genre; cohesion and coherence; and register (spelling, grammar rules, punctuation and word segmentation).

David Graeber on the History of Debt (PBS, Naked Capitalism)

 

FRIDAY, AUGUST 26, 2011 (nakedcapitalism.com)
What is Debt? – An Interview with Economic Anthropologist David Graeber

David Graeber currently holds the position of Reader in Social Anthropology at Goldsmiths, University of London. Prior to this he was an associate professor of anthropology at Yale University. He is the author of ‘Debt: The First 5,000 Years’, which is available from Amazon.

Interview conducted by Philip Pilkington, a journalist and writer based in Dublin, Ireland.

Philip Pilkington: Let’s begin. Most economists claim that money was invented to replace the barter system. But you’ve found something quite different, am I correct?

David Graeber: Yes there’s a standard story we’re all taught, a ‘once upon a time’ — it’s a fairy tale.

It really deserves no other introduction: according to this theory all transactions were by barter. “Tell you what, I’ll give you twenty chickens for that cow.” Or three arrow-heads for that beaver pelt or what-have-you. This created inconveniences, because maybe your neighbor doesn’t need chickens right now, so you have to invent money.

The story goes back at least to Adam Smith and in its own way it’s the founding myth of economics. Now, I’m an anthropologist and we anthropologists have long known this is a myth simply because if there were places where everyday transactions took the form of: “I’ll give you twenty chickens for that cow,” we’d have found one or two by now. After all people have been looking since 1776, when the Wealth of Nations first came out. But if you think about it for just a second, it’s hardly surprising that we haven’t found anything.

Think about what they’re saying here – basically: that a bunch of Neolithic farmers in a village somewhere, or Native Americans or whatever, will be engaging in transactions only through the spot trade. So, if your neighbor doesn’t have what you want right now, no big deal. Obviously what would really happen, and this is what anthropologists observe when neighbors do engage in something like exchange with each other, if you want your neighbor’s cow, you’d say, “wow, nice cow” and he’d say “you like it? Take it!” – and now you owe him one. Quite often people don’t even engage in exchange at all – if they were real Iroquois or other Native Americans, for example, all such things would probably be allocated by women’s councils.

So the real question is not how does barter generate some sort of medium of exchange, that then becomes money, but rather, how does that broad sense of ‘I owe you one’ turn into a precise system of measurement – that is: money as a unit of account?

By the time the curtain goes up on the historical record in ancient Mesopotamia, around 3200 BC, it’s already happened. There’s an elaborate system of money of account and complex credit systems. (Money as a medium of exchange, or as standardized circulating units of gold, silver, bronze or whatever, only comes much later.)

So really, rather than the standard story – first there’s barter, then money, then finally credit comes out of that – if anything it’s precisely the other way around. Credit and debt come first; coinage emerges thousands of years later. And when you do find “I’ll give you twenty chickens for that cow” type barter systems, it’s usually where there used to be cash markets but, for some reason – as in Russia in 1998 – the currency has collapsed or disappeared.

PP: You say that by the time historical records start to be written in Mesopotamia, around 3200 BC, a complex financial architecture is already in place. At that point, is society already divided into classes of debtors and creditors? If not, when does this occur? And do you see this as the most fundamental class division in human history?

DG: Well historically, there seem to have been two possibilities.

One is what you found in Egypt: a strong centralized state and administration extracting taxes from everyone else. For most of Egyptian history they never developed the habit of lending money at interest. Presumably, they didn’t have to.

Mesopotamia was different because the state emerged unevenly and incompletely. At first there were giant bureaucratic temples, then also palace complexes, but they weren’t exactly governments and they didn’t extract direct taxes – these were considered appropriate only for conquered populations. Rather they were huge industrial complexes with their own land, flocks and factories. This is where money begins as a unit of account; it’s used for allocating resources within these complexes.

Interest-bearing loans, in turn, probably originated in deals between the administrators and merchants who carried, say, the woollen goods produced in temple factories (which in the very earliest period were at least partly charitable enterprises, homes for orphans, refugees or disabled people for instance) and traded them to faraway lands for metal, timber, or lapis lazuli. The first markets form on the fringes of these complexes and appear to operate largely on credit, using the temples’ units of account. But this gave the merchants and temple administrators and other well-off types the opportunity to make consumer loans to farmers, and then, if say the harvest was bad, everybody would start falling into debt-traps.

This was the great social evil of antiquity – families would have to start pawning off their flocks and fields, and before long their wives and children would be taken off into debt peonage. Often people would start abandoning the cities entirely, joining semi-nomadic bands and threatening to come back in force and overturn the existing order entirely. Rulers would regularly conclude that the only way to prevent complete social breakdown was to declare a clean slate, or ‘washing of the tablets’: they’d cancel all consumer debt and just start over. In fact, the first recorded word for ‘freedom’ in any human language is the Sumerian amargi, a word for debt-freedom and, by extension, freedom more generally, which literally means ‘return to mother’ – since when they declared a clean slate, all the debt peons would get to go home.

PP: You have noted in the book that debt is a moral concept long before it becomes an economic concept. You’ve also noted that it is a very ambivalent moral concept insofar as it can be both positive and negative. Could you please talk about this a little? Which aspect is more prominent?

DG: Well it tends to pivot radically back and forth.

One could tell the history like this: eventually the Egyptian approach (taxes) and Mesopotamian approach (usury) fuse together, people have to borrow to pay their taxes and debt becomes institutionalized.

Taxes are also key to creating the first markets that operate on cash, since coinage seems to be invented or at least widely popularized to pay soldiers – more or less simultaneously in China, India, and the Mediterranean, where governments find the easiest way to provision the troops is to issue them standard-issue bits of gold or silver and then demand everyone else in the kingdom give them one of those coins back again. Thus we find that the language of debt and the language of morality start to merge.

In Sanskrit, Hebrew, Aramaic, ‘debt,’ ‘guilt,’ and ‘sin’ are actually the same word. Much of the language of the great religious movements – reckoning, redemption, karmic accounting and the like – are drawn from the language of ancient finance. But that language is always found wanting and inadequate and twisted around into something completely different. It’s as if the great prophets and religious teachers had no choice but to start with that kind of language because it’s the language that existed at the time, but they only adopted it so as to turn it into its opposite: as a way of saying debts are not sacred, but forgiveness of debt, or the ability to wipe out debt, or to realize that debts aren’t real – these are the acts that are truly sacred.

How did this happen? Well, remember I said that the big question in the origins of money is how a sense of obligation – an ‘I owe you one’ – turns into something that can be precisely quantified? Well, the answer seems to be: when there is a potential for violence. If you give someone a pig and they give you a few chickens back you might think they’re a cheapskate, and mock them, but you’re unlikely to come up with a mathematical formula for exactly how cheap you think they are. If someone pokes out your eye in a fight, or kills your brother, that’s when you start saying, “traditional compensation is exactly twenty-seven heifers of the finest quality and if they’re not of the finest quality, this means war!”

Money, in the sense of exact equivalents, seems to emerge from situations like that, but also, war and plunder, the disposal of loot, slavery. In early Medieval Ireland, for example, slave-girls were the highest denomination of currency. And you could specify the exact value of everything in a typical house even though very few of those items were available for sale anywhere because they were used to pay fines or damages if someone broke them.

But once you understand that taxes and money largely begin with war it becomes easier to see what really happened. After all, every Mafiosi understands this. If you want to take a relation of violent extortion, sheer power, and turn it into something moral, and most of all, make it seem like the victims are to blame, you turn it into a relation of debt. “You owe me, but I’ll cut you a break for now…” Most human beings in history have probably been told this by their creditors. And the crucial thing is: what possible reply can you make but, “wait a minute, who owes what to who here?” And of course for thousands of years, that’s what the victims have said, but the moment you do, you are using the rulers’ language, you’re admitting that debt and morality really are the same thing. That’s the situation the religious thinkers were stuck with, so they started with the language of debt, and then they tried to turn it around and make it into something else.

PP: You’d be forgiven for thinking this was all very Nietzschean. In his ‘On the Genealogy of Morals’ the German philosopher Friedrich Nietzsche famously argued that all morality was founded upon the extraction of debt under the threat of violence. The sense of obligation instilled in the debtor was, for Nietzsche, the origin of civilisation itself. You’ve been studying how morality and debt intertwine in great detail. How does Nietzsche’s argument look after over 100 years? And which do you see as primal: morality or debt?

DG: Well, to be honest, I’ve never been sure if Nietzsche was really serious in that passage or whether the whole argument is a way of annoying his bourgeois audience; a way of pointing out that if you start from existing bourgeois premises about human nature you logically end up in just the place that would make most of that audience most uncomfortable.
In fact, Nietzsche begins his argument from exactly the same place as Adam Smith: human beings are rational. But rational here means calculation, exchange and hence, trucking and bartering; buying and selling is then the first expression of human thought and is prior to any sort of social relations.

But then he reveals exactly why Adam Smith had to pretend that Neolithic villagers would be making transactions through the spot trade. Because if we have no prior moral relations with each other, and morality just emerges from exchange, then ongoing social relations between two people will only exist if the exchange is incomplete – if someone hasn’t paid up.

But in that case, one of the parties is a criminal, a deadbeat and justice would have to begin with the vindictive punishment of such deadbeats. Thus he says all those law codes where it says ‘twenty heifers for a gouged-out eye’ – really, originally, it was the other way around. If you owe someone twenty heifers and don’t pay they gouge out your eye. Morality begins with Shylock’s pound of flesh.
Needless to say there’s zero evidence for any of this – Nietzsche just completely made it up. The question is whether even he believed it. Maybe I’m an optimist, but I prefer to think he didn’t.

Anyway, it only makes sense if you assume those premises: that all human interaction is exchange, and therefore all ongoing relations are debts. This flies in the face of everything we actually know or experience of human life. But once you start thinking that the market is the model for all human behavior, that’s where you end up.

If however you ditch the whole myth of barter, and start with a community where people do have prior moral relations, and then ask, how do those moral relations come to be framed as ‘debts’ – that is, as something precisely quantified, impersonal, and therefore, transferrable – well, that’s an entirely different question. In that case, yes, you do have to start with the role of violence.

PP: Interesting. Perhaps this is a good place to ask you about how you conceive your work on debt in relation to the great French anthropologist Marcel Mauss’ classic work on gift exchange.

DG: Oh, in my own way I think of myself as working very much in the Maussian tradition. Mauss was one of the first anthropologists to ask: well, all right, if not barter, then what? What do people who don’t use money actually do when things change hands? Anthropologists had documented an endless variety of such economic systems, but hadn’t really worked out common principles. What Mauss noticed was that in almost all of them, everyone pretended as if they were just giving one another gifts and then they fervently denied they expected anything back. But in actual fact everyone understood there were implicit rules and recipients would feel compelled to make some sort of return.

What fascinated Mauss was that this seemed to be universally true, even today. If I take a free-market economist out to dinner he’ll feel he should return the favor and take me out to dinner later. He might even think he is something of a chump if he doesn’t – and this even though his theory tells him he just got something for nothing and should be happy about it. Why is that? What is this force that compels me to want to return a gift?

This is an important argument, and it shows there is always a certain morality underlying what we call economic life. But it strikes me that if you focus too much on just that one aspect of Mauss’ argument you end up reducing everything to exchange again, with the proviso that some people are pretending they aren’t doing that.

Mauss didn’t really think of everything in terms of exchange; this becomes clear if you read his other writings besides ‘The Gift’. Mauss insisted there were lots of different principles at play besides reciprocity in any society – including our own.

For example, take hierarchy. Gifts given to inferiors or superiors don’t have to be repaid at all. If another professor takes our economist out to dinner, sure, he’ll feel that he should reciprocate; but if an eager grad student does, he’ll probably figure just accepting the invitation is favor enough; and if George Soros buys him dinner, then great, he did get something for nothing after all. In explicitly unequal relations, if you give somebody something, far from doing you a favor back, they’re more likely to expect you to do it again.

Or take communistic relations – and I define these, following Mauss actually, as any relations where people interact on the basis of ‘from each according to their abilities, to each according to their needs’. In these relations people do not rely on reciprocity, for example when trying to solve a problem, even inside a capitalist firm. (As I always say, if somebody working for Exxon says, “hand me the screwdriver,” the other guy doesn’t say, “yeah, and what do I get for it?”) Communism is in a way the basis of all social relations – in that if the need is great enough (I’m drowning) or the cost small enough (can I have a light?) everyone will be expected to act that way.

Anyway that’s one thing I got from Mauss. There are always going to be lots of different sorts of principles at play simultaneously in any social or economic system – which is why we can never really boil these things down to a science. Economics tries to, but it does it by ignoring everything except exchange.

PP: Let’s move on to economic theory then. Economics has some pretty specific theories about what money is. There’s the mainstream approach that we discussed briefly above; this is the commodity theory of money, in which specific commodities come to serve as a medium of exchange to replace crude barter economies. But there are also alternative theories that are becoming increasingly popular at the moment. One is the Circuitist theory of money, in which all money is seen as a debt incurred by some economic agent. The other – which actually integrates the Circuitist approach – is the Chartalist theory of money, in which all money is seen as a medium of exchange issued by the Sovereign and backed by the enforcement of tax claims. Maybe you could say something about these theories?

DG: One of my inspirations for ‘Debt: The First 5,000 Years’ was Keith Hart’s essay ‘Two Sides of the Coin’. In that essay Hart points out that not only do different schools of economics have different theories on the nature of money, but there is also reason to believe that both are right. Money has, for most of its history, been a strange hybrid entity that takes on aspects of both commodity (object) and credit (social relation). What I think I’ve managed to add to that is the historical realization that while money has always been both, it swings back and forth – there are periods where credit is primary and everyone adopts more or less Chartalist theories of money, and others where cash tends to predominate and commodity theories of money instead come to the fore. We tend to forget that in, say, the Middle Ages, from France to China, Chartalism was just common sense: money was just a social convention; in practice, it was whatever the king was willing to accept in taxes.

PP: You say that history swings between periods of commodity money and periods of virtual money. Do you not think that we’ve reached a point in history where due to technological and cultural evolution we may have seen the end of commodity money forever?

DG: Well, the cycles are getting a bit tighter as time goes by. But I think we’ll still have to wait at least 400 years to really find out. It is possible that this era is coming to an end but what I’m more concerned with now is the period of transition.

The last time we saw a broad shift from commodity money to credit money it wasn’t a very pretty sight. To name a few examples, we had the fall of the Roman Empire, the Kali Age in India and the breakdown of the Han dynasty… There was a lot of death, catastrophe and mayhem. The final outcome was in many ways profoundly liberatory for the bulk of those who lived through it – chattel slavery, for example, was largely eliminated from the great civilizations. This was a remarkable historical achievement. The decline of cities actually meant most people worked far less. But still, one does rather hope the dislocation won’t be quite so epic in its scale this time around, especially since the actual means of destruction are so much greater.

PP: Which do you see as playing a more important role in human history: money or debt?

DG: Well, it depends on your definitions. If you define money in the broadest sense, as any unit of account whereby you can say 10 of these are worth 7 of those, then you can’t have debt without money. Debt is just a promise that can be quantified by means of money (and therefore becomes impersonal, and therefore transferable). But if you are asking which has been the more important form of money, credit or coin, then probably I would have to say credit.

PP: Let’s move on to some of the real world problems facing the world today. We know that in many Western countries over the past few years households have been running up enormous debts, from credit card debts to mortgages (the latter of which were one of the root causes of the recent financial crisis). Some economists are saying that economic growth since the Clinton era was essentially run on an unsustainable inflating of household debt. From an historical perspective what do you make of this phenomenon?

DG: From an historical perspective, it’s pretty ominous. One could go further back than the Clinton era, actually – a case could be made that what we are seeing now is the same crisis we were facing in the 70s; it’s just that we managed to fend it off for 30 or 35 years through all these elaborate credit arrangements (and, of course, the super-exploitation of the global South, through the ‘Third World Debt Crisis’).

As I said, Eurasian history, taken in its broadest contours, shifts back and forth between periods dominated by virtual credit money and those dominated by actual coin and bullion. The credit systems of the ancient Near East give way to the great slave-holding empires of the Classical world in Europe, India, and China, which used coinage to pay their troops. In the Middle Ages the empires go and so does the coinage – the gold and silver is mostly locked up in temples and monasteries – and the world reverts to credit. Then, after 1492 or so, you have the return of world empires, and of gold and silver currency – together with slavery, for that matter.

What’s been happening since Nixon went off the gold standard in 1971 has just been another turn of the wheel – though of course it never happens the same way twice. However, in one sense, I think we’ve been going about things backwards. In the past, periods dominated by virtual credit money have also been periods where there have been social protections for debtors. Once you recognize that money is just a social construct, a credit, an IOU, then first of all what is to stop people from generating it endlessly? And how do you prevent the poor from falling into debt traps and becoming effectively enslaved to the rich? That’s why you had Mesopotamian clean slates, Biblical Jubilees, Medieval laws against usury in both Christianity and Islam and so on and so forth.

Since antiquity the worst-case scenario that everyone felt would lead to total social breakdown was a major debt crisis; ordinary people would become so indebted to the top one or two percent of the population that they would start selling family members into slavery, or eventually, even themselves.

Well, what happened this time around? Instead of creating some sort of overarching institution to protect debtors, they create these grandiose, world-scale institutions like the IMF or S&P to protect creditors. They essentially declare (in defiance of all traditional economic logic) that no debtor should ever be allowed to default. Needless to say the result is catastrophic. We are experiencing something that to me, at least, looks exactly like what the ancients were most afraid of: a population of debtors skating at the edge of disaster.

And, I might add, if Aristotle were around today, I very much doubt he would think that the distinction between renting yourself or members of your family out to work and selling yourself or members of your family to work was more than a legal nicety. He’d probably conclude that most Americans were, for all intents and purposes, slaves.

PP: You mention that the IMF and S&P are institutions that are mainly geared toward extracting debts for creditors. This seems to have become the case in the European monetary union too. What do you make of the situation in Europe at the moment?

DG: Well, I think this is a prime example of why existing arrangements are clearly untenable. Obviously the ‘whole debt’ cannot be paid. But even when some French banks offered voluntary write-downs for Greece, the others insisted they would treat it as if it were a default anyway. The UK takes the even weirder position that this is true even of debts the government owes to banks that have been nationalized – that is, technically, that they owe to themselves! If that means that disabled pensioners are no longer able to use public transit or youth centers have to be closed down, well that’s simply the ‘reality of the situation,’ as they put it.

These ‘realities’ are being increasingly revealed to simply be ones of power. Clearly any pretence that markets maintain themselves, that debts always have to be honored, went by the boards in 2008. That’s one of the reasons I think you see the beginnings of a reaction in a remarkably similar form to what we saw during the heyday of the ‘Third World debt crisis’ – what got called, rather weirdly, the ‘anti-globalization movement’. This movement called for genuine democracy and actually tried to practice forms of direct, horizontal democracy. In the face of this there was the insidious alliance between financial elites and global bureaucrats (whether the IMF, World Bank, WTO, now EU, or what-have-you).

When thousands of people begin assembling in squares in Greece and Spain calling for real democracy, what they are effectively saying is: “Look, in 2008 you let the cat out of the bag. If money really is just a social construct now, a promise, a set of IOUs, and even trillions in debts can be made to vanish if sufficiently powerful players demand it, then, if democracy is to mean anything, it means that everyone gets to weigh in on the process of how these promises are made and renegotiated.” I find this extraordinarily hopeful.

PP: Broadly speaking how do you see the present debt/financial crisis unravelling? Without asking you to peer into the proverbial crystal-ball – because that’s a silly thing to ask of anyone – how do you see the future unfolding; in the sense of how do you take your bearings right now?

DG: For the long-term future, I’m pretty optimistic. We might have been doing things backwards for the last 40 years, but in terms of 500-year cycles, well, 40 years is nothing. Eventually there will have to be a recognition that in a phase of virtual money, safeguards have to be put in place – and not just ones to protect creditors. How many disasters it will take to get there, I can’t say.

But in the meantime there is another question to be asked: once we do these reforms, will the results be something that could even be called ‘capitalism’?

The culture of the geoglyphs (Fapesp)

HUMANITIES | ARCHAEOLOGY
Enormous circles and squares were dug into the Amazonian soil 2,000 years ago
Marcos Pivetta
Print edition – August 2011
© EDISON CAETANO
Geometric design in Plácido de Castro, Acre: the stage for ceremonies

There was a time when the gods seem to have been geometric in one corner of the Amazon: eastern Acre, near the border with Bolivia. And that time probably began well before previously thought. Twelve radiocarbon dates from different sectors of three archaeological sites in the region indicate that the construction of the so-called geoglyphs – large designs dug into the forest soil by a still-unidentified pre-Columbian culture, fond of the straight lines of squares and rectangles and the rounded strokes of circles and ellipses – began at least 2,000 years ago. Led by archaeologist Denise Schaan of the Universidade Federal do Pará (UFPA), the new study, whose paper is being finalized before submission to a scientific journal, extends the chronology of the Amazonian geoglyph culture. Until now, the only available date was one obtained in 2003 at one of these archaeological sites in Acre by Finnish researchers, which placed the designs as having been produced between the 13th and 14th centuries.

Based on remains of burnt charcoal found in a geological layer rich in pottery fragments – an indication of some human presence there – the new series of dates also suggests that the unknown authors of the geoglyphs may have disappeared before the Europeans arrived in the Americas. None of the three sites studied (Fazenda Colorada, Jacó Sá and Severino Calazas), located within a 20-kilometer radius on a plateau of firm, non-floodable land between the valleys of the Acre and Iquiri rivers, has so far yielded any evidence of tribal habitation within the last 500 years. “The dating results were a surprise,” says Schaan, who has led the archaeological work on the geoglyphs since 2005 with funding from CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico), the Academy of Finland and the state of Acre.

The age of the geometric designs, molded into the Amazonian soil by removing large quantities of earth, is not the only point under revision. The primary function of these sites, which may contain more than one type of geoglyph as well as traces of ancient roads, also remains an open question. Since the 1970s, when parts of Acre began to be deforested for farming and ranching and the first geoglyphs were spotted in areas until then covered by forest, researchers have wondered why the region’s ancient inhabitants carved circles and squares in bas-relief into the ground. The initial hypothesis that the structures, whose outlines are formed by continuous ditches dug into the terrain, might have had defensive functions, like a fort, seems to make less and less sense. Recent excavations at nearly a dozen Acre sites associated with the designs indicate that these places were not used primarily as dwellings by ancient peoples. Like a kind of tribal plaza, the inner area of the geoglyphs was probably used for ceremonies. “The archaeological evidence suggests that these sites were used for special gatherings, religious rituals and only occasionally as villages,” says Schaan.

When the field expeditions began, the researchers worked with the idea that the geoglyph sites might yield some kind of evidence of large-scale, prolonged human occupation in their vicinity. After all, it is more than reasonable to suppose that the people responsible for crafting the large, precise designs in the soil were numerous and had a complex social structure. "The geoglyph builders had no stone in that region, but they carried out enormous earthworks that demanded manpower and organizational skills comparable to those of other ancient civilizations," says archaeologist Martti Pärssinen, of the Ibero-American Institute of Finland, based in Madrid, who collaborates with the Brazilian team and is also one of the authors of the paper reporting the new datings of the Acre geoglyphs.

On average, the inner area of a geoglyph ranges from 1 to 3 hectares. The smaller figures generally have rounded lines, while the larger ones can be either circles or squares. At the sites studied, the depth of the trenches that form the lines of the designs ranged from 35 centimeters to 5 meters (m), and the width of the ditches from 1.75 to 20 m. The earth removed to open the trenches was used by the geoglyphs' architects to build small embankments, up to 1.5 m high, following the outlines of the figures. To accomplish all this work, thousands of people would have had to live at some point in the vicinity of the geoglyphs and work in a coordinated way on their construction. But the archaeological finds at the sites investigated in detail once again fail to confirm the researchers' initial assumption.

No preserved human remains have been found anywhere. Nor are there patches of so-called terra preta, a type of black soil very common in other parts of the Amazon, which forms from organic remains produced by prolonged human settlement in an area. The few artifacts associated with a material culture, generally pottery fragments, were recovered at the top or bottom of the ditches that form the geometric lines, or in small earthen mounds, probably remains of prehistoric dwellings, located right beside the outlines of the geoglyphs. Within the flat area marked out by the mysterious circles and squares dug into the ground, nothing of real significance has been recovered. "We still need to find the dwelling places and cemeteries of the geoglyph builders," says paleontologist Alceu Ranzi, now a retired professor at the Federal University of Acre (Ufac), who is credited with the (re)discovery of the ground designs over the past two decades. "They must have lived somewhere not very far from the sites."

© AGÊNCIA DE NOTÍCIAS DO ACRE E EDISON CAETANO / PROJETO GEOGLIFOS DA AMAZÔNIA OCIDENTAL
Diversity of forms: geoglyphs with rounded and straight lines

Aerospace technology has been an ally of archaeologists in locating and studying the Amazonian geoglyph sites. Viewing the designs from above and at a distance, from an airplane or through the lenses of a satellite, makes it easier to search for the large geometric figures across deforested areas (where the forest still stands, this approach does not work). Initially, the scientists used free imagery from Google Earth to look for new occurrences of the designs. From 2007 on, with support from the government of Acre, they also obtained imagery from the Taiwanese satellite Formosat-2, which offers greater coverage. With these remote-prospecting tools, the number of known geoglyph sites jumped: from 32 in 2005 to 150 two years later, and today around 300. These figures refer to Acre, which appears to be the region where the designs are concentrated; they may be spread over a portion of the state covering 25,000 square kilometers, 16 times the size of the city of São Paulo. Areas with geoglyphs have also been identified by this methodology in the neighboring states of Amazonas and Rondônia, and in Bolivia. "It is no longer so easy to find new sites, since we have already carried out several systematic sweeps," explains geographer Antonia Barbosa, of Ufac, a member of the national team that studied the geoglyphs. "When we began working with satellite images, we would find about 10 sites per sweep. Today, with luck, we find one or two."

There is no concrete evidence about who the geoglyph builders were, nor how much time the task consumed. The construction of ditches and embankments to enclose houses and villages was already taking place in Europe, for example, some 10,000 years ago, at the dawn of agriculture. But in the Amazon this type of construction is much rarer. Since there is so far no indication that the border between Acre and Bolivia was home to a single great lost civilization, whose houses and large villages no one can find, archaeologists have come to work with an intermediate scenario. There was probably no vast lost empire worshipping geometric gods in this corner of the Amazon, but perhaps two or three peoples, still semi-nomadic and scattered across small villages (harder to find today), who shared certain cultural traits, such as the making of geoglyphs. "The geoglyph society was complex in some ways, but it was in a formative, transitional stage," says archaeologist Sanna Saunaluoma, of the University of Helsinki, who studies the designs in both Bolivia and Acre, here alongside the Brazilians.

Members of the Tacana and Arawak ethnic groups, who today inhabit the Bolivian and Brazilian sides of this binational border respectively, are pointed to as possible descendants of the peoples who had the tradition of tracing enormous circles and squares in the ground. But if they once carried this common tradition, they no longer practice it. To make the picture more uncertain, there is no proof that the two tribes were actually present in this area at the time the geoglyphs were made, nor is it known what territorial boundary separated them. One clue, still tenuous, that at least one of these groups, the Tacana, may have built geoglyphs comes from a text from the end of the 19th century. It recounts the encounter of a Brazilian colonel, on the border with Bolivia, with 200 Indians who lived in a highly organized village and worshipped geometric gods carved in wood. The story proves nothing, but it may be a trail worth following.

The Mathematics of Changing Your Mind (N.Y. Times)

By JOHN ALLEN PAULOS
Published: August 5, 2011

Sharon Bertsch McGrayne introduces Bayes’s theorem in her new book with a remark by John Maynard Keynes: “When the facts change, I change my opinion. What do you do, sir?”

Illustration by Shannon May

THE THEORY THAT WOULD NOT DIE. How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines and Emerged Triumphant From Two Centuries of Controversy. By Sharon Bertsch McGrayne, 320 pp. Yale University Press. $27.50.

Bayes’s theorem, named after the 18th-century Presbyterian minister Thomas Bayes, addresses this selfsame essential task: How should we modify our beliefs in the light of additional information? Do we cling to old assumptions long after they’ve become untenable, or abandon them too readily at the first whisper of doubt? Bayesian reasoning promises to bring our views gradually into line with reality and so has become an invaluable tool for scientists of all sorts and, indeed, for anyone who wants, putting it grandiloquently, to sync up with the universe. If you are not thinking like a Bayesian, perhaps you should be.

At its core, Bayes’s theorem depends upon an ingenious turnabout: If you want to assess the strength of your hypothesis given the evidence, you must also assess the strength of the evidence given your hypothesis. In the face of uncertainty, a Bayesian asks three questions: How confident am I in the truth of my initial belief? On the assumption that my original belief is true, how confident am I that the new evidence is accurate? And whether or not my original belief is true, how confident am I that the new evidence is accurate? One proto-Bayesian, David Hume, underlined the importance of considering evidentiary probability properly when he questioned the authority of religious hearsay: one shouldn’t trust the supposed evidence for a miracle, he argued, unless it would be even more miraculous if the report were untrue.

The theorem has a long and surprisingly convoluted history, and McGrayne chronicles it in detail. It was Bayes’s friend Richard Price, an amateur mathematician, who developed Bayes’s ideas and probably deserves the glory that would have resulted from a Bayes-Price theorem. After Price, however, Bayes’s theorem lapsed into obscurity until the illustrious French mathematician Pierre Simon Laplace extended and applied it in clever, nontrivial ways in the early 19th century. Thereafter it went in and out of fashion, was applied in one field after another only to be later condemned for being vague, subjective or unscientific, and became a bone of contention between rival camps of mathematicians before enjoying a revival in recent years.

The theorem itself can be stated simply. Beginning with a provisional hypothesis about the world (there are, of course, no other kinds), we assign to it an initial probability called the prior probability or simply the prior. After actively collecting or happening upon some potentially relevant evidence, we use Bayes’s theorem to recalculate the probability of the hypothesis in light of the new evidence. This revised probability is called the posterior probability or simply the posterior. Specifically Bayes’s theorem states (trumpets sound here) that the posterior probability of a hypothesis is equal to the product of (a) the prior probability of the hypothesis and (b) the conditional probability of the evidence given the hypothesis, divided by (c) the probability of the new evidence.

Consider a concrete example. Assume that you’re presented with three coins, two of them fair and the other a counterfeit that always lands heads. If you randomly pick one of the three coins, the probability that it’s the counterfeit is 1 in 3. This is the prior probability of the hypothesis that the coin is counterfeit. Now after picking the coin, you flip it three times and observe that it lands heads each time. Seeing this new evidence that your chosen coin has landed heads three times in a row, you want to know the revised posterior probability that it is the counterfeit. The answer to this question, found using Bayes’s theorem (calculation mercifully omitted), is 4 in 5. You thus revise your probability estimate of the coin’s being counterfeit upward from 1 in 3 to 4 in 5.
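The mercifully omitted calculation is short enough to carry out. A minimal sketch in Python (the `posterior` helper is ours, not from the book or the review), using exact fractions so the 4-in-5 answer comes out exactly:

```python
from fractions import Fraction

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes's theorem: P(H|E) = P(H) * P(E|H) / P(E),
    with the evidence P(E) expanded over H and not-H."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Prior: 1 of the 3 coins is the counterfeit.
prior = Fraction(1, 3)
# Evidence: three heads in a row. The counterfeit always lands heads;
# a fair coin does so with probability (1/2)**3 = 1/8.
p = posterior(prior, Fraction(1), Fraction(1, 8))
print(p)  # 4/5
```

The same helper can be iterated one flip at a time (prior 1/3 → 1/2 → 2/3 → 4/5), which is how Bayesian updating is typically applied in practice.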

A serious problem arises, however, when you apply Bayes’s theorem to real life: it’s often unclear what initial probability to assign to a hypothesis. Our intuitions are embedded in countless narratives and arguments, and so new evidence can be filtered and factored into the Bayes probability revision machine in many idiosyncratic and incommensurable ways. The question is how to assign prior probabilities and evaluate evidence in situations much more complicated than the tossing of coins, situations like global warming or autism. In the latter case, for example, some might have assigned a high prior probability to the hypothesis that the thimerosal in vaccines causes autism. But then came new evidence — studies showing that permanent removal of the compound from these vaccines did not lead to a decline in autism. The conditional probability of this evidence given the thimerosal hypothesis is tiny at best and thus a convincing reason to drastically lower the posterior probability of the hypothesis. Of course, people wedded to their priors can always try to rescue them from the evidence by introducing all sorts of dodges. Witness die-hard birthers and truthers, for example.

McGrayne devotes much of her book to Bayes’s theorem’s many remarkable contributions to history: she discusses how it was used to search for nuclear weapons, devise actuarial tables, demonstrate that a document seemingly incriminating Colonel Dreyfus was most likely a forgery, improve low-resolution computer images, judge the authorship of the disputed Federalist papers and determine the false positive rate of mammograms. She also tells the story of Alan Turing and others whose pivotal crypto-analytic work unscrambling German codes may have helped shorten World War II.

Statistics is an imperialist discipline that can be applied to almost any area of science or life, and this litany of applications is intended to be the unifying thread that sews the book into a coherent whole. It does so, but at the cost of giving it a list-like, formulaic feel. More successful are McGrayne’s vivifying sketches of the statisticians who devoted themselves to Bayesian polemics and counterpolemics. As McGrayne amply shows, orthodox Bayesians have long been opposed, sometimes vehemently, by so-called frequentists, who have objected to their tolerance for subjectivity. The nub of the differences between them is that for Bayesians the prior can be a subjective expression of the degree of belief in a hypothesis, even one about a unique event or one that has as yet never occurred. For frequentists the prior must have a more objective foundation; ideally that is the relative frequency of events in repeatable, well-defined experiments. McGrayne’s statisticians exhibit many differences, and she cites the quip that you can nevertheless always tell them apart by their posteriors, a good word on which to end.

John Allen Paulos, a professor of mathematics at Temple University, is the author of several books, including “Innumeracy” and, most recently, “Irreligion.”

No polling firm managed to forecast the gap between Macri and Filmus (Clarín)

[Municipal elections in Buenos Aires]
11/07/11
The pro-government pollsters missed the gap between the top two candidates by 13 points

By MARTÍN BRAVO

Two polling firms released surveys in the week before these city elections showing a difference of practically 15 points in voting intention between Mauricio Macri and Daniel Filmus.

The rest indicated a smaller gap, and the closer the pollster was to Kirchnerism, the narrower it was.

As this edition went to press, official data showed the current head of government with 803,486 votes (47.06%) and the Kirchnerist senator with 475,364 (27.84%). With 96.31% of polling stations counted, Macri's lead stood at almost 19.3 points. Management & Fit released its last figures on July 4: 42.2% for Macri and 27.4% for Filmus, a distance of 14.8 points. The Poliarquía survey, commissioned by the newspaper La Nación and published on Friday, projected the same gap: 45.3% to 30.5%.

Aresco, a pollster regularly consulted by the national government, circulated 48 hours before the election (not published because of the pre-election blackout) a voting intention of 43.8% for Macri versus 34.3% for Filmus, a lead of 9.5 points. Its exit poll gave a considerably larger distance, 44% to 31%, though still below the final margin. Measured against the actual result, neither came close. OPSM released its figures on Thursday, to which this newspaper had access: 38% for Macri and 27.8% for Filmus, a difference of 10.2 points, slightly smaller than its exit poll of 44% to 33.7%. This firm presents itself as independent, although the Kirchnerist camp pays it close attention.

Rouvier & Asociados, which worked for Filmus in this campaign, gave a gap of 6.4 points a week before the election (36.2% to 29.8%) and of 7.4 points (42.6% to 35.2%) last Thursday. Analogías, which presented its work as independent, eight days earlier assigned Macri a voting intention of 38.2% and Filmus 29.7%, a gap of 8.5 points. In its latest measurements, not published by Clarín because the electoral blackout was already in force, the difference grew by 1.5 points: 42% to 32%.

CEOP, the pollster that in 2009 announced an 8-point victory for Néstor Kirchner in the province (he lost by 3%), gave Macri 36.6% and Filmus 30.5%, a distance of 6.1 points. This firm, which also polls regularly for the national government, gave Rosana Bertone a wide lead in the Tierra del Fuego runoff, in which Fabiana Ríos won reelection.

Fernando Solanas had 12.82% of the vote according to official data at press time, a higher percentage than most polls had measured for him, among them Aresco (8.6%), OPSM (8.9%) and Rouvier & Asociados (10%). But the survey commissioned by his party, Proyecto Sur, from the firm Panorama fared even worse against the actual result: on Thursday it reported that the filmmaker had a voting intention of 24.5%, above Filmus, and it also had him beating Macri in a hypothetical runoff.

Nor did the survey by Nueva Comunicación, commissioned by Jorge Telerman, anticipate the comfortable victory of the current head of government: its last measurement, on July 2, assigned Macri a voting intention of 39.1% against 31.8% for Filmus, a lead of 7.3 points. It gave Telerman 6.9%, which would have secured him a seat in the legislature. At press time he had 1.76%.

The Controversy about Hypothesis Testing

From an interesting call for papers:

“Scientists spend a lot of time testing a hypothesis, and classifying experimental results as (in)significant evidence. But even after a century of hot debate, there is no consensus on what this concept of significance implies, how the results of hypothesis tests should be interpreted, and which practical pitfalls have to be avoided. Take the fierce criticisms of significance testing in economics, take the endless debate about statistical reform in psychology, take the foundational disagreement between frequentists and Bayesians about what constitutes statistical evidence.”

(Link to the conference here).

Order in chaos (FAPESP)

31/05/2011

By Elton Alisson

Researchers develop a theoretical model to explain and determine the conditions for the occurrence of isochronal synchronization in chaotic systems. The study could lead to improvements in systems such as telecommunications.

Agência FAPESP – In nature, swarms of fireflies send light signals to one another. They do so at first autonomously, individually and independently, and, under certain circumstances, this can give rise to a robust collective phenomenon called synchronization. As a result, thousands of fireflies flash in unison, rhythmically, emitting light signals in synchrony with the rest.

A little over 20 years ago it was discovered that synchronization also occurs in chaotic systems – complex systems with unpredictable behavior found in the most varied areas, such as the economy, climate or agriculture. Another, more recent discovery was that synchronization withstands delays in the propagation of the emitted signals.

In these situations, under certain circumstances, synchronization can emerge in its isochronal form, that is, with zero lag. This means that devices such as oscillators are perfectly synchronized in time even while receiving delayed signals from the others. Until now, however, the theoretical models developed to explain the phenomenon had not taken this fact into account.

New research by scientists at the Instituto Tecnológico de Aeronáutica (ITA) and the Instituto Nacional de Pesquisas Espaciais (Inpe) has produced a theoretical model demonstrating how synchronization occurs when there is a delay in the sending and receiving of information between chaotic oscillators.
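The continuous, delay-coupled oscillators and the Lyapunov-Krasovskii criteria of the paper are beyond a short sketch, but the basic phenomenon — two mutually coupled chaotic units falling into step — can be illustrated with the simpler, delay-free case of two coupled logistic maps. The coupling scheme and the strength `eps = 0.4` are our illustrative choices, not the authors' model:

```python
def f(x, r=4.0):
    """Fully chaotic logistic map."""
    return r * x * (1.0 - x)

def simulate(x, y, eps=0.4, steps=1000):
    """Two mutually coupled maps: each unit is nudged toward the other's output."""
    for _ in range(steps):
        x, y = ((1 - eps) * f(x) + eps * f(y),
                (1 - eps) * f(y) + eps * f(x))
    return x, y

x, y = simulate(0.3, 0.7)
print(abs(x - y))  # the difference shrinks toward zero: the maps synchronize
```

For this choice, each iteration shrinks the difference |x - y| by a factor of at most |1 - 2·eps|·4 = 0.8, so the two chaotic trajectories converge onto the same orbit. The delayed case treated in the paper is precisely where this elementary argument fails and the Lyapunov-Krasovskii machinery becomes necessary.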

The results of the study, which can be used to improve technological systems, were published in April in the Journal of Physics A: Mathematical and Theoretical.

In the study, the researchers sought to explain synchronization when there is a delay in the receipt of information between chaotic oscillators. The goal is to determine the conditions under which the phenomenon occurs in real systems.

"Using Lyapunov-Krasovskii stability theory, which deals with the stability problem in dynamical systems, we established stability criteria that, from parameters such as the delay in the receipt of information between the oscillators, make it possible to determine whether the oscillators will enter a state of isochronal synchronization," one of the article's authors, José Mario Vicensi Grzybowski, told Agência FAPESP.

"It was the first fully analytical demonstration of the stability of isochronal synchronization. There is nothing similar in the literature," said Grzybowski, who is pursuing a doctorate in electronic engineering and computing at ITA with a FAPESP fellowship.

The study's findings may make it possible to improve technological systems based on synchronization, especially chaos-based telecommunication systems.

Possible applications also include satellites flying in formation, in which each one must maintain an adequate relative distance from the others while at the same time establishing a common reference (synchronization) that allows the exchange of information and the electronic collection and combination of images from the various satellites in the formation.

"In that case, the reference can be established through a phenomenon that emerges naturally as long as the appropriate conditions are provided, reducing or even eliminating the need for algorithms," he said.

Natural complex networks

Unmanned aerial vehicles, which can explore a given region together, as well as robots and distributed control systems, which also need to work in a coordinated way in a network, can make use of the research results.

The study's authors also intend to make the synchronization phenomenon occur in technological systems without the need for a leader dictating how the other oscillating agents should behave.

"We intend to eliminate the figure of the leader and make synchronization arise from the interaction among the agents, as happens with a species of firefly in Asia that synchronizes without any one of them leading," said Elbert Einstein Macau, a researcher at Inpe and another author of the study, which also included Takashi Yoneyama, of ITA.

According to them, this research analyzed synchronization with a time delay in the transmission of information between two oscillators. In their current work, the results will be extended to a network of oscillators, scaling up both the problem and its solution.

That way, they say, it will be possible to model phenomena based on isochronal synchronization at network scale and to address natural phenomena whose complexity is often far greater.

"In principle, any real phenomenon based on isochronal synchronization can be treated with these theoretical elements, which can serve in the design of technological networks, or to analyze and understand emergent behavior in natural networks, even those we have no way of influencing directly," said Grzybowski.

The article Stability of isochronal chaos synchronization (doi:10.1088/1751-8113/44/17/175103) can be read at http://iopscience.iop.org/1751-8121/44/17/175103/pdf/1751-8121_44_17_175103.pdf

Intuitions Regarding Geometry Are Universal, Study Suggests (ScienceDaily)

ScienceDaily (May 26, 2011) — All human beings may have the ability to understand elementary geometry, independently of their culture or their level of education.

A Mundurucu participant measuring an angle using a goniometer laid on a table. (Credit: © Pierre Pica / CNRS)

This is the conclusion of a study carried out by CNRS, Inserm, CEA, the Collège de France, Harvard University and Paris Descartes, Paris-Sud 11 and Paris 8 universities (1). It was conducted on Amazonian Indians living in an isolated area, who had not studied geometry at school and whose language contains little geometric vocabulary. Their intuitive understanding of elementary geometric concepts was compared with that of populations who, on the contrary, had been taught geometry at school. The researchers were able to show that all human beings may share an intuitive grasp of geometry. This ability may, however, only emerge from the age of 6-7 years. It could be innate, or instead acquired at an early age as children become aware of the space around them. This work is published in PNAS.

Euclidean geometry makes it possible to describe space using planes, spheres, straight lines, points, etc. Can geometric intuitions emerge in all human beings, even in the absence of geometric training?

To answer this question, the team of cognitive science researchers elaborated two experiments aimed at evaluating geometric performance, whatever the level of education. The first test consisted in answering questions on the abstract properties of straight lines, in particular their infinite character and their parallelism properties. The second test involved completing a triangle by indicating the position of its apex as well as the angle at this apex.

To carry out this study correctly, it was necessary to have participants that had never studied geometry at school, the objective being to compare their ability in these tests with others who had received training in this discipline. The researchers focused their study on Mundurucu Indians, living in an isolated part of the Amazon Basin: 22 adults and 8 children aged between 7 and 13. Some of the participants had never attended school, while others had been to school for several years, but none had received any training in geometry. In order to introduce geometry to the Mundurucu participants, the scientists asked them to imagine two worlds, one flat (plane) and the second round (sphere), on which were dotted villages (corresponding to the points in Euclidean geometry) and paths (straight lines). They then asked them a series of questions illustrated by geometric figures displayed on a computer screen.

Around thirty adults and children from France and the United States, who, unlike the Mundurucu, had studied geometry at school, were also subjected to the same tests.

The result was that the Mundurucu Indians proved to be fully capable of resolving geometric problems, particularly in terms of planar geometry. For example, to the question Can two paths never cross?, a very large majority answered Yes. Their responses to the second test, that of the triangle, highlight the intuitive character of an essential property in planar geometry, namely the fact that the sum of the angles of the apexes of a triangle is constant (equal to 180°).

And, in a spherical universe, it turns out that the Amazonian Indians gave better answers than the French or North American participants who, by virtue of learning geometry at school, acquire greater familiarity with planar geometry than with spherical geometry. Another interesting finding was that young North American children between 5 and 6 years old (who had not yet been taught geometry at school) had mixed test results, which could signify that a grasp of geometric notions is acquired from the age of 6-7 years.
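The planar/spherical contrast the Mundurucu handled well can be made concrete: on a plane the angles of a triangle sum to 180°, while on a sphere the sum always exceeds 180°. A small self-contained sketch (the octant triangle, with a vertex at the pole and two on the equator, is our choice of example, not from the study):

```python
import math

def angle_at(a, b, c):
    """Angle of a spherical triangle at vertex a, between the great-circle
    arcs toward b and c (vertices given as unit vectors)."""
    def tangent(p, q):
        # Component of q orthogonal to p, normalized:
        # the direction of the arc p -> q as it leaves p.
        dot = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - dot * pi for pi, qi in zip(p, q)]
        norm = math.sqrt(sum(ti * ti for ti in t))
        return [ti / norm for ti in t]
    u, v = tangent(a, b), tangent(a, c)
    return math.degrees(math.acos(sum(ui * vi for ui, vi in zip(u, v))))

# Triangle covering one octant of the sphere: three right angles.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B)
print(total)  # ~270 degrees, versus 180 on a plane
```

The excess over 180° is proportional to the triangle's area, which is why the familiar planar rule fails on the sphere.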

The researchers thus suggest that all human beings have an ability to understand Euclidean geometry, whatever their culture or level of education. People who have received no, or little, training could thus grasp notions of geometry such as points and parallel lines. These intuitions could be innate (they may then emerge from a certain age, as it happens 6-7 years). If, on the other hand, these intuitions derive from learning (between birth and 6-7 years of age), they must be based on experiences common to all human beings.

(1) The two CNRS researchers involved in this study are Véronique Izard of the Laboratoire Psychologie de la Perception (CNRS / Université Paris Descartes) and Pierre Pica of the Unité "Structures Formelles du Langage" (CNRS / Université Paris 8). They conducted it in collaboration with Stanislas Dehaene, professor at the Collège de France and director of the Unité de Neuroimagerie Cognitive at NeuroSpin (Inserm / CEA / Université Paris-Sud 11), and Elizabeth Spelke, professor at Harvard University.

Journal Reference: Véronique Izard, Pierre Pica, Elizabeth S. Spelke, and Stanislas Dehaene. Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 23 May 2011.