Tag archive: Evolution

Bonobo genius makes stone tools like early humans did (New Scientist)

13:09 21 August 2012 by Hannah Krakauer

Kanzi the bonobo continues to impress. Not content with learning sign language or making up “words” for things like banana or juice, he now seems capable of making stone tools on a par with the efforts of early humans.

Even a human could manage this (Image: Elizabeth Rubert-Pugh (Great Ape Trust of Iowa/Bonobo Hope Sanctuary))

Eviatar Nevo of the University of Haifa in Israel and his colleagues sealed food inside a log to mimic marrow locked inside long bones, and watched Kanzi, a 30-year-old male bonobo chimp, try to extract it. While a companion bonobo attempted the problem a handful of times, and succeeded only by smashing the log on the ground, Kanzi took a longer and arguably more sophisticated approach.

Both had been taught to knap flint flakes in the 1990s, holding a stone core in one hand and using another as a hammer. Kanzi used the tools he created to come at the log in a variety of ways: inserting sticks into seams in the log, throwing projectiles at it, and employing stone flints as choppers, drills, and scrapers. In the end, he got food out of 24 logs, while his companion managed just two.

Perhaps most remarkable about the tools Kanzi created is their resemblance to early hominid tools. Both bonobos made and used tools to obtain food – either by extracting it from logs or by digging it out of the ground. But only Kanzi’s met the criteria for both tool groups made by early Homo: wedges and choppers, and scrapers and drills.

Do Kanzi’s skills translate to all bonobos? It’s hard to say. The abilities of animals like Alex the parrot, who could purportedly count to six, and Betty the crow, who crafted a hook out of wire, sometimes prompt claims about the intelligence of an entire species. But since these animals are raised in unusual environments where they frequently interact with humans, their cases may be too singular to extrapolate their talents to their brethren.

The findings will fuel the ongoing debate over whether stone tools mark the beginning of modern human culture, or predate our Homo genus. They appear to suggest the latter – though critics will point out that Kanzi and his companion were taught how to make the tools. Whether the behaviour could arise in nature is unclear.

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1212855109

Populations Survive Despite Many Deleterious Mutations: Evolutionary Model of Muller’s Ratchet Explored (Science Daily)

ScienceDaily (Aug. 10, 2012) — From protozoans to mammals, evolution has created more and more complex structures and better-adapted organisms. This is all the more astonishing as most genetic mutations are deleterious. Especially in small asexual populations that do not recombine their genes, unfavourable mutations can accumulate. This process is known as Muller’s ratchet in evolutionary biology. The ratchet, proposed by the American geneticist Hermann Joseph Muller, predicts that the genome deteriorates irreversibly, leaving populations on a one-way street to extinction.

Equilibrium of mutation and selection processes: A population can be divided into groups of individuals that carry different numbers of deleterious mutations. Groups with few mutations are amplified by selection but lose members to other groups by mutation. Groups with many mutations don’t reproduce as much, but gain members by mutation. (Credit: © Richard Neher/MPI for Developmental Biology)

In collaboration with colleagues from the US, Richard Neher from the Max Planck Institute for Developmental Biology has shown mathematically how Muller’s ratchet operates and he has investigated why populations are not inevitably doomed to extinction despite the continuous influx of deleterious mutations.

The great majority of mutations are deleterious. “Due to selection individuals with more favourable genes reproduce more successfully and deleterious mutations disappear again,” explains the population geneticist Richard Neher, leader of an independent Max Planck research group at the Max Planck Institute for Developmental Biology in Tübingen, Germany. However, in small populations such as an asexually reproducing virus early during infection, the situation is not so clear-cut. “It can then happen by chance, by stochastic processes alone, that deleterious mutations in the viruses accumulate and the mutation-free group of individuals goes extinct,” says Richard Neher. This is known as a click of Muller’s ratchet, which is irreversible — at least in Muller’s model.

Muller published his model on the evolutionary significance of deleterious mutations in 1964. Yet until now a quantitative understanding of the ratchet’s dynamics had been lacking. Richard Neher and Boris Shraiman from the University of California in Santa Barbara have now published a new theoretical study on Muller’s ratchet. They chose a comparatively simple model in which only deleterious mutations occur, all with the same effect on fitness. The scientists assumed selection against those mutations and analysed how fluctuations in the group of the fittest individuals affected the less fit ones and the whole population. Richard Neher and Boris Shraiman discovered that the key to understanding Muller’s ratchet lies in a slow response: if the number of the fittest individuals is reduced, the mean fitness decreases only after a delay. “This delayed feedback accelerates Muller’s ratchet,” Richard Neher comments on the results. It clicks more and more frequently.

“Our results are valid for a broad range of conditions and parameter values — for a population of viruses as well as a population of tigers.” However, he does not expect to find the model’s conditions one-to-one in nature. “Models are made to understand the essential aspects, to identify the critical processes,” he explains.

In a second study, Richard Neher, Boris Shraiman and several other US scientists from the University of California in Santa Barbara and Harvard University in Cambridge investigated how a small asexual population could escape Muller’s ratchet. “Such a population can only stay in a steady state for a long time when beneficial mutations continually compensate for the negative ones that accumulate via Muller’s ratchet,” says Richard Neher. For their model the scientists assumed a steady environment and suggest that there can be a mutation-selection balance in every population. They calculated the rate of favourable mutations required to maintain the balance. The result was surprising: even under unfavourable conditions, a comparatively small proportion of beneficial mutations, in the range of a few percent, is sufficient to sustain a population.
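
The mechanism is straightforward to sketch in code. The toy Wright-Fisher simulation below is not the authors’ model; the population size, mutation rates and the single selection coefficient are illustrative assumptions. It counts how often the least-loaded class of genomes is lost (a “click” of the ratchet), and setting a small beneficial mutation rate shows how rare favourable mutations can offset the decay.

```python
# Minimal Wright-Fisher sketch of Muller's ratchet (illustrative parameters only,
# not the model analysed in the papers described above).
import numpy as np

rng = np.random.default_rng(0)

N = 1000          # population size (asexual)
U_DEL = 0.1       # deleterious mutations per genome per generation (assumed)
U_BEN = 0.0       # beneficial mutations per genome per generation (try 0.005)
S = 0.02          # fitness effect per mutation, assumed equal for all mutations
GENERATIONS = 2000

k = np.zeros(N, dtype=int)  # k[i] = deleterious mutations carried by individual i
best_class = 0              # smallest load seen so far in the population
clicks = 0                  # times the least-loaded class has gone extinct

for t in range(GENERATIONS):
    # Selection: reproduction probability proportional to (1 - S)^k
    w = (1.0 - S) ** k
    parents = rng.choice(N, size=N, p=w / w.sum())
    k = k[parents]
    # Mutation: Poisson numbers of new deleterious and beneficial mutations
    k = np.clip(k + rng.poisson(U_DEL, N) - rng.poisson(U_BEN, N), 0, None)
    if k.min() > best_class:       # a "click" of the ratchet
        clicks += k.min() - best_class
        best_class = k.min()

print(f"ratchet clicks after {GENERATIONS} generations: {clicks}")
print(f"mean mutation load: {k.mean():.2f}")
```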

These findings could explain the long-term maintenance of mitochondria, the so-called power plants of the cell that have their own genome and divide asexually. By and large, evolution is driven by random events or, as Richard Neher puts it: “Evolutionary dynamics are very stochastic.”

Why Do Organisms Build Tissues They Seemingly Never Use? (Science Daily)

ScienceDaily (Aug. 10, 2012) — Why, after millions of years of evolution, do organisms build structures that seemingly serve no purpose?

A study conducted at Michigan State University and published in the current issue of The American Naturalist investigates the evolutionary reasons why organisms go through developmental stages that appear unnecessary.

“Many animals build tissues and structures they don’t appear to use, and then they disappear,” said Jeff Clune, lead author and former doctoral student at MSU’s BEACON Center of Evolution in Action. “It’s comparable to building a roller coaster, razing it and building a skyscraper on the same ground. Why not just skip ahead to building the skyscraper?”

Why humans and other organisms retain seemingly unnecessary stages in their development has been debated among biologists since 1866. This study explains that organisms jump through these extra hoops to avoid disrupting a developmental process that works. Clune’s team called this concept the “developmental disruption force.” But Clune says it also could be described as “if the shoe fits, don’t change a thing.”

“In a developing embryo, each new structure is built in a delicate environment that consists of everything that has already developed,” said Clune, who is now a postdoctoral fellow at Cornell University. “Mutations that alter that environment, such as by eliminating a structure, can thus disrupt later stages of development. Even if a structure is not actually used, it may set the stage for other functional tissues to grow properly.”

Going back to the roller coaster metaphor, even though the roller coaster gets torn down, the organism needs the parts from that teardown to build the skyscraper, he added.

“An engineer would simply skip the roller coaster step, but evolution is more of a tinkerer and less of an engineer,” Clune said. “It uses whatever parts that are lying around, even if the process that generates those parts is inefficient.”

An interesting consequence is that newly evolved traits tend to get added at the end of development, because there is less risk of disrupting anything important. That, in turn, means that there is a similarity between the order things evolve and the order they develop.

A new technology called computational evolution allowed the team to conduct experiments that would be impossible to reproduce in nature.

Rather than observe embryos grow, the team of computer scientists and biologists used BEACON’s Avida software to perform experiments with evolution inside a computer. The Avidians — self-replicating computer programs — mutate, compete for resources and evolve, mimicking natural selection in real-life organisms. Using this software, Clune’s team observed as Avidians evolved to perform logic tasks. They recorded the order that those tasks evolved in a variety of lineages, and then looked at the order those tasks developed in the final, evolved organism.
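
The flavour of such an experiment can be conveyed with a toy digital-evolution loop. The sketch below is not Avida; the bit-string genomes, the stand-in “logic tasks” and every parameter are invented for illustration. It simply records the generation at which each task first appears in the population, mimicking the way the team logged the order in which tasks evolved.

```python
# Toy digital evolution: bit-string "organisms" evolve to perform a ladder of
# increasingly demanding tasks, and we log the order in which the tasks appear.
# This is an illustrative stand-in, not the Avida system used in the study.
import random

random.seed(1)

TASKS = ["NOT", "AND", "OR", "XOR"]   # hypothetical tasks, easiest first
GENOME_LEN = 32
POP_SIZE = 200
GENERATIONS = 1000
MUT_RATE = 0.01                       # per-bit mutation probability

def performs(genome, task):
    # Stand-in check: a task is "performed" once enough genome bits are set.
    threshold = {"NOT": 8, "AND": 14, "OR": 20, "XOR": 26}[task]
    return sum(genome) >= threshold

def fitness(genome):
    # Each task performed multiplies reproductive success.
    return 2 ** sum(performs(genome, t) for t in TASKS)

population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
first_seen = {}  # task -> generation at which it first evolved

for gen in range(GENERATIONS):
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [[bit ^ (random.random() < MUT_RATE) for bit in g] for g in parents]
    for task in TASKS:
        if task not in first_seen and any(performs(g, task) for g in population):
            first_seen[task] = gen

print("order in which tasks evolved:", sorted(first_seen, key=first_seen.get))
```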

Their results helped settle an age-old debate, showing that developmental order does resemble evolutionary order, at least in this computationally evolving system. Because thousands of generations can happen overnight in a computer, the team was able to repeat the experiment many times and document that this similarity occurs consistently.

Additional MSU researchers contributing to the study included BEACON colleagues Richard Lenski, Robert Pennock and Charles Ofria. The research was funded by the National Science Foundation.

Computer program mimics human evolution (Fapesp)

Software developed at USP in São Carlos creates and selects programs that generate Decision Trees, tools capable of making predictions. The research won awards in the United States at the largest event in evolutionary computation (Wikimedia)

August 16, 2012

By Karina Toledo

Agência FAPESP – Decision Trees are computational tools that give machines the ability to make predictions based on the analysis of historical data. The technique can, for example, support medical diagnosis or the risk analysis of financial investments.

But to obtain the best prediction, you need the best Decision Tree-generating program. To reach that goal, researchers at the Instituto de Ciências Matemáticas e de Computação (ICMC) of the Universidade de São Paulo (USP), in São Carlos, drew inspiration from Charles Darwin’s theory of evolution.

“We developed an evolutionary algorithm, that is, one that mimics the process of human evolution to generate solutions,” said Rodrigo Coelho Barros, a doctoral student at the ICMC’s Laboratório de Computação Bioinspirada (BioCom) and a FAPESP fellow.

Evolutionary computation, Barros explained, is one of several bio-inspired techniques, that is, techniques that look to nature for solutions to computational problems. “It is remarkable how nature finds solutions to extremely complicated problems. There is no doubt that we need to learn from it,” Barros said.

According to Barros, the software developed during his doctorate can automatically create Decision Tree-generating programs. To do so, it performs random crossovers between the code of existing programs, generating “offspring”.

“These ‘offspring’ may eventually undergo mutations and evolve. After some time, the evolved Decision Tree-generating programs are expected to get better and better, and our algorithm selects the best of them all,” Barros said.

But whereas the process of natural selection in the human species takes hundreds or even thousands of years, in computing it lasts only a few hours, depending on the problem to be solved. “We set one hundred generations as the limit of the evolutionary process,” Barros said.
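
In outline, the approach follows a standard evolutionary loop: build a population of candidates, score them, and produce the next generation by crossover and mutation. The sketch below is a heavily simplified stand-in for what the group actually does; rather than evolving full decision-tree induction algorithms, it only evolves decision-tree configurations, scored by cross-validated accuracy on a toy dataset. The dataset, the operators and all parameter ranges are illustrative assumptions.

```python
# Toy evolutionary search over decision-tree configurations (an illustration of
# the general idea only, not the hyper-heuristic system described in the article).
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

random.seed(42)
X, y = load_iris(return_X_y=True)

def random_individual():
    return {"criterion": random.choice(["gini", "entropy"]),
            "max_depth": random.randint(1, 10),
            "min_samples_split": random.randint(2, 20)}

def fitness(ind):
    # Cross-validated accuracy of a tree built with this configuration.
    clf = DecisionTreeClassifier(**ind, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

def crossover(a, b):
    # Each "gene" (parameter) is inherited from one of the two parents.
    return {key: random.choice([a[key], b[key]]) for key in a}

def mutate(ind):
    if random.random() < 0.2:
        ind["max_depth"] = random.randint(1, 10)
    return ind

population = [random_individual() for _ in range(20)]
for generation in range(100):  # the article mentions a limit of 100 generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]      # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("best configuration:", best, "accuracy:", round(fitness(best), 3))
```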

Artificial intelligence

In computer science, heuristics is the name given to a system’s capacity to innovate and develop techniques for reaching a given goal.

The software developed by Barros belongs to the field of hyper-heuristics, a recent topic in evolutionary computation whose aim is the automatic generation of heuristics tailored to a given application or set of applications.

“It is a preliminary step toward the great goal of artificial intelligence: to create machines capable of developing solutions to problems without being explicitly programmed to do so,” Barros explained.

The work gave rise to the paper “A Hyper-Heuristic Evolutionary Algorithm for Automatically Designing Decision-Tree Algorithms”, which won awards in three categories at the Genetic and Evolutionary Computation Conference (GECCO), the world’s largest event in evolutionary computation, held in July in Philadelphia, United States.

Besides Barros, the authors of the paper are professors André Carlos Ponce de Leon Ferreira de Carvalho, the research advisor at the ICMC; Márcio Porto Basgalupp, of the Universidade Federal de São Paulo (Unifesp); and Alex Freitas, of the University of Kent, in the United Kingdom, who took on the co-supervision.

The authors were invited to submit the paper to the Evolutionary Computation Journal, published by the Massachusetts Institute of Technology (MIT). “The work will still go through review but, since it was submitted by invitation, it has a good chance of being accepted,” Barros said.

The research, which is expected to be completed only in 2013, also gave rise to a paper published by invitation in the Journal of the Brazilian Computer Society, after being chosen as the best paper at the 2011 Encontro Nacional de Inteligência Artificial.

Another paper, presented at the 11th International Conference on Intelligent Systems Design and Applications, held in Spain in 2011, earned an invitation for publication in the journal Neurocomputing.

Organisms Cope With Environmental Uncertainty by Guessing the Future (Science Daily)

ScienceDaily (Aug. 16, 2012) — In uncertain environments, organisms not only react to signals, but also use molecular processes to make guesses about the future, according to a study by Markus Arnoldini et al. from ETH Zurich and Eawag, the Swiss Federal Institute of Aquatic Science and Technology. The authors report in PLoS Computational Biology that if environmental signals are unreliable, organisms are expected to evolve the ability to take random decisions about adapting to cope with adverse situations.

Most organisms live in ever-changing environments, and are at times exposed to adverse conditions that are not preceded by any signal. Examples for such conditions include exposure to chemicals or UV light, sudden weather changes or infections by pathogens. Organisms can adapt to withstand the harmful effects of these stresses. Previous experimental work with microorganisms has reported variability in stress responses between genetically identical individuals. The results of the present study suggest that this variation emerges because individual organisms take random decisions, and such variation is beneficial because it helps organisms to reduce the metabolic costs of protection without compromising the overall benefits.
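
The underlying logic can be sketched with a classic bet-hedging calculation, which is not the authors’ model: genetically identical individuals switch a costly protection on at random with probability p, and the long-run (geometric mean) growth rate is compared across values of p. All of the numbers below are illustrative assumptions.

```python
# Minimal bet-hedging sketch (illustrative numbers, not the published model):
# compare long-term growth when a random fraction p of individuals pays the
# cost of stress protection each generation.
import numpy as np

P_STRESS = 0.05              # probability that a generation is stressful (assumed)
COST = 0.3                   # growth cost of expressing the protection (assumed)
SURVIVAL_UNPROTECTED = 0.01  # survival of unprotected individuals under stress (assumed)

def long_term_growth(p_protect):
    """Expected log growth rate of a population whose members switch
    protection on with probability p_protect each generation."""
    def per_generation(stressed):
        protected = p_protect * (1 - COST)
        unprotected = (1 - p_protect) * (SURVIVAL_UNPROTECTED if stressed else 1.0)
        return np.log(protected + unprotected)
    return (1 - P_STRESS) * per_generation(False) + P_STRESS * per_generation(True)

for p in [0.0, 0.1, 0.2, 0.5, 1.0]:
    print(f"p_protect = {p:.1f} -> long-term growth rate {long_term_growth(p):+.3f}")
```

With these made-up numbers, intermediate switching probabilities outgrow both the “never protect” and “always protect” strategies, which is the intuition behind randomized stress responses.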

The theoretical results of this study can help to understand why genetically identical organisms often express different traits, an observation that is not explained by the conventional notion of nature and nurture. Future experiments will reveal whether the predictions made by the mathematical model are met in natural systems.

Modern culture emerged in Africa 20,000 years earlier than thought (L.A. Times)

By Thomas H. Maugh II

July 30, 2012, 1:54 p.m.

Border Cave artifacts: Objects found in the archaeological site called Border Cave include a) a wooden digging stick; b) a wooden poison applicator; c) a bone arrow point decorated with a spiral incision filled with red pigment; d) a bone object with four sets of notches; e) a lump of beeswax; and f) ostrich eggshell beads and marine shell beads used as personal ornaments. (Francesco d’Errico and Lucinda Backwell / July 30, 2012)

Modern culture emerged in southern Africa at least 44,000 years ago, more than 20,000 years earlier than anthropologists had previously believed, researchers reported Monday.

That blossoming of technology and art occurred at roughly the same time that modern humans were migrating from Africa to Europe, where they soon displaced Neanderthals. Many of the characteristics of the ancient culture identified by anthropologists are still present in hunter-gatherer cultures of Africa today, such as the San culture of southern Africa, the researchers said.

The new evidence was provided by an international team of researchers excavating at an archaeological site called Border Cave in the foothills of the Lebombo Mountains on the border of KwaZulu-Natal in South Africa and Swaziland. The cave shows evidence of occupation by human ancestors going back more than 200,000 years, but the team reported in two papers in the Proceedings of the National Academy of Sciences that they were able to accurately date their discoveries to 42,000 to 44,000 years ago, a period known as the Later Stone Age or the Upper Paleolithic Period in Europe.

Among the organic — and thus datable — artifacts the team found in the cave were ostrich eggshell beads, thin bone arrowhead points, wooden digging sticks, a gummy substance called pitch that was used to attach bone and stone blades to wooden shafts, a lump of beeswax likely used for the same purpose, worked pig tusks that were probably used for planing wood, and notched bones used for counting.

“They adorned themselves with ostrich egg and marine shell beads, and notched bones for notational purposes,” said paleoanthropologist Lucinda Backwell of the University of the Witwatersrand in South Africa, a member of the team. “They fashioned fine bone points for use as awls and poisoned arrowheads. One point is decorated with a spiral groove filled with red ochre, which closely parallels similar marks that San make to identify their arrowheads when hunting.”

The very thin bone points are “very good evidence” for the use of bows and arrows, said co-author Paola Villa, a curator at the University of Colorado Museum of Natural History. Some of the bone points were apparently coated with ricinoleic acid, a poison made from the castor bean. “Such bone points could have penetrated thick hides, but the lack of ‘knock-down’ power means the use of poison probably was a requirement for successful kills,” she said.

The discovery also represents the first time pitch-making has been documented in South Africa, Villa said. The process requires burning peeled bark in the absence of air. The Stone Age residents probably dug holes in the ground, inserted the bark, lit it on fire, and covered the holes with stones, she said.

University of Tennessee anthropologists find American heads are getting larger (University of Tennessee)

University of Tennessee at Knoxville

White Americans’ heads are getting bigger. That’s according to research by forensic anthropologists at the University of Tennessee, Knoxville.

Lee Jantz, coordinator of UT’s Forensic Anthropology Center (FAC); Richard Jantz, professor emeritus and former director of the FAC; and Joanne Devlin, adjunct assistant professor, examined 1,500 skulls dating from the mid-1800s through the mid-1980s. They noticed that U.S. skulls have become larger, taller and narrower as seen from the front, and that faces have become significantly narrower and higher.

The researchers cannot pinpoint why American head shapes are changing, or whether the change is driven primarily by evolution or by lifestyle.

“The varieties of changes that have swept American life make determining an exact cause an endlessly complicated proposition,” said Lee Jantz. “It likely results from modified growth patterns because of better nutrition, lower infant and maternal mortality, less physical work, and a breakdown of former ethnic barriers to marriage. Which of these is paramount we do not know.”

The researchers found that the average height from the base to the top of the skull in men has increased by eight millimeters (0.3 inches). The skull size has grown by 200 cubic centimeters, a space equivalent to a tennis ball. In women, the corresponding increases are seven millimeters and 180 cubic centimeters.

Skull height has increased 6.8 percent since the late 1800s, while body height has increased 5.6 percent and femur length has only increased about 2 percent. Also, skull height has continued to change, whereas the increase in body height has recently slowed or stopped.

The scientists also noted changes that illustrate our population is maturing sooner. This is reflected in the earlier closing of a separation in the bone structure of the skull called the spheno-occipital synchondrosis, which in the past was thought to fuse at about age twenty. Richard Jantz and Natalie Shirley, an adjunct assistant professor in the FAC, have found the bone is fusing much earlier — 14 for girls and 16 for boys.

America’s obesity epidemic is the latest development that could affect skeletal shape but its precise effects are unclear.

“This might affect skull shape by changing the hormonal environment, which in turn could affect timing of growth and maturation,” said Richard Jantz. “We know it has an effect on the long bones by increasing muscle attachment areas, increasing arthritis at certain joints, especially the knee, and increasing the weight bearing capacity.”

The research only assessed Americans of European ancestry because they provided the largest sample sizes to work with. Richard Jantz said changes in skeletal structure are taking place in many parts of the world, but tend to be less studied. He said research has uncovered shifts in skull shape in Europe though it is not as dramatic as seen in the U.S.

The findings were presented on April 14 in Portland, Ore., at the annual meeting of the American Association of Physical Anthropologists.

Walter Neves: the father of Luzia (Fapesp)

The USP archaeologist and anthropologist recounts how he formulated a theory about the arrival of humans in the Americas

MARCOS PIVETTA and RICARDO ZORZETTO | Issue 195 – May 2012

© LEO RAMOS

He is the father of Luzia, an 11,000-year-old human skull, the oldest yet found in the Americas, which belonged to an extinct people of hunter-gatherers from the region of Lagoa Santa, on the outskirts of Belo Horizonte. Archaeologist and anthropologist Walter Neves, coordinator of the Laboratory for Human Evolutionary Studies at the Institute of Biosciences of the Universidade de São Paulo (USP), was not the one who recovered that ancient skeleton from a prehistoric site, but it was thanks to his studies that Luzia, a name he gave her, became the symbol of his controversial theory of the peopling of the Americas: the two biological components model. Formulated more than two decades ago, the theory holds that our continent was colonized by two waves of Homo sapiens coming from Asia. The first migratory wave would have occurred some 14,000 years ago and was made up of individuals who looked like Luzia, with a non-Mongoloid morphology similar to that of present-day Australians and Africans, but who left no descendants. The second wave would have entered here some 12,000 years ago, and its members had the physical type characteristic of Asians, from whom modern Indians derive.

In this interview Neves, a scientist as combative as he is popular, and one who enjoys a good academic fight, talks about Luzia and about his career.

How did your interest in science come about?
I come from a poor family in Três Pontas, Minas Gerais. For some reason, at age 8 I already knew I wanted to be a scientist. At 12, that I wanted to work on human evolution. I have no explanation for it.

When did you come to São Paulo?
It was in 1970, after the World Cup. We moved to São Bernardo, where I lived most of my life.

What was your life like?
Everyone at home had to work. The family was small: my father, my mother, me and my brother, three years older than me. When we arrived in São Paulo, my father was a bricklayer and my mother sold Yakult on the street. I was 12, going on 13. A year after arriving here I started working. I sold pasta once a week at a market stall in my neighborhood. My first steady job was as a general helper at Malas Primicia, making locks for suitcases. And I hated it. It was dull and required no skill. It didn’t last long. A month later I was hired at the Rolls-Royce aircraft turbine plant in São Bernardo. I benefited greatly from that environment, which was refined, full of rules and of respect for hierarchy. I think I developed my excellent administrative skills in the years I spent at Rolls-Royce. I received first-rate bureaucratic training. Every day, when we arrived at the factory, there was a portrait of the Queen of England and we had to bow to it. I thought it was wonderful. For someone who had lived in the backwoods, it was an upgrade of glamour in life. I was 13, going on 14.

What did you do there?
I started as an office boy and, by the time I left, I was assistant to the technical directorate. Rolls-Royce in Brazil received the turbines for repairs and general overhaul. My boss was the director of that area and I helped him with everything. I worked eight hours a day and studied at night. I went to public school and entered USP, in biology, in 1976. Public secondary education was of excellent quality back then.

Why did you choose biology?
I always thought the way to study human evolution was to study history. On a visit to USP during secondary school, I came across the Institute of Prehistory, which no longer exists. The institute had been founded by Paulo Duarte and operated in the Zoology building, where Ecology is today. On that visit I went to the History building looking for information about the course and was told that, if I studied history, I would learn nothing about human evolution. Walking down Rua do Matão, I saw a small sign reading Institute of Prehistory, where I met the archaeologist Dorath Uchôa. There I saw the replicas of fossil hominids and the prehistoric skeletons excavated from the sambaquis (shell mounds) along the Brazilian coast. So I said to Dorath: “I want to do archaeology and study skeletons.” And she said: “Don’t do history. Do either biology or medicine.” Medicine was out of the question because it was full time. I opted for biology. It was a good deal. In 1978, still an undergraduate, I was hired by the Institute of Prehistory as a technician.

What year of your degree were you in?
Going from the second into the third, I think. When I completed my teaching degree in 1980, I was hired as a researcher and lecturer. There was no open competition. It was by appointment.

Was it an independent institute?
Yes. Later it was annexed to the Museum of Archaeology and Ethnology, the MAE. At the time archaeology was done in three places at USP: at the Institute of Prehistory, the oldest; at the MAE; and in the archaeology section of the Museu Paulista, in Ipiranga. In the late 1980s the three were merged into one. I worked at the Institute of Prehistory as a researcher from 1980 to 1985. In 1982 I went to Stanford University for a “sandwich” (split-site) doctorate. I was self-taught, because there was no specialist in this field in Brazil. At the Institute of Prehistory the material was there, the library was there, but there was no one to supervise me.

Didn’t they work on human evolution?
The institute was very small; it had two researchers, who considered themselves its owners. When I was hired, another archaeologist, Solange Caldarelli, was hired as well. We made a very productive pair. We worked in the interior of São Paulo with hunter-gatherer groups, in the chronological range of 3,000 to 5,000 years ago. It was with her that I became an archaeologist. My transformation from biologist into physical anthropologist was self-taught. The growth of our research group began to expose the mediocrity of the work being done at the Institute of Prehistory and in Brazil. That led to a war between us and the establishment. In 1985 we were expelled from the university.

What do you mean?
Expelled. Summarily dismissed.

What reasons did they give?
None. We had no tenure. Most of the staff were hired on precarious contracts, and we were kicked out of the Institute of Prehistory by the older crowd.

What is the difference between a physical anthropologist and an archaeologist? What do you consider yourself today?
I consider myself an anthropologist and an archaeologist. In fact I consider myself part of a category that exists in the United States called evolutionary anthropologist. Even among evolutionary anthropologists, few have a trajectory spanning physical anthropology, archaeology and sociocultural anthropology. In that sense I have a unique career, one my colleagues abroad didn’t understand. I did physical anthropology and biological anthropology and had archaeology projects. When I went to the Amazon, I worked in ecological anthropology. I am one of the only people in the world who has passed through every possible kind of anthropology. If on the one hand I am not good at any of them, on the other I have a much more multifaceted understanding of the human being than my colleagues.

Does the archaeologist do the fieldwork while the physical anthropologist waits for the material?
The physical anthropologist can go into the field, but doesn’t. He waits for the archaeologists to hand over the material for him to study. I rebelled against that in Brazil. I said: I want to be an archaeologist too. In the United States, at the end of the 1980s, a field called bioarchaeology took shape, made up of physical anthropologists who could no longer stand depending on archaeologists. Here, independently, I rebelled against the same situation. And the dismissal from the institute in 1985 was traumatic, because we had seven years of field research and we lost everything. From one moment to the next my career was reset to zero. Fortunately, by then I had defended my doctorate.

Here?
Here in Biology, but on paleogenetics. I went to Stanford on a six-month “sandwich” fellowship from CNPq. To support myself at Stanford and Berkeley, I had my salary from here, which at the time came to US$ 250, and [Luigi Luca] Cavalli-Sforza, with whom I worked, paid me another US$ 250 in the laboratory.

He is a great researcher, but in population genetics.
People ask me why I didn’t go to work with a physical anthropologist, since I was self-taught on the osteological side. I didn’t because what Cavalli-Sforza does is fascinating. He brings together several areas of knowledge. At the time I was enrolled in a master’s program in Biology; my advisor here was [Oswaldo] Frota-Pessoa.

Who is also a geneticist.
A geneticist, but with a very broad view of the human being. If Frota had not existed, I would not have managed to do the master’s. He understood my situation and was very generous. When I was finishing the work at Stanford, Cavalli-Sforza found out I was doing a master’s, not a doctorate. He looked at me and said: “How can you be doing a master’s if you already have several publications, coordinate two archaeology projects and have seven students? It makes no sense. I am going to write to Frota-Pessoa suggesting that you go straight to the doctorate.” Today that is common. It was what saved me. I defended my doctorate in December 1984 and, months later, I was dismissed. Solange Caldarelli left so disgusted with academia that she never wanted anything to do with a university career again. I wanted to return to academia. Then three possibilities appeared. One was a postdoc at Harvard; another, a postdoc at Pennsylvania State University; and there was a third, unexpected thing. When I was dismissed I told Frota I was going abroad. I knew my position would always be one of conflict with Brazilian archaeology. At that time there was CNPq’s integrated genetics program, important for the development of genetics in Brazil, and Frota coordinated some itinerant courses. Frota said: “Just when we were going to have a specialist in human evolution, you are leaving. I understand, but before you go abroad I am going to invite you to give an itinerant course around Brazil on human evolution.” I gave the course at the Universidade Federal da Bahia, at the Federal do Rio Grande do Norte, at the Museu Goeldi and at the Universidade de Brasília. I was very impressed with the Goeldi. On the last day of the course at the Goeldi, the director wanted to meet me. I told him about my trajectory and said I was heading to the United States. He asked me: “Is there nothing that could talk you out of that idea?” I said: “Look, Guilherme” (his name is Guilherme de La Penha), “the only thing that would make me stay in Brazil would be the opportunity to create my own study center, one that could be interdisciplinary and tied neither to anthropology nor to archaeology.” And he invited me to create there what at the time was called the nucleus of human biology and ecology. Something also happened on a personal level that led me to choose Belém.

That was in 1985?
Still in 1985. Shortly before I gave that course around Brazil, I fell deeply in love for the first time. I fell in love with Wagner, the best thing that ever happened in my life. If I went to the United States, I would hardly be able to take him with me. In Belém it would be easier to find him a job and keep the relationship going. That is why I accepted the move to the Goeldi. Except that I had to step away from skeletons. In the Amazon, the last thing in the world you can do is work with skeletons, because they do not preserve.

What did you work on?
I began to devote myself to ecological anthropology.

And what is ecological anthropology?
It studies the adaptations of traditional societies to their environment. Until then it was a line the Americans worked on a great deal in the Amazon. Since our anthropology here is eminently structuralist, and breaks out in hives at anything biological, that line never took hold in Brazil. So I thought: “Great, I am going to pick another fight. I am going to train a first generation in ecological anthropology.” Much of the research in ecological anthropology in the Amazon was done with indigenous peoples. So I decided to study the traditional caboclo populations.

You published a book, didn’t you?
We published the first major synthesis on caboclo adaptation in the Amazon, which came out here and abroad. I sent students who had worked with me in the Amazon to do their doctorates abroad.

Which conclusions of that synthesis would you highlight?
Studying those traditional Amazonian populations, it became clear that everyone who goes there, the NGOs above all, thinks they have a nutrition problem. They do indeed show a growth deficit relative to international standards. But our work showed that they actually have no deficiency in carbohydrate or protein intake. The problem is parasitic infection.

How did you return to USP?
In 1988, shortly after moving to the Amazon, Wagner was diagnosed with AIDS, and we made a deal. When he reached the terminal stage, we would return to São Paulo. I came to do a postdoc in anthropology. When Wagner died, in 1992, I no longer wanted to go back to the Amazon, and I sat two open competitions for positions.

You were doing a postdoc in anthropology at USP?
Yes, at the Faculty of Philosophy, Letters and Human Sciences. Then I sat the two competitions. One was at the Federal University of Santa Catarina, in ecological anthropology, but I wanted to stay in São Paulo. Since in 1989 I had already made the first discovery behind what became my model of the occupation of the Americas, I thought: “I have to go somewhere I can devote myself to this and concentrate on human skeletons again.” Then a position opened here in the department, in the area of evolution. I passed in both places, but chose here. I knew I could create a center for human evolutionary studies that combined archaeology, physical anthropology and ecological anthropology.

How did you come to create an alternative model for the colonization of the Americas?
One day Guilherme de La Penha, the director of the Goeldi, called me in and said: “Look, Walter, a week from now I have to go to a congress in Stockholm on salvage archaeology. I need you to stand in for me.” I said: “What, at such short notice?” Then I remembered that Copenhagen is on the way to Stockholm. I negotiated with him permission to spend about five days in Copenhagen and get to know the Lund collection. I made the trip and not only saw but measured the Lagoa Santa skulls in the Lund collection. When I came back, I spoke with a researcher from Argentina who was spending time at the Goeldi, Hector Pucciarelli, my greatest research partner and the most important bioanthropologist in South America. I proposed that we do a small study with that material. At the time, Niède Guidon’s work was coming out with conclusions that seemed crazy to me, such as claiming that humans had been in the Americas for 30,000 years. My idea in the study of the Lund skulls was to show that the first Americans were no different from present-day Indians. Well, imagine the look on our faces when we saw that the Lagoa Santa skulls resembled Australians and Africans more than Asians. We panicked. We saw that we needed a model to explain it.

What did you do then?
Some classic authors from the 1940s and 1950s, such as the French anthropologist Paul Rivet, had already recognized a similarity between the Lagoa Santa material and that of Australia. But Rivet proposed a direct migration from Australia to South America to explain the resemblance. Later, with advances in the genetic study of indigenous peoples, mainly through the work of (Francisco) Salzano, it became clear that all the genetic markers here pointed to Asia. There was no similarity with the Australians. So we thought of building a model that explored this morphological duality. We did not want to fall into disgrace like Rivet, and we began to study the occupation of Asia. We discovered that there, at the end of the Pleistocene, there was also a morphological duality. There were pre-Mongoloids and Mongoloids. Our Lagoa Santa populations resembled the pre-Mongoloids. Present-day Indians resemble the Mongoloids. That is where the idea came from that the Americas were occupied by two distinct waves: one with a generalized morphology, similar to Africans and Australians, and another similar to Asians. Our first paper was published in the journal Ciência e Cultura in 1989. From 1991 on we began publishing abroad.

So you formulated this model before examining Luzia’s skull.
Ten years before. In Brazil several museums held collections from the Lagoa Santa region. But, since I was the enfant gâté of Brazilian archaeology, they would not give me access to the collections. That is why I went to study the Lund collection. I only gained access to the collections in Brazil from 1995 on, when some of the people who had put up barriers died. One of the skulls I was most curious to study was Luzia’s.

Did it already have that name?
No. I was the one who gave it. We knew it as the Lapa Vermelha IV skeleton, after the site where it was found. The site was excavated by the Franco-Brazilian mission coordinated by Madame Annette Emperaire. Luzia’s skeleton was found during the 1974 and 1975 field seasons. But Madame Emperaire died unexpectedly. Apart from one article she published, nothing else had been written about Lapa Vermelha.

In that article, did she say the skull was ancient?
Madame Emperaire thought there were two skeletons at Lapa Vermelha: a more recent one and an older one, dated to more than 12,000 years ago, before the Clovis culture, to which Luzia’s skull would belong. But André Prous (a French archaeologist who took part in the mission and is now a professor at UFMG) went back over her notes and realized that the skull belonged to the more recent skeleton, which lay about one meter higher. Luzia was not buried; she was laid on the floor of the rock shelter, in a crevice. Prous showed that the skull had rolled and fallen into the hole left by the rotted root of a gameleira fig tree. So the skull belonged to those remains, which were around 11,000 years old. Madame Emperaire died believing she had found pre-Clovis evidence in South America, the skull I nicknamed Luzia.

Where was Luzia’s skull when you examined it?
It was always at the Museu Nacional in Rio de Janeiro, but the information about it was not. The museum was the French mission’s partner institution.

Was Luzia’s people restricted to Lagoa Santa?
Lagoa Santa is an exceptional case. In the paper synthesizing my work, which I published in 2005 in PNAS, we used 81 skulls from the region. To give you an idea of how rare skeletons older than 7,000 years are on our continent, the United States and Canada together have five. We have what we call fossil power when it comes to the question of the origin of the first Americans. I also studied some material from other parts of Brazil, from Chile, from Mexico and from Florida and showed that the pre-Mongoloid morphology was not a peculiarity of Lagoa Santa. I believe the non-Mongoloids must have entered up north around 14,000 years ago and the Mongoloids around 10,000 or 12,000 years ago. In fact, Mongoloid morphology in Asia is very recent. I imagine there cannot be more than 2,000 or 3,000 years between one wave and the other. But that is pure guesswork.

Are 2,000 or 3,000 years enough to change the phenotype?
They were enough to change it in Asia. Today it is fairly clear that Mongoloid morphology is the result of populations that left Africa with a typically African morphology being exposed to the extreme cold of Siberia. My model is not fully accepted by some colleagues, including Argentines. They think the process of “Mongolization” occurred in Asia and in the Americas in parallel and independently. We will not settle the issue, for lack of samples. But in evolution we always opt for the law of parsimony. You choose the model that involves the smallest number of evolutionary steps to explain what you found. By the rule of parsimony, my model is better than the others, which depend on there having been two parallel and independent evolutionary events. But there is opposition to my model.

From whom?
From the geneticists. But I do not think you can bury my model with that kind of data. There is no reason for mitochondrial DNA, for example, to behave evolutionarily in the same way as cranial morphology. Where geneticists see a certain homogeneity from the point of view of DNA, I may find different phenotypes.

There is also the argument that there was a single migratory wave to the Americas, already made up of a population with both Mongoloid and non-Mongoloid types like Luzia.
That third possibility exists. But there would have to have been an astonishing rate of genetic drift to explain colonization that way. Why would one phenotype have disappeared and only the other remained? Of the alternatives to my model, I find that one the weakest.

But how do you explain the disappearance of Luzia’s morphology?
In fact, we have discovered in recent years that it did not disappear. When we proposed the model, we thought one population had replaced the other. But in 2003 or 2004 an Argentine colleague showed that a Mexican tribe that lived isolated from the other Indians, in territory that today belongs to California, kept the non-Mongoloid morphology until the sixteenth century, when the Europeans arrived by sea. We are also discovering that the Botocudo Indians, of Central Brazil, kept that morphology until the nineteenth century. When you study the ethnography of the Botocudo, you see that they remained hunter-gatherers until the end of the nineteenth century. They were surrounded by other indigenous groups, with whom they had hostile relations. That was the scenario. A little of the non-Mongoloid morphology survived until quite recently.

What do you think of the work of the archaeologist Niède Guidon at the Serra da Capivara National Park? In her view, humans arrived in Piauí 50,000, perhaps 100,000 years ago.
But where are the publications? She published a note in Nature in the 1990s and we are still waiting for the publications. Niède and I were mortal enemies for 20 years. A few years ago we smoked the peace pipe. I have been to Piauí a few times and we have even published papers on skeletons from there. In the park both skull morphologies were present. It is very interesting. I had good training in the analysis of flaked stone industries. Niède opened the entire lithic collection to me and to Astolfo Araujo (now at the MAE). I left 99.9% convinced that there was a human occupation there more than 30,000 years old. But I have that 0.1% of doubt, which is very significant.

What would it take to remove the doubt?
Niède should invite the best international specialists in lithic technology to examine the material and publish the results of the analyses. If she is right, we will have to throw out everything we know. My work will have been for nothing. But, thank goodness, not just mine: everyone’s.

When It Comes to Accepting Evolution, Gut Feelings Trump Facts (Science Daily)

ScienceDaily (Jan. 19, 2012) — For students to accept the theory of evolution, an intuitive “gut feeling” may be just as important as understanding the facts, according to a new study.

In an analysis of the beliefs of biology teachers, researchers found that a quick intuitive notion of how right an idea feels was a powerful driver of whether or not students accepted evolution — often trumping factors such as knowledge level or religion.

“The whole idea behind acceptance of evolution has been the assumption that if people understood it — if they really knew it — they would see the logic and accept it,” said David Haury, co-author of the new study and associate professor of education at Ohio State University.

“But among all the scientific studies on the matter, the most consistent finding was inconsistency. One study would find a strong relationship between knowledge level and acceptance, and others would find no relationship. Some would find a strong relationship between religious identity and acceptance, and others would find less of a relationship.”

“So our notion was, there is clearly some factor that we’re not looking at,” he continued. “We’re assuming that people accept something or don’t accept it on a completely rational basis. Or, they’re part of a belief community that as a group accept or don’t accept. But the findings just made those simple answers untenable.”

Haury and his colleagues tapped into cognitive science research showing that our brains don’t just process ideas logically — we also rely on how true something feels when judging an idea.

“Research in neuroscience has shown that when there’s a conflict between facts and feeling in the brain, feeling wins,” he says.

The researchers framed a study to determine whether intuitive reasoning could help explain why some people are more accepting of evolution than others. The study, published in the Journal of Research in Science Teaching, included 124 pre-service biology teachers at different stages in a standard teacher preparation program at two Korean universities.

First, the students answered a standard set of questions designed to measure their overall acceptance of evolution. These questions probed whether students generally believed in the main concepts and scientific findings that underpin the theory.

Then the students took a test on the specific details of evolutionary science. To show their level of factual knowledge, students answered multiple-choice and free-response questions about processes such as natural selection. To gauge their “gut” feelings about these ideas, students wrote down how certain they felt that their factually correct answers were actually true.

The researchers then analyzed statistical correlations to see whether knowledge level or feeling of certainty best predicted students’ overall acceptance of evolution. They also considered factors such as academic year and religion as potential predictors.

“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts.

When trying to explain the patterns of whether people believe in evolution or not, “the results show that if we consider both feeling and knowledge level, we can explain much more than with knowledge level alone,” said Minsu Ha, lead author on the paper and a Ph.D. candidate in the School of Teaching and Learning.
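
The kind of comparison described can be sketched as follows. The data are simulated, not the study’s, and the coefficients are assumptions chosen only to illustrate the analysis: fit acceptance scores against knowledge alone, then against knowledge plus felt certainty, and compare the variance explained.

```python
# Sketch of comparing predictors of acceptance with ordinary least squares.
# The data below are simulated for illustration; they are not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 124  # same number of participants as the study; everything else is invented

knowledge = rng.normal(size=n)
certainty = rng.normal(size=n)
# Assumed ground truth for the illustration: felt certainty matters more.
acceptance = 0.2 * knowledge + 0.7 * certainty + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

print("R^2, knowledge only:        ", round(r_squared(knowledge[:, None], acceptance), 3))
print("R^2, knowledge + certainty: ",
      round(r_squared(np.column_stack([knowledge, certainty]), acceptance), 3))
```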

In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For the subjects of this study, belonging to a religion had almost no additional impact on beliefs about evolution, beyond subjects’ feelings of certainty.

These results also provide a useful way of looking at the perceived conflict between religion and science when it comes to teaching evolution, according to Haury. “Intuitive cognition not only opens a new door to approach the issue,” he said, “it also gives us a way of addressing that issue without directly questioning religious views.”

When choosing a setting for their study, the team found that Korean teacher preparation programs were ideal. “In Korea, people all take the same classes over the same time period and are all about the same age, so it takes out a lot of extraneous factors,” said Haury. “We wouldn’t be able to find a sample group like this in the United States.”

Unlike in the U.S., about half of Koreans do not identify themselves as belonging to any particular religion. But according to Ha, who is from Korea, certain religious groups consider the topic of evolution just as controversial as in the U.S.

To ensure that their results were relevant to U.S. settings, the researchers compared how the Korean students did on the knowledge tests with previous studies of U.S. students. “We found that both groups were comparable in terms of overall performance,” said Haury.

For teaching evolution, the researchers suggest using exercises that allow students to become aware of their brains’ dual processing. Knowing that sometimes what their “gut” says is in conflict with what their “head” knows may help students judge ideas on their merits.

“Educationally, we think that’s a place to start,” said Haury. “It’s a concrete way to show them, look — you can be fooled and make a bad decision, because you just can’t deny your gut.”

Ha and Haury collaborated on this study with Ross Nehm, associate professor of education at the Ohio State University. The research was funded by the National Science Foundation.

Into the mind of a Neanderthal (New Scientist)

18 January 2012
Magazine issue 2847

Neanderthals shared about 99.84 per cent of their DNA with us (Image: Action Press/Rex Features)

What would have made them laugh? Or cry? Did they love home more than we do? Meet the real Neanderthals

A NEANDERTHAL walks into a bar and says… well, not a lot, probably. Certainly he or she could never have delivered a full-blown joke of the type modern humans would recognise because a joke hinges on surprise juxtapositions of unexpected or impossible events. Cognitively, it requires quite an advanced theory of mind to put oneself in the position of one or more of the actors in that joke – and enough working memory (the ability to actively hold information in your mind and use it in various ways).

So does that mean our Neanderthal had no sense of humour? No: humans also recognise the physical humour used to mitigate painful episodes – tripping, hitting our heads and so on – which does not depend on language or symbols. So while we could have sat down with Neanderthals and enjoyed the slapstick of The Three Stooges or Lee Evans, the verbal complexities of Twelfth Night would have been lost on them.

Humour is just one aspect of Neanderthal life we have been plotting for some years in our mission to make sense of their cognitive life. So what was it like to be a Neanderthal? Did they feel the same way we do? Did they fall in love? Have a bad day? Palaeoanthropologists now know a great deal about these ice-age Europeans who flourished between 200,000 and 30,000 years ago. We know, for example, that Neanderthals shared about 99.84 per cent of their DNA with us, and that we and they evolved separately for several hundred thousand years. We also know Neanderthal brains were a bit larger than ours and were shaped a bit differently. And we know where they lived, what they ate and how they got it.

Skeletal evidence shows that Neanderthal men, women and children led very strenuous lives, preoccupied with hunting large mammals. They often made tactical use of terrain features to gain as much advantage as possible, but administered the coup de grace with thrusting spears. Based on their choice of stone for tools, we know they almost never travelled outside small home territories that were rarely over 1000 square kilometres.

The Neanderthal style of hunting often resulted in injuries, and the victims were often nursed back to health by others. But few would have survived serious lower body injuries, since individuals who could not walk might well have been abandoned. It looks as if Neanderthals had well-developed way-finding and tactical abilities, and empathy for group members, but also that they made pragmatic decisions when necessary.

Looking closely at the choices Neanderthals made when they manufactured and used tools shows that they organised their technical activities much as artisans, such as blacksmiths, organise their production. Like blacksmiths, they relied on “expert” cognition, a form of observational learning and practice acquired through apprenticeship that relies heavily on long-term procedural memory.

The only obvious difference between Neanderthal technical thinking and ours lay in innovation. Although Neanderthals invented the practice of hafting stone points onto spears, this was one of very few innovations over several hundred thousand years. Active invention relies on thinking by analogy and a good amount of working memory, implying they may have had a reduced capacity in these respects. Neanderthals may have relied more heavily than we do on well-learned procedures of expert cognition.

As for the neighbourhood, the size and distribution of archaeological sites shows that Neanderthals spent their lives mostly in small groups of five to 10 individuals. Several such groups would come together briefly after especially successful hunts, suggesting that Neanderthals also belonged to larger communities but that they seldom made contact with people outside those groupings.

Many Neanderthal sites have rare pieces of high-quality stone from more distant sources (more than 100 kilometres), but not enough to indicate trade or even regular contact with other communities. A more likely scenario is that an adolescent boy or girl carried the material with them when they attached themselves to a new community. The small size of Neanderthal territories would have made some form of “marrying out” essential.

We can also assume that Neanderthals had some form of marriage because pair-bonding between men and women, and joint provisioning for their offspring, had been a feature of hominin social life for over a million years. They also protected corpses by covering them with rocks or placing them in shallow pits, suggesting the kinds of intimate, embodied social and cognitive interaction typical of our own family life.

But the Neanderthals’ short lifespan – few lived past 35 – meant that other features of our more recent social past were absent: elders, for example, were rare. And they almost certainly lacked the cognitive abilities for dealing with strangers that evolved in modern humans, who lived in larger groups numbering in the scores, belonged to larger communities in the hundreds or more, and established and maintained contacts with distant groups.

One cognitive ability that evolved in modern humans as a result was the “cheater detection” ability described by evolutionary psychologist Leda Cosmides, at the University of California, Santa Barbara. Another was an ability to judge the value of one commodity in terms of another, what anthropologist Alan Page Fiske at the University of California, Los Angeles, calls the “market pricing” ability. Both are key reasoning skills that evolved to allow interaction with acquaintances and strangers, neither of which was a regular feature of Neanderthal home life.

There are good circumstantial reasons for thinking that Neanderthals had language, with words and some kind of syntax; some of their technology and hunting tactics would have been difficult to learn and execute without it. Moreover, Neanderthal brains had a well-developed Broca’s area, and their DNA includes the FOXP2 gene carried by modern humans, which is involved in speech production. Unfortunately, none of this reveals anything specific about Neanderthal language. It could have been very different from ours or only slightly so; we just don’t know.

Having any sort of language could also have exposed Neanderthals to problems modern humans face, such as schizophrenia, according to one theory that puts the disease down to coordination problems between the brain’s left and right hemispheres.

But while Neanderthals would have had a variety of personality types, just as we do, their way of life would have selected for an average profile quite different from ours. Jo or Joe Neanderthal would have been pragmatic, capable of leaving group members behind if necessary, and stoical, to deal with frequent injuries and lengthy convalescence. He or she had to be risk-tolerant to hunt large beasts at close quarters, and needed sympathy and empathy to care for the injured and dead, yet was also neophobic, dogmatic and xenophobic.

So we could have recognised and interacted with Neanderthals, but we would have noticed these significant cognitive differences. They would have been better at well-learned, expert cognition than modern humans, but not as good at the development of novel solutions. They were adept at intimate, small-scale social cognition, but lacked the cognitive tools to interact with acquaintances and strangers, including the extensive use of symbols.

In the final count, when Neanderthals and modern humans found themselves competing across the European landscape 30,000 years ago, those cognitive differences may well have been decisive in seeing off the Neanderthals.

Profile
Thomas Wynn is a professor of anthropology and Frederick L. Coolidge is a professor of psychology at the University of Colorado, Colorado Springs. For the past decade they have worked on the evolution of cognition. Their new book is How to Think Like a Neandertal (Oxford University Press, 2012)

Climatic fluctuations drove key events in human evolution (University of Liverpool)

21-Sep-2011 – University of Liverpool

Research at the University of Liverpool has found that periods of rapid fluctuation in temperature coincided with the emergence of the first distant relatives of human beings and the appearance and spread of stone tools.

Dr Matt Grove from the School of Archaeology, Classics and Egyptology reconstructed likely responses of human ancestors to the climate of the past five million years using genetic modelling techniques. When results were mapped against the timeline of human evolution, Dr Grove found that key events coincided with periods of high variability in recorded temperatures.

Dr Grove said: “The study confirmed that a major human adaptive radiation – a pattern whereby the number of coexisting species increases rapidly before crashing again to near previous levels – coincided with an extended period of climatic fluctuation. Following the onset of high climatic variability around 2.7 million years ago a number of new species appear in the fossil record, with most disappearing by 1.5 million years ago. The first stone tools appear at around 2.6 million years ago, and doubtless assisted some of these species in responding to the rapidly changing climatic conditions.

“By 1.5 million years ago we are left with a single human ancestor – Homo erectus. The key to the survival of Homo erectus appears to be its behavioural flexibility – it is the most geographically widespread species of the period, and endures for over one and a half million years. Whilst other species may have specialized in environments that subsequently disappeared – causing their extinction – Homo erectus appears to have been a generalist, able to deal with many climatic and environmental contingencies.”

Dr Grove’s research is the first to explicitly model ‘Variability Selection’, an evolutionary process proposed by Professor Rick Potts in the late 1990s, and supports the pervasive influence of this process during human evolution. Variability selection suggests that evolution, when faced with rapid climatic fluctuation, should respond to the range of habitats encountered rather than to each individual habitat in turn; the timeline of variability selection established by Dr Grove suggests that Homo erectus could be a product of exactly this process.

Linking climatic fluctuation to the evolutionary process has implications for the current global climate change debate. Dr Grove said: “Though often discussed under the banner term of ‘global warming’, what we see in many areas of the world today is in fact an increased annual range of temperatures and conditions; this means in particular that third world human populations, many living in what are already marginal environments, will face ever more difficult situations. The current pattern of human-induced climate change is unlike anything we have seen before, and is disproportionately affecting areas whose inhabitants do not have the technology required to deal with it.”

The research is published in The Journal of Human Evolution and The Journal of Archaeological Science.

Philosophers Notwithstanding, Kansas School Board Redefines Science (N.Y. Times)

By DENNIS OVERBYE
Published: November 15, 2005

Once it was the left who wanted to redefine science.

In the early 1990’s, writers like the Czech playwright and former president Vaclav Havel and the French philosopher Bruno Latour proclaimed “the end of objectivity.” The laws of science were constructed rather than discovered, some academics said; science was just another way of looking at the world, a servant of corporate and military interests. Everybody had a claim on truth.

The right defended the traditional notion of science back then. Now it is the right that is trying to change it.

On Tuesday, fueled by the popular opposition to the Darwinian theory of evolution, the Kansas State Board of Education stepped into this fraught philosophical territory. In the course of revising the state’s science standards to include criticism of evolution, the board promulgated a new definition of science itself.

The changes in the official state definition are subtle and lawyerly, and involve mainly the removal of two words: “natural explanations.” But they are a red flag to scientists, who say the changes obliterate the distinction between the natural and the supernatural that goes back to Galileo and the foundations of science.

The old definition reads in part, “Science is the human activity of seeking natural explanations for what we observe in the world around us.” The new one calls science “a systematic method of continuing investigation that uses observation, hypothesis testing, measurement, experimentation, logical argument and theory building to lead to more adequate explanations of natural phenomena.”

Adrian Melott, a physics professor at the University of Kansas who has long been fighting Darwin’s opponents, said, “The only reason to take out ‘natural explanations’ is if you want to open the door to supernatural explanations.”

Gerald Holton, a professor of the history of science at Harvard, said removing those two words and the framework they set means “anything goes.”

The authors of these changes say that presuming the laws of science can explain all natural phenomena promotes materialism, secular humanism, atheism and leads to the idea that life is accidental. Indeed, they say in material online at kansasscience2005.com, it may even be unconstitutional to promulgate that attitude in a classroom because it is not ideologically “neutral.”

But many scientists say that characterization is an overstatement of the claims of science. The scientist’s job description, said Steven Weinberg, a physicist and Nobel laureate at the University of Texas, is to search for natural explanations, just as a mechanic looks for mechanical reasons why a car won’t run.

“This doesn’t mean that they commit themselves to the view that this is all there is,” Dr. Weinberg wrote in an e-mail message. “Many scientists (including me) think that this is the case, but other scientists are religious, and believe that what is observed in nature is at least in part a result of God’s will.”

The opposition to evolution, of course, is as old as the theory itself. “This is a very long story,” said Dr. Holton, who attributed its recent prominence to politics and the drive by many religious conservatives to tar science with the brush of materialism.

How long the Kansas changes will last is anyone’s guess. The state board tried to abolish the teaching of evolution and the Big Bang in schools six years ago, only to reverse course in 2001.

As it happened, the Kansas vote last week came on the same day that voters in Dover, Pa., ousted the local school board that had been sued for introducing the teaching of intelligent design.

As Dr. Weinberg noted, scientists and philosophers have been trying to define science, mostly unsuccessfully, for centuries.

When pressed for a definition of what they do, many scientists eventually fall back on the notion of falsifiability propounded by the philosopher Karl Popper. A scientific statement, he said, is one that can be proved wrong, like “the sun always rises in the east” or “light in a vacuum travels 186,000 miles a second.” By Popper’s rules, a law of science can never be proved; it can only be used to make a prediction that can be tested, with the possibility of being proved wrong.

But the rules get fuzzy in practice. For example, what is the role of intuition in analyzing a foggy set of data points? James Robert Brown, a philosopher of science at the University of Toronto, said in an e-mail message: “It’s the widespread belief that so-called scientific method is a clear, well-understood thing. Not so.” It is learned by doing, he added, and for that good examples and teachers are needed.

One thing scientists agree on, though, is that the requirement of testability excludes supernatural explanations. The supernatural, by definition, does not have to follow any rules or regularities, so it cannot be tested. “The only claim regularly made by the pro-science side is that supernatural explanations are empty,” Dr. Brown said.

The redefinition by the Kansas board will have nothing to do with how science is performed, in Kansas or anywhere else. But Dr. Holton said that if more states changed their standards, it could complicate the lives of science teachers and students around the nation.

He added that Galileo – who started it all, and paid the price – had “a wonderful way” of separating the supernatural from the natural. There are two equally worthy ways to understand the divine, Galileo said. “One was reverent contemplation of the Bible, God’s word,” Dr. Holton said. “The other was through scientific contemplation of the world, which is his creation.

“That is the view that I hope the Kansas school board would have adopted.”

Genes, germs and the origins of politics (New Scientist)

NS 2813: Genes, germs and the origins of politics

* 18 May 2011 by Jim Giles

A controversial new theory claims fear of infection makes the difference between democracy and dictatorship

COMPARE these histories. In Britain, democracy evolved steadily over hundreds of years. During the same period, people living in what is now Somalia had many rulers, but almost all deprived them of the chance to vote. It’s easy to find other stark contrasts. Citizens of the United States can trace their right to vote back to the end of the 18th century. In Syria, many citizens cannot trace their democratic rights anywhere – they are still waiting for the chance to take part in a meaningful election.

Conventional explanations for the existence of such contrasting political regimes involve factors such as history, geography, and the economic circumstances and culture of the people concerned, to name just a few. But the evolutionary biologist Randy Thornhill has a different idea. He says that the nature of the political system that holds sway in a particular country – whether it is a repressive dictatorship or a liberal democracy – may be determined in large part by a single factor: the prevalence of infectious disease.

It’s an idea that many people will find outrageously simplistic. How can something as complex as political culture be explained by just one environmental factor? Yet nobody has managed to debunk it, and its proponents are coming up with a steady flow of evidence in its favour. “It’s rather astonishing, and it could be true,” says Carlos Navarrete, a psychologist at Michigan State University in East Lansing.

Thornhill is no stranger to controversy, having previously co-authored A Natural History of Rape, a book proposing an evolutionary basis for rape. His iconoclastic theory linking disease to politics was inspired in part by observations of how an animal’s development and behaviour can respond rapidly to physical dangers in a region, often in unexpected ways. Creatures at high risk of being eaten by predators, for example, often reach sexual maturity at a younger age than genetically similar creatures in a safer environment, and are more likely to breed earlier in their lives. Thornhill wondered whether threats to human lives might have similarly influential consequences for our psychology.

The result is a hypothesis known as the parasite-stress model, which Thornhill developed at the University of New Mexico, Albuquerque, with his colleague Corey Fincher.

Xenophobic instincts

The starting point for Thornhill and Fincher’s thinking is a basic human survival instinct: the desire to avoid illness. In a region where disease is rife, they argue, fear of contagion may cause people to avoid outsiders, who may be carrying a strain of infection to which they have no immunity. Such a mindset would tend to make a community as a whole xenophobic, and might also discourage interaction between the various groups within a society – the social classes, for instance – to prevent unnecessary contact that might spread disease. What is more, Thornhill and Fincher argue, it could encourage people to conform to social norms and to respect authority, since adventurous behaviour may flout rules of conduct set in place to prevent contamination.

Taken together, these attitudes would discourage the rich and influential from sharing their wealth and power with those around them, and inhibit the rest of the population from going against the status quo and questioning the authority of those above them. This is clearly not a situation conducive to democracy. When the threat of disease eases, however, these influences no longer hold sway, allowing forces that favour a more democratic social order to come to the fore.

That’s the idea, anyway. But where is the evidence?

The team had some initial support from earlier studies that had explored how a fear of disease affects individual attitudes. In 2006, for example, Navarrete found that when people are prompted to think about disgusting objects, such as spoilt food, they become more likely to express nationalistic values and show a greater distrust of foreigners (Evolution and Human Behavior, vol 27, p 270). More recently, a team from Arizona State University in Tempe found that reading about contagious illnesses made people less adventurous and open to new experiences, suggesting that they had become more inward-looking and conformist (Psychological Science, vol 21, p 440).

Temporarily shifting individual opinions is one thing, but Thornhill and Fincher needed to show that these same biases could change the social outlook of a whole society. Their starting point for doing so was a description of cultural attitudes called the “collectivist-individualist” scale. At one end of this scale lies the collectivist outlook, in which people place the overall good of society ahead of the freedom of action of the individuals within it. Collectivist societies are often, though not exclusively, characterised by a greater respect for authority – if it’s seen as being necessary for the greater good. They also tend to be xenophobic and conformist. At the other end there is the individualist viewpoint, which has more emphasis on openness and freedom for the individual.

Pathogen peril

In 2008, the duo teamed up with Damian Murray and Mark Schaller of the University of British Columbia in Vancouver, Canada, to test the idea that societies with more pathogens would be more collectivist. They rated people in 98 different nations and regions, from Estonia to Ecuador, on the collectivist-individualist scale, using data from questionnaires and studies of linguistic cues that can betray a social outlook. Sure enough, they saw a correlation: the greater the threat of disease in a region, the more collectivist people’s attitudes were (Proceedings of the Royal Society B, vol 275, p 1279). The correlation remained even when they controlled for potential confounding factors, such as wealth and urbanisation.
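As an aside, the kind of test described above can be sketched in a few lines of code. The snippet below is not the authors’ analysis or data: it uses synthetic numbers and hypothetical variable names purely to illustrate the logic of checking a cross-national correlation and then asking whether it survives statistical controls such as wealth and urbanisation.

```python
# Illustrative sketch only: synthetic numbers stand in for the published
# cross-national measures, and the variable names are ours, not the authors'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 98  # the 2008 study compared 98 nations and regions

pathogen = rng.normal(size=n)   # stand-in pathogen-prevalence score
wealth = rng.normal(size=n)     # stand-in for, say, log GDP per capita
urban = rng.normal(size=n)      # stand-in urbanisation measure
# Fabricated outcome so the script runs end to end
collectivism = 0.6 * pathogen + 0.2 * wealth + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"collectivism": collectivism, "pathogen": pathogen,
                   "wealth": wealth, "urban": urban})

# Raw (uncontrolled) association between collectivism and pathogen load
print("Spearman r:", df["collectivism"].corr(df["pathogen"], method="spearman"))

# Does the association survive controls? Regress collectivism on pathogen
# prevalence while holding wealth and urbanisation constant.
fit = smf.ols("collectivism ~ pathogen + wealth + urban", data=df).fit()
print("controlled coefficient:", fit.params["pathogen"],
      "p-value:", fit.pvalues["pathogen"])
```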

A study soon followed showing similar patterns when comparing US states. In another paper, Murray and Schaller examined a different set of data and showed that cultural differences in one collectivist trait – conformity – correlate strongly with disease prevalence (Personality and Social Psychology Bulletin, vol 37, p 318).

Thornhill and Fincher’s next challenge was to find evidence linking disease prevalence not just with cultural attitudes but with the political systems they expected would go with them. To do so, they used a 66-point scale of pathogen prevalence, based on data assembled by the Global Infectious Diseases and Epidemiology Online Network. They then compared their data set with indicators that assess the politics of a country. Democracy is a tough concept to quantify, so the team looked at a few different measures, including the Freedom House Survey, which is based on the subjective judgements of a team of political scientists working for an independent American think tank, and the Index of Democratization, which is based on estimates of voter participation (measured by how much of a population cast their votes and the number of referendums offered to a population) and the amount of competition between political parties.
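For readers curious how a composite such as the Index of Democratization can be built from the two ingredients mentioned here, the toy function below multiplies a competition score by a participation score. The multiplicative form follows Vanhanen’s widely cited formulation, but the exact weighting, and the handling of referendums (which the index also counts towards participation), should be read as an assumption rather than as the precise figures the team used.

```python
def democratization_index(largest_party_vote_share_pct: float,
                          votes_cast: float, population: float) -> float:
    """Toy composite of the two ingredients named in the text.

    competition   = 100 minus the largest party's share of the vote (%)
    participation = share of the total population that voted (%)
    Their product is divided by 100 to keep the index on a 0-100 scale.
    """
    competition = 100.0 - largest_party_vote_share_pct
    participation = 100.0 * votes_cast / population
    return competition * participation / 100.0

# Hypothetical numbers, for illustration only: a 40% largest-party vote
# share and half the population voting give an index of 30.
print(democratization_index(40.0, votes_cast=30_000_000, population=60_000_000))
```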

The team’s results, published in 2009, showed that each measure varied strongly with pathogen prevalence, just as their model predicted (Biological Reviews, vol 84, p 113). For example, when considering disease prevalence, Somalia is 22nd on the list of nations, while the UK comes in 177th. The two countries come out at opposite ends of the democratic scale (see “An infectious idea”).

Importantly, the relationship still holds when you look at historical records of pathogen prevalence. This, together with those early psychological studies of immediate reactions to disease, suggests it is a nation’s health driving its political landscape, and not the other way around, according to the team.

Last year, they published a second paper that used more detailed data of the diseases prevalent in each region. They again found that measures of collectivism and democracy correlate with the presence of diseases that are passed from human to human – though not with the prevalence of diseases transmitted directly from animals to humans, like rabies (Evolutionary Psychology, vol 8, p 151). Since collectivist behaviours would be less important for preventing such infections, this finding fits with Thornhill and Fincher’s hypothesis.

“Thornhill’s work strikes me as interesting and promising,” says Ronald Inglehart, a political scientist at the University of Michigan in Ann Arbor who was unaware of it before we contacted him. He notes that it is consistent with his own finding that a society needs to have a degree of economic security before democracy can develop. Perhaps this goes hand in hand with a reduction in disease prevalence to signal the move away from collectivism, he suggests.

Inglehart’s comments nevertheless highlight a weakness in the evidence so far assembled in support of the parasite-stress model. An association between disease prevalence and democracy does not necessarily mean that one drives the other. Some other factor may drive both the prevalence of disease in an area and its political system. In their 2009 paper, Thornhill and Fincher managed to eliminate some of the possible “confounders”. For example, they showed that neither a country’s overall wealth nor the way it is distributed can adequately explain the link between the prevalence of disease there and how democratic it is.

But many other possibilities remain. For example, pathogens tend to be more prevalent in the tropics, so perhaps warmer climates encourage collectivism. Also, many of the nations that score high for disease and low for democracy are in sub-Saharan Africa, and have a history of having been colonised, and of frequent conflict and foreign exploitation since independence. Might the near-constant threat of war better explain that region’s autocratic governments? There’s also the possibility that education and literacy would have an impact, since better educated people may be more likely to question authority and demand their rights to a democracy. Epidemiologist Valerie Curtis of the London School of Hygiene and Tropical Medicine thinks such factors might be the ones that count, and says the evidence so far does not make the parasite-stress theory any more persuasive than these explanations.

Furthermore, some nations buck the trend altogether. Take the US and Syria, for example: they have sharply contrasting political systems but an almost identical prevalence of disease. Though even the harshest critic of the theory would not expect a perfect correlation, such anomalies require some good explanations.

Also lacking so far in their analysis is a coherent account of how historical changes in the state of public health are linked to political change. If Thornhill’s theory is correct, improvements in a nation’s health should lead to noticeable changes in social outlook. Evidence consistent with this idea comes from the social revolution of the 1960s in much of western Europe and North America, which involved a shift from collectivist towards individualist thinking. This was preceded by improvements in public health in the years following the second world war – notably the introduction of penicillin, mass vaccination and better malaria control.

There are counter-examples, too. It is not clear whether the opening up of European society during the 18th century was preceded by any major improvements in people’s health, for example. Nor is there yet any clear evidence linking the current pro-democracy movements in the Middle East and north Africa to changes in disease prevalence. The theory also predicts that episodes such as the recent worldwide swine-flu epidemic should cause a shift away from democracy and towards authoritarian, collectivist attitudes. Yet as Holly Arrow, a psychologist at the University of Oregon in Eugene, points out, no effect has been recorded.

Mysterious mechanisms

To make the theory stick, Thornhill and his collaborators will also need to provide a mechanism for their proposed link between pathogens and politics. If cultural changes are responsible, young people might learn to avoid disease – and outsiders – from the behaviour of those around them. Alternatively, the reaction could be genetically hard-wired. So far, it has not proved possible to eliminate any of the possible mechanisms. “It’s an enormous set of unanswered questions. I expect it will take many years to explore,” Schaller says.

One possible genetic explanation involves 5-HTTLPR, a gene that regulates levels of the neurotransmitter serotonin. People carrying the short form of the gene are more likely to be anxious and to be fearful of health risks, relative to those with the long version. These behaviours could be a life-saver if they help people avoid situations that would put them at risk of infection, so it might be expected that the short version of the gene is favoured in parts of the world where disease risk is high. People with the longer version of 5-HTTLPR, on the other hand, tend to have higher levels of serotonin and are therefore more extrovert and more prone to risk-taking. This could bring advantages such as an increased capacity to innovate, so the long form of the gene should be more common in regions relatively free from illness.

That pattern is exactly what neuroscientists Joan Chiao and Katherine Blizinsky at Northwestern University in Evanston, Illinois, have reported in a paper published last year. Significantly, nations where the short version of the gene is more common also tend to have more collectivist attitudes (Proceedings of the Royal Society B, vol 277, p 529).

It is only tentative evidence, and some doubt that Chiao and Blizinsky’s findings are robust enough to support their conclusions (Proceedings of the Royal Society B, vol 278, p 329). But if the result pans out with further research, it suggests the behaviours involved in the parasite-stress model may be deeply ingrained in our genetic make-up, providing a hurdle to more rapid political change in certain areas. While no one is saying that groups with a higher proportion of short versions of the gene will never develop a democracy, the possibility that some societies are more genetically predisposed to it than others is nevertheless an uncomfortable idea to contemplate.

Should the biases turn out to be more temporary – if flexible psychological reactions to threat, or cultural learning, are the more important mechanisms – the debate might turn to potential implications of the theory. Projects aiming to improve medical care in poor countries might also help drive a move towards more democratic and open governments, for example, giving western governments another incentive to fund these schemes. “The way to develop a region is to emancipate it from parasites,” says Thornhill.

Remarks like that seem certain to attract flak. Curtis, for instance, bristled a little when New Scientist put the idea to her, pointing out that the immediate threat to human life is a pressing enough reason to be concerned about infectious disease.

Thornhill still has a huge amount of work ahead of him if he is to provide a convincing case that will assuage all of these doubts. In the meantime, his experience following publication of A Natural History of Rape has left him prepared for a hostile reception. “I had threats by email and phone,” he recalls. “You’re sometimes going to hurt people’s feelings. I consider it all in a day’s work.”

Jim Giles is a New Scientist correspondent based in San Francisco

Man’s best friends: How animals made us human (New Scientist)

31 May 2011 by Pat Shipman
Magazine issue 2814.

Video: How animals made us human

Our bond with animals goes far deeper than food and companionship: it drove our ancestors to develop tools and language

TRAVEL almost anywhere in the world and you will see something so common that it may not even catch your attention. Wherever there are people, there are animals: animals being walked, herded, fed, watered, bathed, brushed or cuddled. Many, such as dogs, cats and sheep, are domesticated but you will also find people living alongside wild and exotic creatures such as monkeys, wolves and binturongs. Close contact with animals is not confined to one particular culture, geographic region or ethnic group. It is a universal human trait, which suggests that our desire to be with animals is deeply embedded and very ancient.

On the face of it this makes little sense. In the wild, no other mammal adopts individuals from another species; badgers do not tend hares, deer do not nurture baby squirrels, lions do not care for giraffes. And there is a good reason why. Since the ultimate prize in evolution is perpetuating your genes in your offspring and their offspring, caring for an individual from another species is counterproductive and detrimental to your success. Every mouthful of food you give it, every bit of energy you expend keeping it warm (or cool) and safe, is food and energy that does not go to your own kin. Even if pets offer unconditional love, friendship, physical affection and joy, that cannot explain why or how our bond with other species arose in the first place. Who would bring a ferocious predator such as a wolf into their home in the hope that thousands of years later it would become a loving family pet?

I am fascinated by this puzzle and as a palaeoanthropologist have tried to understand it by looking to the deep past for the origins of our intimate link with animals. What I found was a long trail, an evolutionary trajectory that I call the animal connection. What’s more, this trail links to three of the most important developments in human evolution: tool-making, language and domestication. If I am correct, our affinity with other species is no mere curiosity. Instead, the animal connection is a hugely significant force that has shaped us and been instrumental in our global spread and success in the world.

The trail begins at least 2.6 million years ago. That is when the first flaked stone tools appear in the archaeological record, at Gona in the Afar region of Ethiopia (Nature, vol 385, p 333). Inventing stone tools is no trivial task. It requires the major intellectual breakthrough of understanding that the apparent properties of an object can be altered. But the prize was great. Those earliest flakes are found in conjunction with fossilised animal bones, some of which bear cut marks. It would appear that from the start our ancestors were using tools to gain access to animal carcasses. Up until then, they had been largely vegetarian, upright apes. Now, instead of evolving the features that make carnivores effective hunters – such as swift locomotion, grasping claws, sharp teeth, great bodily strength and improved senses for hunting – our ancestors created their own adaptation by learning how to turn heavy, blunt stones into small, sharp items equivalent to razor blades and knives. In other words, early humans devised an evolutionary shortcut to becoming a predator.

That had many consequences. On the plus side, eating more highly nutritious meat and fat was a prerequisite to the increase in relative brain size that marks the human lineage. Since meat tends to come in larger packages than leaves, fruits or roots, meat-eaters can spend less time finding and eating food and more on activities such as learning, social interaction, observation of others and inventing more tools. On the minus side, though, preying on animals put our ancestors into direct competition with the other predators that shared their ecosystem. To get the upper hand, they needed more than just tools and that, I believe, is where the animal connection comes in.

Two and a half million years ago, there were 11 true carnivores in Africa. These were the ancestors of today’s lions, cheetahs, leopards and three types of hyena, together with five now extinct species: a long-legged hyena, a wolf-like canid, two sabretooth cats and a “false” sabretooth cat. All but three of these outweighed early humans, so hanging around dead animals would have been a very risky business. The new predator on the savannah would have encountered ferocious competition for prizes such as freshly killed antelope. Still, by 1.7 million years ago, two carnivore species were extinct – perhaps because of the intense competition – and our ancestor had increased enough in size that it outweighed all but four of the remaining carnivores.

Why did our lineage survive when true carnivores were going extinct? Working in social groups certainly helped, but hyenas and lions do the same. Having tools enabled early humans to remove a piece of a dead carcass quickly and take it to safety, too. But I suspect that, above all, the behavioural adaptation that made it possible for our ancestors to compete successfully with true carnivores was the ability to pay very close attention to the habits of both potential prey and potential competitors. Knowledge was power, so we acquired a deep understanding of the minds of other animals.

Out of Africa

Another significant consequence of becoming more predatory was a pressing need to live at lower densities. Prey species are common and often live in large herds. Predators are not, and do not, because they require large territories in which to hunt or they soon exhaust their food supply. The record of the geographic distribution of our ancestors provides more support for my idea that the animal connection has shaped our evolution. From the first appearance of our lineage 6 or 7 million years ago until perhaps 2 million years ago, all hominins were in Africa and nowhere else. Then early humans underwent a dramatic territorial expansion, forced by the demands of their new way of living. They spread out of Africa into Eurasia with remarkable speed, arriving as far east as Indonesia and probably China by about 1.8 million years ago. This was no intentional migration but simply a gradual expansion into new hunting grounds. First, an insight into the minds of other species had secured our success as predators; now, that success drove our expansion across Eurasia.

Throughout the period of these enormous changes in the lifestyle and ecology of our ancestors, gathering, recording and sharing knowledge became more and more advantageous. And the most crucial topic about which our ancestors amassed and exchanged information was animals.

How do I know this? No words or language remain from that time, so I cannot look for them. I can, however, look for symbols – since words are essentially symbolic – and that takes me to the wealth of prehistoric art that appears in Europe, Asia, Africa and Australia, starting about 50,000 years ago. Prehistoric art allows us to eavesdrop on the conversations of our ancestors and see the topic of discussion: animals, their colours, shapes, habits, postures, locomotion and social habits. This focus is even more striking when you consider what else might have been depicted. Pictures of people, social interactions and ceremonies are rare. Plants, water sources and geographic features are even scarcer, though they must have been key to survival. There are no images showing how to build shelters, make fires or create tools. Animal information mattered more than all of these.

The overwhelming predominance of animals in prehistoric art suggests that the animal connection – the evolutionary advantages of observing animals and collecting, compiling and sharing information about them – was a strong impetus to a second important development in human evolution: the development of language and enhanced communication. Of course, more was involved than simply coining words. Famously, vervet monkeys have different cries for eagles, leopards and snakes, but they cannot discuss dangerous-things-that-were-here-yesterday or ask “what ate my sibling?” or wonder if that danger might appear again tomorrow. They communicate with each other and share information, but they do not have language. The magical property of full language is that it comprises vocabulary and grammatical rules that can be combined and recombined in an infinite number of ways to convey fine shades of meaning.

Nobody doubts that language proved a major adaptive advantage to our ancestors in developing complex behaviours and sharing information. How it arose, however, remains a mystery. I believe I am the first to propose a continuity between the strong human-animal link that appeared 2.6 million years ago and the origin of language. The complexity and importance of animal-related information spurred early humans to move beyond what their primate cousins could achieve.

As our ancestors became ever more intimately involved with animals, the third and final product of the animal connection appeared. Domestication has long been linked with farming and the keeping of stock animals, an economic and social change from hunting and gathering that is often called the Neolithic revolution. Domestic animals are usually considered as commodities, “walking larders”, reflecting the idea that the basis of the Neolithic revolution was a drive for greater food security.

When I looked at the origins of domestication for clues to its underlying reasons, I found some fundamental flaws in this idea. Instead, my analysis suggests that domestication emerged as a natural progression of our close association with, and understanding of, other species. In other words, it was a product of the animal connection.

Man’s best friend

First, if domestication was about knowing where your next meal was coming from, then the first domesticate ought to have been a food source. It was not. According to a detailed analysis of fossil skulls carried out by Mietje Germonpré of the Royal Belgian Institute of Natural Sciences in Brussels and her colleagues, the earliest known dog skull is 32,000 years old (Journal of Archaeological Science, vol 36, p 473). The results have been greeted with some surprise, since other analyses have suggested dogs were domesticated around 17,000 years ago, but even that means they pre-date any other domesticated animal or plant by about 5000 years (see diagram). Yet dogs are not a good choice if you want a food animal: they are dangerous to domesticate, being derived from wolves, and, worst of all, they eat meat. If the objective of domestication was to have meat to eat, you would never select an animal that eats 2 kilograms of the stuff a day.

A sustainable relationship

My second objection to the idea that animals were domesticated simply for food turns on a paradox. Farming requires hungry people to set aside edible animals or seeds so as to have some to reproduce the following year. My Penn State colleague David Webster explores the idea in a paper due to appear in Current Anthropology. He concludes that it only becomes logical not to eat all you have if the species in question is already well on the way to being domesticated, because only then are you sufficiently familiar with it to know how to benefit from taking the long view. This means for an animal species to become a walking larder, our ancestors must have already spent generations living intimately with it, exerting some degree of control over breeding. Who plans that far in advance for dinner?

Then there’s the clincher. A domestic animal that is slaughtered for food yields little more meat than a wild one that has been hunted, yet requires more management and care. Such a system is not an improvement in food security. Instead, I believe domestication arose for a different reason, one that offsets the costs of husbandry. All domestic animals, and even semi-domesticated ones, offer a wealth of renewable resources that provide ongoing benefits as long as they are alive. They can provide power for hauling, transport and ploughing, wool or fur for warmth and weaving, milk for food, manure for fertiliser, fuel and building material, hunting assistance, protection for the family or home, and a disposal service for refuse and ordure. Domestic animals are also a mobile source of wealth, which can literally propagate itself.

Domestication, more than ever, drew upon our understanding of animals to keep them alive and well. It must have started accidentally and been a protracted reciprocal process of increasing communication that allowed us not just to tame other species but also to permanently change their genomes by selective breeding to enhance or diminish certain traits.

The great benefit for people of this caring relationship was a continuous supply of resources that enabled them to move into previously uninhabitable parts of the world. This next milestone in human evolution would have been impossible without the sort of close observation, accumulated knowledge and improved communication skills that the animal connection started selecting for when our ancestors began hunting at least 2.6 million years ago.

What does it matter if the animal connection is a fundamental and ancient influence on our species? I think it matters a great deal. The human-animal link offers a causal connection that makes sense of three of the most important leaps in our development: the invention of stone tools, the origin of language and the domestication of animals. That makes it a sort of grand unifying theory of human evolution.

And the link is as crucial today as it ever was. The fundamental importance of our relationship with animals explains why interacting with them offers various physical and mental health benefits – and why the annual expenditure on items related to pets and wild animals is so enormous.

Finally, if being with animals has been so instrumental in making humans human, we had best pay attention to this point as we plan for the future. If our species was born of a world rich with animals, can we continue to flourish in one where we have decimated biodiversity?

Pat Shipman is adjunct professor of biological anthropology at Penn State University. Her book The Animal Connection: A new perspective on what makes us human is published by W. W. Norton & Company on 13 June