When Whales and Humans Talk (Hakai Magazine)

Arctic people have been communicating with cetaceans for centuries—and scientists are finally taking note.

Tattooed Whale, 2016 by Tim Pitsiulak. Screen-print on Arches Cover Black. Reproduced with the permission of Dorset Fine Arts. April 3, 2018

Harry Brower Sr. was lying in a hospital bed in Anchorage, Alaska, close to death, when he was visited by a baby whale.

Although Brower’s body remained in Anchorage, the young bowhead took him more than 1,000 kilometers north to Barrow (now Utqiaġvik), where Brower’s family lived. They traveled together through the town and past the indistinct edge where the tundra gives way to the Arctic Ocean. There, in the ice-blue underwater world, Brower saw Iñupiat hunters in a sealskin boat closing in on the calf’s mother.

Brower felt the shuddering harpoon enter the whale’s body. He looked at the faces of the men in the umiak, including those of his own sons. When he awoke in his hospital bed as if from a trance, he knew precisely which man had made the kill, how the whale had died, and whose ice cellar the meat was stored in. He turned out to be right on all three counts.

Brower lived six years after the episode, dying in 1992 at the age of 67. In his final years, he discussed what he had witnessed with Christian ministers and Utqiaġvik’s whaling captains. The conversations ultimately led him to hand down new rules to govern hunting female whales with offspring, meant to communicate respect to whales and signal that people were aware of their feelings and needs. “[The whale] talked to me,” Brower recalls in a collection of his stories, The Whales, They Give Themselves. “He told me all the stories about where they had all this trouble out there on the ice.”

Not long ago, non-Indigenous scientists might have dismissed Brower’s experience as a dream or the inchoate ramblings of a sick man. But he and other Iñupiat are part of a deep history of Arctic and subarctic peoples who believe humans and whales can talk and share a reciprocal relationship that goes far beyond that of predator and prey. Today, as Western scientists try to better understand Indigenous peoples’ relationships with animals—as well as animals’ own capacity for thoughts and feelings—such beliefs are gaining wider recognition, giving archaeologists a better understanding of ancient northern cultures.

“If you start looking at the relationship between humans and animals from the perspective that Indigenous people themselves may have had, it reveals a rich new universe,” says Matthew Betts, an archaeologist with the Canadian Museum of History who studies Paleo-Eskimo cultures in the Canadian Arctic. “What a beautiful way to view the world.”

It’s not clear exactly when people developed the technology that allowed them to begin hunting whales, but scholars generally believe Arctic whaling developed off the coast of Alaska sometime between 600 and 800 CE. For thousands of years before then, Arctic people survived by hunting seals, caribou, and walruses at the edge of the sea ice.

One such group, the Dorset—known in Inuit oral tradition as the Tunit—were rumored to have been so strong the men could outrun caribou and drag a 1,700-kilogram walrus across the ice. The women were said to have fermented raw seal meat against the warmth of their skin, leaving it in their pants for days at a time. But despite their legendary survival skills, the Tunit died out 1,000 years ago.

An Inuit hunter sits on a whale that’s been hauled to shore for butchering in Point Hope, Alaska, in 1900. Photo by Hulton Deutsch/Getty Images

One theory for their mysterious disappearance is that they were outcompeted by people who had begun to move east into the Canadian Arctic—migrants from Alaska who brought sealskin boats allowing them to push off from shore and hunt whales. Each spring, bowhead whales weighing up to 54,000 kilograms pass through the leads of water that open into the sea ice, and with skill and luck, the ancestors of today’s Inuit and Iñupiat people could spear a cetacean as it surfaced to breathe.

The advent of whaling changed the North. For the first time, hunters could bring in enough meat to feed an entire village. Permanent settlements began springing up in places like Utqiaġvik that were reliably visited by bowheads—places still inhabited today. Social organizations shifted as successful whale hunters amassed wealth, became captains, and positioned themselves at the top of a developing social hierarchy. Before long, the whale hunt became the center of cultural, spiritual, and day-to-day life, and whales the cornerstone of many Arctic and subarctic cosmologies.

When agricultural Europeans began visiting and writing about the North in the 10th century, they were mesmerized by Aboriginal peoples’ relationships with whales. Medieval literature depicted the Arctic as a land of malevolent “monstrous fishes” and people who could summon them to shore through magical powers and mumbled spells. Even as explorers and missionaries brought back straightforward accounts of how individual whaling cultures went about hunting, butchering, and sharing a whale, it was hard to shake the sense of mysticism. In 1938, American anthropologist Margaret Lantis analyzed these scattered ethnographic accounts and concluded that Iñupiat, Inuit, and other northern peoples belonged to a circumpolar “whale cult.”

Lantis found evidence of this in widespread taboos and rituals meant to cement the relationship between people and whales. In many places, a recently killed whale was given a drink of fresh water, a meal, and even traveling bags to ensure a safe journey back to its spiritual home. Individual whalers had their own songs to call the whales to them. Sometimes shamans performed religious ceremonies inside circles made of whale bones. Stashes of whaling amulets—an ambiguous word used to describe everything from carved, jewelry-like charms to feathers or skulls—were passed from father to son in whaling families.

To non-Indigenous observers, it was all so mysterious. So unknowable. And for archaeologists and biologists especially, it was at odds with Western scientific values, which prohibited anything that smacked of anthropomorphism.
A whaler waits for the bowhead whales from shore in Utqiaġvik, Alaska, during whaling season in the Chukchi Sea. Photo by Steven J. Kazlowski/Alamy Stock Photo


In archaeology, such attitudes have limited our understanding of Arctic prehistory, says Erica Hill, a zooarchaeologist with the University of Alaska Southeast. Whaling amulets and bone circles were written off as ritualistic or supernatural with little exploration of what they actually meant to the people who created them. Instead, archaeologists who studied animal artifacts often focused on the tangible information they revealed about what ancient people ate, how many calories they consumed, and how they survived.

Hill is part of a burgeoning branch of archaeology that uses ethnographic accounts and oral histories to re-examine animal artifacts with fresh eyes—and interpret the past in new, non-Western ways. “I’m interested in this as part of our prehistory as humans,” Hill says, “but also in what it tells us about alternative ways of being.”

The idea that Indigenous people have spiritual relationships with animals is so well established in popular culture it’s cliché. Yet constricted by Western science and culture, few archaeologists have examined the record of human history with the perspective that animals feel emotions and can express those emotions to humans.

Hill’s interest in doing so was piqued in 2007, when she was excavating in Chukotka, Russia, just across the Bering Strait from Alaska. The site was estimated to be 1,000 to 2,000 years old, predating the dawn of whaling in the region, and was situated at the top of a large hill. As her team dug through the tundra, they uncovered six or seven intact walrus skulls deliberately arranged in a circle.

Like many archaeologists, Hill had been taught that ancient humans in harsh northern climates conserved calories and rarely expended energy doing things with no direct physical benefit. That people were hauling walrus skulls to a hilltop where there were plenty of similar-sized rocks for building seemed strange. “If you’ve ever picked up a walrus skull, they’re really, really heavy,” Hill says. So she started wondering: did the skulls serve a purpose that wasn’t strictly practical that justified the effort of carrying them uphill?

When Hill returned home, she began looking for other cases of “people doing funky stuff” with animal remains. There was no shortage of examples: shrines packed with sheep skulls, ceremonial burials of wolves and dogs, walrus-skull rings on both sides of the Bering Strait. To Hill, though, some of the most compelling artifacts came from whaling cultures.

Museum collections across North America, for instance, include a dazzling array of objects categorized as whaling amulets. From this grab bag, Hill identified 20 carved wooden objects. Many served as the seats of whaling boats. In the Iñupiaq language, they’re called either iktuġat or aqutim aksivautana, depending on dialect.

One in particular stands out. Hill was looking for Alaskan artifacts in a massive climate-controlled warehouse belonging to the Smithsonian's National Museum of Natural History in Washington, DC. The artifacts were housed in hundreds of floor-to-ceiling drawers, row after row of them, with little indication of what was inside. She pulled open one drawer and there it was—the perfect likeness of a bowhead whale staring back at her.

The object, likely from the late 19th century, probably functioned as a crosspiece. It was hewn from a hunk of driftwood into a crescent shape 21 centimeters long. Carved on one side was a bowhead, looking as it would look if you were gazing down on a whale from above, perhaps from a raven’s-eye perspective. A precious bead of obsidian was embedded in the blowhole. “It’s so elegant and simple but so completely whale,” Hill says. “It’s this perfect balance of minimalism and form.”

Sometime in the late 19th century, an Iñupiaq carver fashioned this amulet for an umiak out of driftwood, carving the likeness of a bowhead whale, its blowhole symbolized with a piece of obsidian. As with other whaling amulets Erica Hill has examined, this object may have also functioned as part of the boat's structure. Photo by Department of Anthropology, Smithsonian Institution (Cat. A347918)

Using Iñupiat oral histories and ethnographies recorded in the 19th and 20th centuries, Hill now knows that such amulets were meant to be placed in a boat with the likeness of the whale facing down, toward the ocean. The meticulously rendered art was thus meant not for humans, but for whales—to flatter them, Hill says, and call them to the hunters. “The idea is that the whale will be attracted to its own likeness, so obviously you want to depict the whale in the most positive way possible,” she explains.

Yupik stories from St. Lawrence Island tell of whales who might spend an hour swimming directly under an umiak, positioning themselves so they could check out the carvings and the men occupying the boat. If the umiak was clean, the carvings beautiful, and the men respectful, the whale might reposition itself to be harpooned. If the art portrayed the whale in an unflattering light or the boat was dirty, it indicated that the hunters were lazy and wouldn’t treat the whale’s body properly. Then the whale might swim away.

In “Sounding a Sea-Change: Acoustic Ecology and Arctic Ocean Governance” published in Thinking with Water, Shirley Roburn quotes Point Hope, Alaska, resident Kirk Oviok: “Like my aunt said, the whales have ears and are more like people,” he says. “The first batch of whales seen would show up to check which ones in the whaling crew would be more hospitable. … Then the whales would come back to their pack and tell them about the situation.”

The belief that whales have agency and can communicate their needs to people isn’t unique to the Arctic. Farther south, on Washington’s Olympic Peninsula and British Columbia’s Vancouver Island, Makah and Nuu-chah-nulth whalers observed eight months of rituals meant to communicate respect in the mysterious language of whales. They bathed in special pools, prayed, spoke quietly, and avoided startling movements that might offend whales. Right before the hunt, the whalers sang a song asking the whale to give itself.

In Makah and Nuu-chah-nulth belief, as in many Arctic cultures, whales weren’t just taken—they willingly gave themselves to human communities. A whale that offered its body wasn’t sentencing itself to death. It was choosing to be killed by hunters who had demonstrated, through good behavior and careful adherence to rituals, that they would treat its remains in a way that would allow it to be reborn. Yupik tradition, for example, holds that beluga whales once lived on land and long to return to terra firma. In exchange for offering itself to a Yupik community, a beluga expected to have its bones given the ritualistic treatment that would allow it to complete this transition and return to land, perhaps as one of the wolves that would gnaw on the whale’s bones.

According to Hill, many of the objects aiding this reciprocity—vessels used to offer whales a drink of fresh water, amulets that hunters used to negotiate relationships with animal spirits—weren’t just reserved for shamanistic ceremonies. They were part of everyday life; the physical manifestation of an ongoing, daily dialogue between the human and animal worlds.

While Westerners domesticated and eventually industrialized the animals we eat—and thus came to view them as dumb and inferior—Arctic cultures saw whale hunting as a match between equals. Bipedal humans with rudimentary technology faced off against animals as much as 1,000 times their size that were emotional, thoughtful, and influenced by the same social expectations that governed human communities. In fact, whales were thought to live in an underwater society paralleling that above the sea.

a bowhead whale swimming amid multi-layer sea ice

It’s difficult to assess populations of animals that swim under the ice, far from view, like bowhead whales. But experienced Iñupiat whalers are good at it. Photo by Steven Kazlowski/Minden Pictures

Throughout history, similar beliefs have guided other human-animal relationships, especially in hunter-gatherer cultures that shared their environment with big, potentially dangerous animals. Carvings left behind by the Tunit, for example, suggest a belief that polar bears possessed a kind of personhood allowing them to communicate with humans, while some Inuit believed walruses could listen to humans talking about them and react accordingly.

Whether or not those beliefs are demonstrably true, says Hill, they “make room for animal intelligence and feelings and agency in ways that our traditional scientific thinking has not.”

Today, as archaeologists like Hill and Matthew Betts shift their interpretation of the past to better reflect Indigenous worldviews, biologists too are shedding new light on whale behavior and biology that seems to confirm the traits Indigenous people have attributed to whales for more than 1,000 years. Among them is Hal Whitehead, a professor at Dalhousie University in Nova Scotia who argues that cetaceans have their own culture—a word typically reserved for human societies.

By this definition, culture is social learning that’s passed down from one generation to the next. Whitehead finds evidence for his theory in numerous recent studies, including one that shows bowhead whales in the North Pacific, off the Alaskan coast, and in the Atlantic Ocean near Greenland sing different songs, the way human groups might have different styles of music or linguistic dialects. Similarly, pods of resident killer whales living in the waters off south Vancouver Island greet each other with different behaviors than killer whales living off north Vancouver Island, despite the fact that the groups are genetically almost identical and have overlapping territories.

Plus, calves spend years with their mothers, developing the strong mother-offspring bonds that serve to transfer cultural information, and bowhead whales live long enough to accumulate the kind of environmental knowledge that would be beneficial to pass on to younger generations. We know this largely because of a harpoon tip that was found embedded in a bowhead in northern Alaska in 2007. This particular harpoon was only manufactured between 1879 and 1885 and wasn’t used for long after, meaning that the whale had sustained its injury at least 117 years before it finally died.

Other beliefs, too, are proving less farfetched than they once sounded. For years, scientists believed whales couldn’t smell, despite the fact that Iñupiat hunters claimed the smell of woodsmoke would drive a whale away from their camp. Eventually, a Dutch scientist dissecting whale skulls proved the animals did, indeed, have the capacity to smell. Even the Yupik belief that beluga whales were once land-dwelling creatures is rooted in reality: some 50 million years ago, the ancestor of modern-day whales walked on land. As if recalling this, whale fetuses briefly develop legs before losing them again.

Inuit hunters in Utqiaġvik, Alaska, paddle an umiak after a bowhead whale. Photo by Galen Rowell/Getty Images

None of this suggests that whales freely give themselves to humans. But once you understand the biological and intellectual capabilities of whales—as whaling cultures surely did—it’s less of a leap to conclude that cetaceans live in their own underwater society, and can communicate their needs and wishes to humans willing to listen.

With the dawn of the 20th century and the encroachment of Euro-Americans into the North, Indigenous whaling changed drastically. Whaling in the Makah and Nuu-chah-nulth Nations essentially ended in the 1920s after commercial whalers hunted the gray whale to near extinction. In Chukotka, Russian authorities in the 1950s replaced community-based whaling with state-run whaling.

Even the whaling strongholds of Alaska’s Iñupiat villages weren’t immune. In the 1970s, the International Whaling Commission ordered a halt to subsistence bowhead whaling because US government scientists feared there were just 1,300 of the animals left. Harry Brower Sr. and other whaling captains who’d amassed lifetimes of knowledge knew that figure was wrong.

But unlike other whaling cultures, Iñupiat whalers had the means to fight back, thanks to taxes they had collected from a nearby oil boom. With the money, communities hired Western-trained scientists to corroborate traditional knowledge. The scientists developed a new methodology that used hydrophones to count bowhead whales beneath the ice, rather than extrapolating the population based on a count of the visible bowheads passing by a single, ice-free locale. Their findings proved bowheads were far more numerous than the government had previously thought, and subsistence whaling was allowed to continue.

Elsewhere, too, whaling traditions have slowly come back to life. In 1999, the Makah harvested their first whale in over 70 years. The Chukchi were allowed to hunt again in the 1990s.

Yet few modern men knew whales as intimately as Brower. Although he eschewed some traditions—he said he never wanted his own whaling song to call a harpooned whale to the umiak, for example—Brower had other ways of communicating with whales. He believed that whales listened, and that if a whaler was selfish or disrespectful, whales would avoid him. He believed that the natural world was alive with animals’ spirits, and that the inexplicable connection he’d felt with whales could only be explained by the presence of such spirits.

And he believed that in 1986, a baby whale visited him in an Anchorage hospital to show him how future generations could maintain the centuries-long relationship between humans and whales. Before he died, he told his biographer Karen Brewster that although he believed in a Christian heaven, he personally thought he would go elsewhere. “I’m going to go join the whales,” he said. “That’s the best place, I think. … You could feed all the people for the last time.”

Perhaps Brower did become a whale and feed his people one last time. Or perhaps, through his deep understanding of whale biology and behavior, he passed down the knowledge that enabled his people to feed themselves for generations to come. Today, the spring whaling deadline he proposed based on his conversation with the baby whale is still largely observed, and bowhead whales continue to sustain Iñupiat communities, both physically and culturally.

Correction: This article has been updated to clarify the original purpose of the whaling amulet that caught Erica Hill’s attention in the Smithsonian warehouse.

Author bio: Krista Lee Langlois is an independent journalist, essayist, and “aquaphile.” She lived in the Marshall Islands in 2006 and now writes about the intersection of people and nature from a landlocked cabin outside Durango, Colorado.


Why nutritional psychiatry is the future of mental health treatment (The Conversation)

A lack of essential nutrients is known to contribute to the onset of poor mental health in people suffering from anxiety and depression, bipolar disorder, schizophrenia and ADHD. Nutritional psychiatry is a growing discipline that focuses on the use of food and supplements to provide these essential nutrients as part of an integrated or alternative treatment for mental health disorders.

But nutritional approaches for these debilitating conditions are not widely accepted by mainstream medicine. Treatment options tend to be limited to official National Institute for Health and Care Excellence (NICE) guidelines, which recommend talking therapies and antidepressants.

Use of antidepressants

Antidepressant use has more than doubled in recent years. In England, 64.7m prescriptions were issued for antidepressants in 2016 at a cost of £266.6m. This is an increase of 3.7m on the number of items prescribed in 2015 and more than double the 31m issued in 2006.

A recent Oxford University study found that antidepressants were more effective in treating depression than placebo. The study was led by Dr Andrea Cipriani, who claimed that depression is undertreated. Cipriani maintains that antidepressants are effective and that a further 1m prescriptions should be issued to people in the UK.

This approach suggests that poor mental health caused by social conditions is viewed as easily treated by simply dispensing drugs. But antidepressants are shunned by people whom they could help because of the social stigma associated with mental ill-health which leads to discrimination and exclusion.

Prescriptions for 64.7m items of antidepressants were dispensed in England in 2016, the highest level recorded by the NHS. Shutterstock

More worrying is the increase in the use of antidepressants by children and young people. In Scotland, 5,572 children under 18 were prescribed antidepressants for anxiety and depression in 2016. This figure has more than doubled since 2009/2010.

But according to British psychopharmacologist Professor David Healy, 29 clinical trials of antidepressant use in young people found no benefits at all. These trials revealed that instead of relieving symptoms of anxiety and depression, antidepressants caused children and young people to feel suicidal.

Healy also challenges their safety and effectiveness in adults. He believes that antidepressants are over-prescribed and that there is little evidence that they are safe for long-term use. Antidepressants are said to create dependency, have unpleasant side effects and cannot be relied upon to always relieve symptoms.

Nutrition and poor mental health

In developed countries such as the UK people eat a greater variety of foodstuffs than ever before – but it doesn’t follow that they are well nourished. In fact, many people do not eat enough nutrients that are essential for good brain health, opting for a diet of heavily processed food containing artificial additives and sugar.

The link between poor mental health and nutritional deficiencies has long been recognised by nutritionists working in the complementary health sector. However, psychiatrists are only now becoming aware of the benefits of using nutritional approaches to mental health, calling for their peers to support and research this new field of treatment.

It is now known that many mental health conditions are caused by inflammation in the brain which ultimately causes our brain cells to die. This inflammatory response starts in our gut and is associated with a lack of nutrients from our food such as magnesium, omega-3 fatty acids, probiotics, vitamins and minerals that are all essential for the optimum functioning of our bodies.

Recent research has shown that food supplements such as zinc, magnesium, omega 3, and vitamins B and D3 can help improve people’s mood, relieve anxiety and depression and improve the mental capacity of people with Alzheimer’s.

Magnesium is one of the most important minerals for optimal health, yet many people are lacking in it. One study found that a daily magnesium citrate supplement led to a significant improvement in depression and anxiety, regardless of age, gender or severity of depression. Improvement did not continue when the supplement was stopped.

Omega-3 fatty acids are another nutrient critical for the development and function of the central nervous system – and a lack of them has been associated with low mood, cognitive decline and poor comprehension.

Research has shown that supplements like zinc, magnesium and vitamins B and D can improve the mental capacity of people with Alzheimer’s. Shutterstock

The role of probiotics – the beneficial live bacteria in your digestive system – in improving mental health has also been explored by psychiatrists and nutritionists, who found that taking them daily was associated with a significant reduction in depression and anxiety. Vitamin B complex and zinc are other supplements found to reduce the symptoms of anxiety and depression.

Hope for the future?

These “over-the-counter” supplements are widely available in supermarkets, chemists and online health food stores, although the cost and quality may vary. For people who have not responded to prescription drugs or who cannot tolerate the side effects, nutritional intervention can offer hope for the future.

There is currently much debate over the effectiveness of antidepressants. The use of food supplements offers an alternative approach that has the potential to make a significant difference to the mental health of all age groups.

The emerging scientific evidence suggests that there should be a bigger role for nutritional psychiatry in mental health within conventional health services. If the burden of mental ill health is to be reduced, GPs and psychiatrists need to be aware of the connection between food, inflammation and mental illness.

Medical education has traditionally excluded nutritional knowledge and its association with disease. This has led to a situation where very few doctors in the UK have a proper understanding of the importance of nutrition. Nutritional interventions are thought to have little evidence to support their use to prevent or maintain well-being and so are left to dietitians, rather than doctors, to advise on.

But as the evidence mounts up, it is time for medical education to take nutrition seriously so that GPs and psychiatrists of the future know as much about its role in good health as they do about anatomy and physiology. The state of our mental health could depend on it.

The Motives Behind the Chimpanzee War, the Only One Ever Recorded Among Animals (BBC Brasil)

9 April 2018

Three chimpanzees in Gombe National Park in the 1970s

GEZA TELEKI. The rise of an ape from the north of Gombe National Park to alpha male caused tension in the chimpanzee community, above all with two rivals, Charlie and Hugh

The only documented civil war among wild chimpanzees began with a brutal murder.

It was January 1974, and a chimpanzee named Godi was feeding alone in the branches of a tree in Gombe National Park, Tanzania.

But Godi failed to notice that, as he ate, eight apes had surrounded him. “He jumped from the tree and ran, but they grabbed him,” British primatologist Richard Wrangham said in the BBC documentary The Demonic Ape. “One of them managed to grab one of his feet, another pinned him by the hand. He was held down and beaten. The attack lasted more than five minutes, and when they left him, he could barely move.”

Godi was never seen again.

The episode is known as the start of what the famed British primatologist Jane Goodall called “The Four-Year War,” a conflict that split a community of chimpanzees in Gombe and unleashed a wave of killing and violence that has never been recorded since.

A chimpanzee’s hand

GETTY IMAGES. The brutal killing of the primate Godi marked the start of the chimpanzees’ bloody “Four-Year War” in Gombe

Yet the exact motive and the cause of the split remain an “enduring mystery,” Joseph Feldblum, professor of evolutionary anthropology at Duke University in the United States, said in a statement from the institution.

Last month, Feldblum led a study, published in the scientific journal American Journal of Physical Anthropology, that reveals the story of “power, ambition and jealousy” that gave rise to the war among the primates.


Apes and humans

Feldblum has spent 25 years archiving and digitizing the notes Goodall made during her more than 55 years living in Gombe National Park.

The primatologist, who turned 84 last Tuesday, changed everything we thought we knew about chimpanzees (and about human beings) when she discovered that these apes made and used tools, had a primitive language and could understand what their peers were thinking.

But Goodall also discovered the cruelty these animals could display.

Jane Goodall with her famous stuffed toy in 2018

GETTY IMAGES. Primatologist Jane Goodall, who heads a research and conservation foundation bearing her name, followed the chimpanzee war of the 1970s from start to finish

Foram quatro anos documentando saques, surras e assassinatos entre as facções Kasakela e Kahama, que ficavam ao norte e ao sul do parque, respectivamente.

Nesse tempo, por exemplo, um terço das mortes de chimpanzés machos em Gombe foram perpetreadas pelos próprios animais.

A guerra, disse Goodall no documentário da BBC, “só fez com que os chimpanzés se parecessem ainda mais conosco do que se pensava”.

A violência foi tão excessiva e única que alguns investigadores sugeriram que ela foi provocada involuntariamente pela própria Goodall, que montou uma estação de observação no local onde os animais recebiam alimentos.

De acordo com essas teorias, “as duas comunidades de chimpanzés poderiam ter existido o tempo todo ou estavam se dissolvendo quando Goodall começou sua pesquisa, e a estação de alimentação os reuniu em uma trégua temporária até que eles se separaram novamente”, disse o comunicado da Universidade de Duke.

“But new findings from a team at Duke and Arizona State University suggest something more was going on.”

Chimpanzees fighting

GETTY IMAGES. Chimpanzees are capable of violence, but researchers say what happened between 1974 and 1978 exceeded all previous records of brutality


Friends and enemies

In the new study, the researchers analyzed the shifting alliances among 19 male chimpanzees over the seven years leading up to the war.

To do so, they drew up detailed maps of the primates’ social networks, in which males were considered friends if they were more frequently seen arriving together at the feeding station.

“Their analysis suggests that during the early years, from 1967 to 1970, the males of the original group were intermingled,” Duke said.

That is when the community began to split: while some males spent more time in the north, others spent most of their time in the south.

By 1972, socializing among the males was occurring exclusively within the Kasakela or Kahama factions.

Silhouette of a chimpanzee

GETTY IMAGES. On seeing the southern apes arrive, the northern ones “would climb the trees, and there was a lot of screaming and displays of power,” says a new study of the episode

When the groups met, they would throw branches at one another, scream, or make other shows of force.

“We would hear screams from the south and say: ‘The southern males are coming!’” recalls Anne Pusey, a professor of evolutionary anthropology at Duke University who was at Gombe with Goodall and is a co-author of the new study.

“At that point, all the northern males would climb the trees, and we would hear a lot of screaming and displays of power.”

Three suspects

Once the split between the groups had occurred, the researchers believe, the conflict arose from “a power struggle among three high-ranking males”: Humphrey, a newly crowned alpha male in the northern group, and his southern rivals, Charlie and Hugh.

A suffering chimpanzee

GETTY IMAGES. The violence among the three leading males affected the entire network of social bonds, sparing neither age nor sex

“Humphrey was big and was known to throw rocks, which was scary. He could intimidate Charlie and Hugh separately, but when they were together he kept out of their way,” Pusey says in the university’s statement.

Over four years, Humphrey’s group destroyed the southern group, and several “rebel” males died or disappeared. The larger group systematically invaded the rival territory and, whenever it found a rival chimpanzee, attacked it savagely and left it to die of its wounds.

According to the research, the availability of females was lower than normal during that period, which probably exacerbated the struggle for control of territory.

The violence, in turn, was not limited to the three rival males; it affected the primates’ entire network of social bonds, sparing neither age nor sex.

The researchers acknowledge that the lack of other similar events in the wild makes the new findings harder to compare, but the work may bring Goodall some peace.

“The situation was terrible,” said the Briton, acknowledging that her observation station may indeed have “increased the violence” among the primates.

“I think the saddest part was watching the sequence of events in which the larger community completely annihilated the smaller one and took over its territory.”

How Genetics Is Changing Our Understanding of ‘Race’ (New York Times)

Credit: Angie Wang

In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis. A classic example often cited is the inconsistent definition of “black.” In the United States, historically, a person is “black” if he has any sub-Saharan African ancestry; in Brazil, a person is not “black” if he is known to have any European ancestry. If “black” refers to different people in different contexts, how can there be any genetic basis to it?

Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.”

In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.

It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view. 

But over the years this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.

The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations. The concern is that such research, no matter how well-intentioned, is located on a slippery slope that leads to the kinds of pseudoscientific arguments about biological difference that were used in the past to try to justify the slave trade, the eugenics movement and the Nazis’ murder of six million Jews.

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.

Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases. For example, we now know that genetic factors help explain why northern Europeans are taller on average than southern Europeans, why multiple sclerosis is more common in European-Americans than in African-Americans, and why the reverse is true for end-stage kidney disease.

I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science. I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.

This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such differences, instead of sticking our heads in the sand and being caught unprepared when they are found.

To get a sense of what modern genetic research into average biological differences across populations looks like, consider an example from my own work. Beginning around 2003, I began exploring whether the population mixture that has occurred in the last few hundred years in the Americas could be leveraged to find risk factors for prostate cancer, a disease that occurs 1.7 times more often in self-identified African-Americans than in self-identified European-Americans. This disparity had not been possible to explain based on dietary and environmental differences, suggesting that genetic factors might play a role.

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.

To understand why it is so dangerous for geneticists and anthropologists to simply repeat the old consensus about human population differences, consider what kinds of voices are filling the void that our silence is creating. Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.

One of Mr. Wade’s key sources, for example, is the anthropologist Henry Harpending, who has asserted that people of sub-Saharan African ancestry have no propensity to work when they don’t have to because, he claims, they did not go through the type of natural selection for hard work in the last thousands of years that some Eurasians did. There is simply no scientific evidence to support this statement. Indeed, as 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.

Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.

At a meeting a few years later, Dr. Watson said to me and my fellow geneticist Beth Shapiro something to the effect of “When are you guys going to figure out why it is that you Jews are so much smarter than everyone else?” He asserted that Jews were high achievers because of genetic advantages conferred by thousands of years of natural selection to be scholars, and that East Asian students tended to be conformist because of selection for conformity in ancient Chinese society. (Contacted recently, Dr. Watson denied having made these statements, maintaining that they do not represent his views; Dr. Shapiro said that her recollection matched mine.)

What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes. They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.

This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent. We leave a vacuum that gets filled by pseudoscience, an outcome that is far worse than anything we could achieve by talking openly.

If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong. For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences.

For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females. The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material — a Y chromosome that males have and that females don’t, and a second X chromosome that females have and males don’t.

Most everyone accepts that the biological differences between males and females are profound. In addition to anatomical differences, men and women exhibit average differences in size and physical strength. (There are also average differences in temperament and behavior, though there are important unresolved questions about the extent to which these differences are influenced by social expectations and upbringing.)

How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences.

It is clear from the inequities that persist between women and men in our society that fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward. And if this is the case with men and women, then it is surely the case with whatever differences we may find among human populations, the great majority of which will be far less profound.

An abiding challenge for our civilization is to treat each human being as an individual and to empower all people, regardless of what hand they are dealt from the deck of life. Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.

It is important to face whatever science will reveal without prejudging the outcome and with the confidence that we can be mature enough to handle any findings. Arguing that no substantial differences among human populations are possible will only invite the racist misuse of genetics that we wish to avoid.

David Reich is a professor of genetics at Harvard and the author of the forthcoming book “Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past,” from which this article is adapted.

For Decades, Our Coverage Was Racist. To Rise Above Our Past, We Must Acknowledge It (National Geographic)

We asked a preeminent historian to investigate our coverage of people of color in the U.S. and abroad. Here’s what he found.

In a full-issue article on Australia that ran in 1916, Aboriginal Australians were called “savages” who “rank lowest in intelligence of all human beings.” PHOTOGRAPHS BY C.P. SCOTT (MAN); H.E. GREGORY (WOMAN); NATIONAL GEOGRAPHIC CREATIVE (BOTH)

This story helps launch a series about racial, ethnic, and religious groups and their changing roles in 21st-century life. The series runs through 2018 and will include coverage of Muslims, Latinos, Asian Americans, and Native Americans.


 “Cards and clay pipes amuse guests in Fairfax House’s 18th-century parlor,” reads the caption in a 1956 article on Virginia history. Although slave labor built homes featured in the article, the writer contended that they “stand for a chapter of this country’s history every American is proud to remember.” PHOTOGRAPH BY ROBERT F. SISSON AND DONALD MCBAIN, NATIONAL GEOGRAPHIC CREATIVE (RIGHT)

I’m the tenth editor of National Geographic since its founding in 1888. I’m the first woman and the first Jewish person—a member of two groups that also once faced discrimination here. It hurts to share the appalling stories from the magazine’s past. But when we decided to devote our April magazine to the topic of race, we thought we should examine our own history before turning our reportorial gaze to others.

Race is not a biological construct, as writer Elizabeth Kolbert explains in this issue, but a social one that can have devastating effects. “So many of the horrors of the past few centuries can be traced to the idea that one race is inferior to another,” she writes. “Racial distinctions continue to shape our politics, our neighborhoods, and our sense of self.”

How we present race matters. I hear from readers that National Geographic provided their first look at the world. Our explorers, scientists, photographers, and writers have taken people to places they’d never even imagined; it’s a tradition that still drives our coverage and of which we’re rightly proud. And it means we have a duty, in every story, to present accurate and authentic depictions—a duty heightened when we cover fraught issues such as race.

Photographer Frank Schreider shows men from Timor island his camera in a 1962 issue. The magazine often ran photos of “uncivilized” native people seemingly fascinated by “civilized” Westerners’ technology. PHOTOGRAPH BY FRANK AND HELEN SCHREIDER, NATIONAL GEOGRAPHIC CREATIVE

We asked John Edwin Mason to help with this examination. Mason is well positioned for the task: He’s a University of Virginia professor specializing in the history of photography and the history of Africa, a frequent crossroads of our storytelling. He dived into our archives.

What Mason found, in short, was that until the 1970s National Geographic all but ignored people of color who lived in the United States, rarely acknowledging them beyond laborers or domestic workers. Meanwhile it pictured “natives” elsewhere as exotics, famously and frequently unclothed, happy hunters, noble savages—every type of cliché.

Unlike magazines such as Life, Mason said, National Geographic did little to push its readers beyond the stereotypes ingrained in white American culture.

National Geographic of the mid-20th century was known for its glamorous depictions of Pacific islanders. Tarita Teriipaia, from Bora-Bora, was pictured in July 1962—the same year she appeared opposite Marlon Brando in the movie Mutiny on the Bounty. PHOTOGRAPH BY LUIS MARDEN, NATIONAL GEOGRAPHIC CREATIVE (RIGHT)

“Americans got ideas about the world from Tarzan movies and crude racist caricatures,” he said. “Segregation was the way it was. National Geographic wasn’t teaching as much as reinforcing messages they already received and doing so in a magazine that had tremendous authority. National Geographic comes into existence at the height of colonialism, and the world was divided into the colonizers and the colonized. That was a color line, and National Geographic was reflecting that view of the world.”

Some of what you find in our archives leaves you speechless, like a 1916 story about Australia. Underneath photos of two Aboriginal people, the caption reads: “South Australian Blackfellows: These savages rank lowest in intelligence of all human beings.”

Questions arise not just from what’s in the magazine, but what isn’t. Mason compared two stories we did about South Africa, one in 1962, the other in 1977. The 1962 story was printed two and a half years after the massacre of 69 black South Africans by police in Sharpeville, many shot in the back as they fled. The brutality of the killings shocked the world.

An article reporting on apartheid South Africa in 1977 shows Winnie Mandela, a founder of the Black Parents’ Association and wife of Nelson. She was one of some 150 people the government prohibited from leaving their towns, speaking to the press, and talking to more than two people at a time. PHOTOGRAPH BY JAMES P. BLAIR, NATIONAL GEOGRAPHIC CREATIVE

“National Geographic’s story barely mentions any problems,” Mason said. “There are no voices of black South Africans. That absence is as important as what is in there. The only black people are doing exotic dances … servants or workers. It’s bizarre, actually, to consider what the editors, writers, and photographers had to consciously not see.”

Contrast that with the piece in 1977, in the wake of the U.S. civil rights era: “It’s not a perfect article, but it acknowledges the oppression,” Mason said. “Black people are pictured. Opposition leaders are pictured. It’s a very different article.”

Fast-forward to a 2015 story about Haiti, when we gave cameras to young Haitians and asked them to document the reality of their world. “The images by Haitians are really, really important,” Mason said, and would have been “unthinkable” in our past. So would our coverage now of ethnic and religious conflicts, evolving gender norms, the realities of today’s Africa, and much more.

“I buy bread from her every day,” Haitian photographer Smith Neuvieme said of fellow islander Manuela Clermont. He made her the center of this image, published in 2015. PHOTOGRAPH BY SMITH NEUVIEME, FOTOKONBIT

Mason also uncovered a string of oddities—photos of “the native person fascinated by Western technology. It really creates this us-and-them dichotomy between the civilized and the uncivilized.” And then there’s the excess of pictures of beautiful Pacific-island women.

“If I were talking to my students about the period until after the 1960s, I would say, ‘Be cautious about what you think you are learning here,’ ” he said. “At the same time, you acknowledge the strengths National Geographic had even in this period, to take people out into the world to see things we’ve never seen before. It’s possible to say that a magazine can open people’s eyes at the same time it closes them.”

April 4 marks the 50th anniversary of the assassination of Martin Luther King, Jr. It’s a worthy moment to step back, to take stock of where we are on race. It’s also a conversation that is changing in real time: In two years, for the first time in U.S. history, less than half the children in the nation will be white. So let’s talk about what’s working when it comes to race, and what isn’t. Let’s examine why we continue to segregate along racial lines and how we can build inclusive communities. Let’s confront today’s shameful use of racism as a political strategy and prove we are better than this.

For us this issue also provided an important opportunity to look at our own efforts to illuminate the human journey, a core part of our mission for 130 years. I want a future editor of National Geographic to look back at our coverage with pride—not only about the stories we decided to tell and how we told them but about the diverse group of writers, editors, and photographers behind the work.

We hope you will join us in this exploration of race, beginning this month and continuing throughout the year. Sometimes these stories, like parts of our own history, are not easy to read. But as Michele Norris writes in this issue, “It’s hard for an individual—or a country—to evolve past discomfort if the source of the anxiety is only discussed in hushed tones.”

The Africans Who Proposed Enlightenment Ideas Before Locke and Kant (Ilustríssima, Folha de S.Paulo)

Illustration by Fabio Zimbres


SUMMARY The loftiest ideals of Locke, Hume, and Kant were proposed more than a century before them by Zera Yacob, an Ethiopian who lived in a cave. The Ghanaian Anton Amo used a notion from German philosophy before it was officially recorded. The author argues that both deserve a place of honor among the thinkers of the Enlightenment.


The ideals of the Enlightenment are the foundation of our democracies and universities in the 21st century: belief in reason, in science, in skepticism, in secularism, and in equality. Indeed, no other period compares to the Age of Enlightenment.

Antiquity is inspiring, but it is a world away from modern societies. The Middle Ages were more reasonable than their reputation suggests, but they were still medieval. The Renaissance was glorious, but largely thanks to its outcome: the Enlightenment. Romanticism came as a reaction to the age of reason, but the ideals of modern states are not expressed in terms of romanticism and emotion.

According to the story most often told, the Enlightenment began with René Descartes’s “Discourse on the Method” (1637), continued for about a century and a half with John Locke, Isaac Newton, David Hume, Voltaire, and Kant, and ended with the French Revolution in 1789, or perhaps with the Terror, in 1793.

But what if the story is wrong? What if the Enlightenment can be traced to places and thinkers we tend to ignore? Such questions have haunted me since I stumbled upon the work of a 17th-century Ethiopian philosopher: Zera Yacob (1599-1692), also spelled Zära Yaqob.

Yacob was born into a poor family on a farm near Axum, the legendary ancient capital of northern Ethiopia. As a student he impressed his teachers and was sent to a new school to study rhetoric (“siwasiw” in Ge’ez, the local language), poetry, and critical thinking (“qiné”) for four years.

He then studied the Bible for ten years at another school, receiving instruction from Catholics and Copts as well as in the Orthodox Christian tradition, the country’s majority faith.

In the 1620s, a Portuguese Jesuit persuaded King Susenyos to convert to Catholicism, which soon became the official religion of Ethiopia. A persecution of freethinkers followed, intensifying from 1630 onward. Yacob, who was teaching in the Axum region at the time, had declared that no religion was more right than any other, and his enemies denounced him to the king.

Yacob fled, taking only a little gold and the Psalms of David. He traveled south, to the region of Shewa, where he came upon the Tekezé River.

There he found an uninhabited area with a “beautiful cave” at the head of a valley. He built a stone wall and lived in that isolated spot to “front only the essential facts of life,” as Henry David Thoreau would describe a similarly solitary life two centuries later in “Walden” (1854).

For two years, until the death of the king in September 1632, Yacob remained in the cave as a hermit, leaving only to fetch food at the nearest market. In the cave, he worked out his new rationalist philosophy.

He believed in the primacy of reason and asserted that all human beings, men and women, are created equal. Yacob argued against slavery, criticized all established religions and doctrines, and combined these views with his personal belief in a divine creator, asserting that the existence of an order in the world makes that the most rational option.

In short: many of the loftiest ideals of the Enlightenment were conceived and summarized by a man who worked alone in an Ethiopian cave from 1630 to 1632.


Yacob’s reason-based philosophy is presented in his main work, “Hatäta” (inquiry). The book was written in 1667 at the insistence of his disciple, Walda Heywat, who himself wrote a more practically oriented “Hatäta.”

Today, 350 years later, it is hard to find a copy of Yacob’s work. The only English translation was made in 1976 by Claude Sumner, a Canadian professor and priest. He published it as part of a five-volume work on Ethiopian philosophy, brought out by the not-at-all-commercial Commercial Printing Press of Addis Ababa.

O livro foi traduzido ao alemão e, no ano passado, ao norueguês, mas ainda é basicamente impossível ter acesso a uma versão em inglês.

A filosofia não era novidade na Etiópia antes de Yacob. Por volta de 1510, “The Book of the Wise Philosophers” (o livro dos filósofos sábios) foi traduzido e adaptado ao etíope pelo egípcio Abba Mikael. Trata-se de uma coletânea de ditados de filósofos gregos pré-socráticos, Platão e Aristóteles por meio dos diálogos neoplatônicos, e também foi influenciado pela filosofia arábica e as discussões etíopes.

Em sua “Hatäta”, Yacob critica seus contemporâneos por não pensarem de modo independente e aceitarem as palavras de astrólogos e videntes só porque seus predecessores o faziam. Em contraste, ele recomenda uma investigação baseada na razão e na racionalidade científica, considerando que todo ser humano nasce dotado de inteligência e possui igual valor.

Far away, but wrestling with similar questions, was the Frenchman Descartes (1596-1650). One important philosophical difference between them is that the Catholic Descartes explicitly criticized infidels and atheists in his "Meditations" (1641).

That outlook is echoed in Locke's "Letter Concerning Toleration" (1689), which holds that atheists are not to be tolerated.

Descartes's "Meditations" was dedicated "to the Dean and Doctors of the Sacred Faculty of Theology in Paris," and its premise was "to accept by means of faith the fact that the human soul does not die with the body, and that God exists."

Yacob, by contrast, proposes a much more agnostic, secular, and inquisitive method, one that also reflects an openness to atheistic thought. The fourth chapter of the "Hatäta" opens with a radical question: "Is everything that is written in the Holy Scriptures true?" He goes on to note that each of the different religions claims that its faith is the true one:

"Indeed, each of them says: 'My faith is the right one, and those who believe in another faith believe in falsehood and are enemies of God.' (…) Just as my faith seems true to me, so another finds his own faith true; but truth is one."

He thus sets off an enlightened discourse on the subjectivity of religion, while continuing to believe in some kind of universal creator. His discussion of the existence of God is more open than Descartes's and perhaps more accessible to today's readers, as when he brings in existentialist perspectives:

"Who was it that gave me an ear with which to hear, who created me as a rational being, and how did I come into this world? Where do I come from? Had I lived before the creator of the world, I would have known the beginning of my life and of my consciousness of myself. Who created me?"


In chapter five, Yacob applies rational inquiry to the various religious laws. He criticizes Christianity, Islam, Judaism, and the Indian religions in equal measure.

He points out, for example, that the creator, in his wisdom, made blood flow monthly from a woman's womb so that she can bear children. Thus he concludes that the law of Moses, under which menstruating women are impure, goes against nature and the creator, since it "constitutes an obstacle to marriage and to the entire life of a woman, harms the law of mutual help, hinders the raising of children, and destroys love."

In this way he brings the perspectives of solidarity, of women, and of affection into his philosophical argument. And he himself lived by these ideals.

Cover illustration of Ilustríssima, by Fabio Zimbres

After leaving the cave, he proposed to a poor girl named Hirut, a servant of a wealthy family. Her master held that a maidservant was no match for a learned man, but Yacob's view prevailed. Once the union was consummated, he declared that she should no longer be a servant but his equal, because "husband and wife are equal in marriage."

In contrast to these positions, Kant (1724-1804) wrote a century later in "Observations on the Feeling of the Beautiful and Sublime" (1764): "A woman is little troubled by the fact that she does not possess certain understandings."

And in the German's essays on ethics we read that "a man's desire for a woman is not directed at her as a human being; on the contrary, the woman's humanity is of no concern to him; the only object of his desire is her sex."

Yacob saw women in a completely different light: as the philosopher's intellectual peer.

He was also more enlightened than his Enlightenment peers when it came to slavery. In chapter five, Yacob attacks the idea that "we may go out and buy a man as if he were an animal." He thus advances a universal argument against discrimination:

"All men are equal in the presence of God; and all are intelligent, since they are his creatures; he did not assign one people to life and another to death, one to mercy and another to judgment. Our reason teaches us that this kind of discrimination cannot exist."

The words "all men are equal" were written decades before Locke (1632-1704), the father of liberalism, took up his pen.

And Locke's social contract theory did not apply to everyone in practice: he served as secretary during the drafting of the "Fundamental Constitutions of Carolina" (1669), which granted white men absolute power over their African slaves. The Englishman himself invested in the transatlantic slave trade.

Compared with that of his philosophical peers, then, Yacob's philosophy often reads like the very epitome of the ideals we generally attribute to the Enlightenment.


A few months after reading Yacob's work, I finally got hold of another rare book: a translation of the collected writings of the philosopher Anton Amo (c. 1703-55), who was born and died in Ghana.

Amo studied and taught for two decades at Germany's foremost universities (such as Halle and Jena), writing in Latin. Today, according to the World Library Catalogue, only a handful of copies of his "Antonius Guilielmus Amo Afer of Axim in Ghana" are available in libraries around the world.

The Ghanaian was born a century after Yacob. He is said to have been taken from the Akan people and the coastal town of Axim as a small child, possibly to be sold as a slave, and brought via Amsterdam to the court of Duke Anton Ulrich of Braunschweig-Wolfenbüttel, which was frequently visited by the polymath G. W. Leibniz (1646-1716).

Baptized in 1707, Amo received an education of the highest standard, learning Hebrew, Greek, Latin, French, and German, and he probably retained some of his mother tongue, Nzema.

He became a respected figure in academic circles. In Carl Günther Ludovici's book on the Enlightenment thinker Christian Wolff (1679-1754), a follower of Leibniz and founder of several academic disciplines in Germany, Amo is described as one of the most prominent Wolffians.

In the preface to Amo's "On the Impassivity of the Human Mind" (1734), the rector of the University of Wittenberg, Johannes Gottfried Kraus, salutes the author's vast knowledge, places his contribution to the German Enlightenment in historical context, and underscores the African legacy of the European Renaissance:

"When the Moors crossed over from Africa into Spain, they brought with them the knowledge of the thinkers of antiquity and gave much assistance to the development of letters, which were little by little emerging from the darkness."

That these words issued from the heart of Germany in the spring of 1733 is a reminder that Amo was not the only African to achieve success in eighteenth-century Europe.

Around the same time, Abram Petrovich Gannibal (1696-1781), likewise kidnapped and taken from sub-Saharan Africa, was becoming a general under Tsar Peter the Great of Russia. Gannibal's great-grandson would become Russia's national poet, Alexander Pushkin. And the French writer Alexandre Dumas (1802-70) was the grandson of an enslaved African woman and the son of a black aristocratic general born in Haiti.

Nor was Amo the only one bringing diversity and cosmopolitanism to Halle in the 1720s and 1730s. Several highly talented Jewish students studied at the university, and the Arab professor Salomon Negri of Damascus and the Indian Soltan Gün Achmet of Ahmedabad also passed through.


In his thesis, Amo wrote explicitly that there were theologies other than the Christian one, including among them those of the Turks and of the "heathens."

He discussed these questions in his 1729 dissertation, "The Rights of Moors in Europe." The work cannot be found today, but Halle's weekly journal of November 1729 carries a short article on Amo's public disputation. According to that text, the Ghanaian presented arguments against slavery, appealing to Roman law, tradition, and reason.

Did Amo mount Europe's first legal challenge to slavery? We can at least see in it an Enlightenment argument in favor of universal suffrage, like the one Yacob had proposed a hundred years earlier. But these non-discriminatory views seem to have gone unnoticed by the leading Enlightenment thinkers of the eighteenth century.

David Hume (1711-76), for example, wrote: "I am apt to suspect the Negroes, and in general all the other species of men (for there are four or five different kinds), to be naturally inferior to the whites." And he added: "There never was a civilized nation of any other complexion than white, nor even any individual eminent either in action or speculation."

Kant took Hume's argument further, stressing that the fundamental difference between blacks and whites "appears to be as great in regard to mental capacities as in color," before concluding, in the text of his physical geography course: "Humanity is at its greatest perfection in the race of the whites."

In France, the most celebrated Enlightenment thinker, Voltaire (1694-1778), not only described Jews in antisemitic terms, as when he wrote that "they are, all of them, born with raging fanaticism in their hearts"; in his essay on universal history (1756), he asserted that if the intelligence of Africans "is not of another species than ours, it is greatly inferior."

Like Locke, Voltaire invested money in the slave trade.


Amo's philosophy is more theoretical than Yacob's, but the two share an Enlightenment view of reason that treats all humans as equals.

His work engages deeply with the questions of his day, as can be seen in his best-known book, "On the Impassivity of the Human Mind," built by a method of logical deduction using strict arguments, apparently along the lines of his earlier legal dissertation. Here he takes on Cartesian dualism, the idea that there is an absolute difference of substance between mind and body.

At times Amo seems to oppose Descartes, as the contemporary philosopher Kwasi Wiredu observes. Wiredu argues that Amo rejected the Cartesian dualism of mind and body, favoring instead the Akan metaphysics and the Nzema language of his early childhood, according to which we feel pain with our flesh ("honem"), not with the mind ("adwene").

At the same time, Amo says he will both defend and attack Descartes's view that the soul (the mind) is capable of acting and suffering together with the body. He writes: "In response to these words, we urge caution and disagree: we admit that the mind acts together with the body by the mediation of a natural union. But we deny that it suffers together with the body."

Amo argues that Descartes's claims on these matters contradict the French philosopher's own position. He concludes his thesis by saying that we must avoid confusing the things that belong to the body with those of the mind, for whatever operates in the mind must be attributed to the mind alone.

Perhaps the truth is what the philosopher Justin E. H. Smith of the University of Paris suggests in "Nature, Human Nature, and Human Difference" (2015): "Far from rejecting Cartesian dualism, Amo on the contrary proposes a radicalized version of it."

But could Wiredu and Smith both be right? Could it be, for example, that traditional Akan philosophy and the Nzema language contained a Cartesian distinction between body and mind more precise than Descartes's own, a way of thinking that Amo then carried into European philosophy?

It may be too early to tell, since a critical edition of Amo's works still awaits publication, possibly by Oxford University Press.


In Amo's most profound work, "Treatise on the Art of Philosophising Soberly and Accurately" (1738), he seems to anticipate Kant. The book treats the intentions of our mind and human actions as natural, rational, or in accordance with a norm.

In the first chapter, writing in Latin, Amo argues that "everything knowable is either a thing in itself, or a sensation, or an operation of the mind."

He then elaborates, saying that "cognition occurs with the thing in itself," and affirms: "Real learning is the cognition of things in themselves, and thus has its basis in the certainty of the thing known."

His original text reads "omne cognoscibile aut res ipsa," using the Latin notion of "res ipsa" for the "thing in itself."

Today Kant is known for his concept of the "thing in itself" ("das Ding an sich") in the "Critique of Pure Reason" (1787), and for his argument that we cannot know the thing beyond our mental representation of it.

It is well known, however, that this was not the first use of the term in Enlightenment philosophy. As the Merriam-Webster dictionary notes in its entry for "thing-in-itself": "First known use: 1739." Even so, that was two years after Amo submitted his major work at Wittenberg in 1737.

In light of the example of these two Enlightenment philosophers, Zera Yacob and Anton Amo, it may be necessary to rethink the Age of Reason in the disciplines of philosophy and the history of ideas.

In the discipline of history, new studies have shown that the most successful revolution to spring from the ideas of liberty, equality, and fraternity took place in Haiti, not in France. The Haitian Revolution (1791-1804) and the ideas of Toussaint L'Ouverture (1743-1803) paved the way for the country's independence, its new constitution, and the abolition of slavery.

In "Les Vengeurs du Nouveau Monde" (The Avengers of the New World, 2004), Laurent Dubois concludes that the events in Haiti were "the most concrete expression of the idea that the rights proclaimed in the 1789 Declaration of the Rights of Man and of the Citizen were indeed universal."

Along these lines, we may ask whether Yacob and Amo will ever be raised to the place they deserve among the philosophers of the Age of Enlightenment.


This text was originally published on the Aeon website.

DAG HERBJORNSRUD, 46, is a historian of ideas and the founder of SGOKI (the Center for Global and Comparative History of Ideas) in Oslo.

CLARA ALLAIN is a translator.

FABIO ZIMBRES, 57, is a cartoonist, designer, and visual artist.

LSE Impact Blog – “Six academic writing habits that will boost productivity” (plus other links) — Progressive Geographies

LSE Impact Blog – "Six academic writing habits that will boost productivity". I'm not sure about the notion of 'productivity', but there is some good advice here. Here are the headlines: they "time-block" their writing in advance; they set themselves artificial deadlines; they deliberately seek "flow" (but don't push themselves if they can't find it) […]


Climate Change – Catastrophic or Linear Slow Progression? (Armstrong Economics)

Indeed, science was turned on its head after the discovery in 1772 near Vilui, Siberia, of an intact frozen woolly rhinoceros, followed by the more famous discovery of a frozen mammoth in 1787. You may be shocked, but these discoveries of frozen animals with grass still in their stomachs set in motion the two schools of thought (catastrophic versus gradual change), since the evidence implied you could be eating lunch and suddenly find yourself frozen, only to be discovered by posterity.


The discovery of the woolly rhinoceros in 1772, and then of frozen mammoths, sparked the imagination that things were not linear after all. These major discoveries truly contributed to the "Age of Enlightenment," when there was a burst of knowledge erupting in every field of inquiry. Such finds of frozen mammoths in Siberia continue to this day, and they have challenged theories on both sides of this debate to explain such catastrophic events. The frozen animals of Siberia recall the casts of victims buried alive by the volcanic eruption of AD 79 at Pompeii in Roman Italy: animals can be grazing and then suddenly freeze. That climate change took place long before man invented the combustion engine.

Even the field of geology began to host great debates over whether the earth simply burst into catastrophic convulsions, making the planet cyclical rather than linear. This view of sequential destructive upheavals at irregular intervals, or cycles, emerged during the 1700s. It was perhaps best expressed by a forgotten contributor to human knowledge, George Hoggart Toulmin, in his rare 1785 book, "The Eternity of the World":

"… convulsions and revolutions violent beyond our experience or conception, yet unequal to the destruction of the globe, or the whole of the human species, have both existed and will again exist … [terminating] … an astonishing succession of ages."

Ibid., pp. 3, 110


In 1832, Professor A. Bernhardi argued that the north polar ice cap had once extended into the plains of Germany. To support this theory, he pointed to huge boulders, since known as "erratics," which he suggested had been pushed along by the advancing ice. This was a shocking theory, for it was certainly a nonlinear view of natural history; Bernhardi was thinking outside the box. In natural science, however, people listen to and review theories, unlike in social science, where theories are ignored if they challenge what people want to believe. In 1834, Johann von Charpentier (1786-1855) argued that deep grooves cut into Alpine rock showed, as Karl Schimper also concluded, that they had been caused by an advancing ice age.

This body of knowledge has been completely ignored by the global warming/climate change religious cult. They know nothing about nature or cycles, and they are ignorant even of the discovery of these ancient creatures frozen with food still in their mouths. They can explain neither these events nor the vast body of knowledge written by people who actually did research instead of trying to cloak an agenda in pretend science.

Glaciologists have their own word, jökulhlaup (from Icelandic), for the spectacular outbursts that occur when water builds up behind a glacier and then breaks loose. An example was the 1922 jökulhlaup in Iceland: some seven cubic kilometers of water, melted by a volcano under a glacier, rushed out in a few days. Still grander, almost unimaginable, events were the floods that swept across Washington state toward the end of the last ice age, when a vast lake dammed behind a glacier broke loose. Catastrophic geologic events are not generally part of the uniformitarian geologist's thinking. Rather, the normal view tends to be linear, including events that are local or regional in size.

One example of a regional event is the 15,000 square miles of the Channeled Scablands in eastern Washington. Initially, this spectacular erosion was thought to be the product of slow, gradual processes. In 1923, J Harlen Bretz presented a paper to the Geological Society of America suggesting the Scablands had been eroded catastrophically. During the 1940s, after decades of arguing, geologists admitted that high ridges in the Scablands were the equivalent of the little ripples one sees in mud on a streambed, magnified ten thousand times. Finally, by the 1950s, glaciologists had grown accustomed to thinking about catastrophic regional floods. The Scablands are now accepted to have been catastrophically eroded by the "Spokane Flood," which resulted from the breaching of the ice dam that had created glacial Lake Missoula. The United States Geological Survey now estimates the flood released 500 cubic miles of water, which drained in as little as 48 hours. That rush of water gouged out millions of tons of solid rock.
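The scale of that release can be checked with a quick unit conversion. A rough sketch, using only the 500-cubic-mile volume and 48-hour window quoted above and the standard cubic-mile-to-cubic-kilometer factor:

```python
# Rough average discharge implied by the Spokane Flood figures quoted above.
CUBIC_MILE_TO_KM3 = 4.168  # 1 cubic mile is about 4.168 cubic kilometers

volume_m3 = 500 * CUBIC_MILE_TO_KM3 * 1e9   # 500 cubic miles, in cubic meters
duration_s = 48 * 3600                      # 48 hours, in seconds

discharge = volume_m3 / duration_s          # average flow rate in m³/s
print(f"{discharge:,.0f} m³/s")             # on the order of 12 million m³/s
```

For comparison, that is thousands of times the typical flow of the Mississippi River, which is why a 48-hour drainage implies catastrophic rather than gradual erosion.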

When Mount St. Helens erupted in 1980, this too produced a catastrophic process, whereby two hundred million cubic yards of material was deposited by volcanic flows at the base of the mountain in just a matter of hours. Then, less than two years later, a minor eruption created a mudflow, which carved channels through the recently deposited material. These channels, 1/40th the size of the Grand Canyon, exposed flat segments between the catastrophically deposited layers. This is what we see between the layers exposed in the walls of the Grand Canyon. What is clear is that these events were relatively minor compared to a global flood. The eruption of Mount St. Helens, for example, contained only 0.27 cubic miles of material, while other eruptions have produced as much as 950 cubic miles, roughly 3,500 times the size of Mount St. Helens!
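The size comparison above is simple arithmetic on the two volumes quoted in the text:

```python
# Ratio of the largest quoted eruption volume to Mount St. Helens (1980).
st_helens_mi3 = 0.27   # cubic miles of material, Mount St. Helens
largest_mi3 = 950.0    # cubic miles, largest eruptions cited in the text

ratio = largest_mi3 / st_helens_mi3
print(round(ratio))    # ≈ 3519
```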

With respect to the Grand Canyon, the specific geologic processes and timing of its formation have always sparked lively debate among geologists. The general scientific consensus, updated at a 2010 conference, maintains that the Colorado River began carving the Grand Canyon 5 million to 6 million years ago. This thinking is still linear and by no means catastrophic: the canyon is believed to have been gradually eroded. However, there is an example of cyclical behavior in nature demonstrating that water can very rapidly erode even solid rock. It took place in the Grand Canyon region on June 28th, 1983, when an overflow of Lake Powell required the use of the Glen Canyon Dam's 40-foot-diameter spillway tunnels for the first time. As the volume of water increased, the entire dam started to vibrate and large boulders spewed from one of the spillways. The spillway was immediately shut down, and an inspection revealed that catastrophic erosion had cut through the three-foot-thick reinforced concrete walls and eroded a hole 40 feet wide, 32 feet deep, and 150 feet long in the sandstone beneath the dam. Nobody had thought such rapid catastrophic erosion was even possible.

Some have speculated that the end of the ice age released a flood of water that had been contained by an ice dam. As with the Scablands, it is possible that a sudden catastrophic release of water originally carved the Grand Canyon. Both the formation of the Scablands and the evidence of how Mount St. Helens unfolded may support the catastrophic formation of such features rather than nice, slow, linear processes.

Then there is the biblical account of the Great Flood and Noah. Noah is also considered a prophet of Islam. Darren Aronofsky's film Noah was based on the story in Genesis. Some Christians were angry because the film strayed from biblical Scripture, while Muslim-majority countries banned Noah from screening in theaters because Noah is a prophet of God in the Koran, and they considered it blasphemous to make a film about a prophet. Many countries banned the film entirely.

The story of Noah predates the Bible. The legend of the Great Flood is rooted in the ancient civilizations of Mesopotamia. The Sumerian Epic of Gilgamesh, believed to be perhaps the oldest written tale on Earth, dates back nearly 5,000 years. Here too we find an account of a great sage, Utnapishtim, who is warned of an imminent flood to be unleashed by wrathful gods. He builds a vast circular boat, reinforced with tar and pitch, and carries aboard his relatives, grain, and animals. After enduring days of storms, Utnapishtim, like Noah in Genesis, releases a bird in search of dry land. And since there is evidence of survivors in different parts of the world, it is only logical that there should be more than one such account.

Archaeologists generally agree that there was a historical deluge between 5,000 and 7,000 years ago which hit lands ranging from the Black Sea to what many call the cradle of civilization, which was the floodplain between the Tigris and Euphrates rivers. The translation of ancient cuneiform tablets in the 19th century confirmed the Mesopotamian Great Flood myth as an antecedent of the Noah story in the Bible.

The problem was the question of just how "great" the Great Flood was. Was it regional or worldwide? The stories of the Great Flood in Western culture clearly date back before the Bible. The region implicated has long been considered to be the Black Sea, and it has been suggested that the water broke through the land near Istanbul and flooded a fertile valley on the other side, much as we just saw in the Scablands. Robert Ballard, one of the world's best-known underwater archaeologists, who found the Titanic, set out to test that theory by searching for an underwater civilization. He discovered that some four hundred feet below the surface there was an ancient shoreline, proving that a catastrophic event did happen in the Black Sea. By carbon dating shells found along the underwater shoreline, Ballard dated the event to around 5,000 BC, which may be roughly when Noah's flood occurred.

Given that the entire Earth could not have been submerged for 40 days and 40 nights (there is no way for that much water to simply vanish), we are probably looking at a Great Flood that was, at the very least, regional. However, tales of the Great Flood spring from many other sources. Various ancient cultures have their own legends of a Great Flood and salvation. According to Vedic lore, a fish tells the mythic Indian king Manu of a Great Flood that will wipe out humanity. In turn, Manu builds a ship to withstand the epic rains and is later led to a mountaintop by the same fish.

We also find an Aztec story that tells of a devout couple hiding in the hollow of a vast tree with two ears of corn as divine storms drown the wicked of the land. Creation myths from Egypt to Scandinavia also involve tidal floods of all sorts of substances purging and remaking the earth. That we have Great Flood stories from India is no real surprise, since there was contact between the Middle East and India throughout recorded history. The Aztec story lacks the ship but still features the punishment of the wicked, and here there was certainly no direct contact. There is, however, evidence of cocaine use in ancient Egypt, implying some trade route, probably by island-hopping across the Pacific to the shores of India and on to Egypt. We obviously cannot rule out that the story of the Great Flood made it even to South America.

Then again, there is the story of Atlantis, the island that sank beneath the sea. The Atlantic Ocean covers approximately one-fifth of Earth's surface and is second in size only to the Pacific Ocean. The ocean's name, derived from Greek mythology, means the "Sea of Atlas." The origins of names often provide interesting clues as well. For example, New Jersey is the English translation of the Latin Nova Caesarea, which appeared even on the colonial coins of the 18th century. Hence, the state of New Jersey is named after the island of Jersey, which in turn was named in honor of Julius Caesar. So we actually have an American state named after the man who changed the world on a par with Alexander the Great, for whom Alexandria, Virginia, with its famous veterans' cemetery where John F. Kennedy is buried, is named.

So here the Atlantic Ocean is named after Atlas and the story of Atlantis. The original story of Atlantis comes to us from two Socratic dialogues called Timaeus and Critias, both written about 360 BC by the Greek philosopher Plato. According to the dialogues, Socrates asked three men to meet him: Timaeus of Locri, Hermocrates of Syracuse, and Critias of Athens. Socrates asked the men to tell him stories about how ancient Athens interacted with other states. Critias was the first to tell the story. Critias explained how his grandfather had met with the Athenian lawgiver Solon, who had been to Egypt where priests told the Egyptian story about Atlantis. According to the Egyptians, Solon was told that there was a mighty power based on an island in the Atlantic Ocean. This empire was called Atlantis and it ruled over several other islands and parts of the continents of Africa and Europe.

Atlantis was arranged in concentric rings of alternating water and land. The soil was rich and the engineers were technically advanced. The architecture was said to be extravagant with baths, harbor installations, and barracks. The central plain outside the city was constructed with canals and an elaborate irrigation system. Atlantis was ruled by kings but also had a civil administration. Its military was well organized. Their religious rituals were similar to that of Athens with bull-baiting, sacrifice, and prayer.

Plato told us about the metals found in Atlantis, namely gold, silver, copper, tin, and the mysterious Orichalcum. Plato said that the city walls were plated with Orichalcum (a brass). This was a rare alloy back then, found both in Crete and in the Andes of South America. An ancient shipwreck discovered off the coast of Sicily in 2015 contained 39 ingots of Orichalcum, and many claimed this proved the story of Atlantis. Orichalcum was believed to have been a gold/copper alloy that was cheaper than gold but twice the value of copper. In reality, Orichalcum was a copper-tin or copper-zinc brass. In Virgil's Aeneid, the breastplate of Turnus is described as "stiff with gold and white orichalc."

The monetary reform of Augustus in 23 BC reintroduced bronze coinage, which had vanished after 84 BC. Here we see the introduction of Orichalcum for the Roman sestertius and the dupondius, while the Roman as was struck in nearly pure copper. So, about 300 years after Plato, we do see Orichalcum introduced into the monetary system of Rome. It is clear that Orichalcum was rare at the time Plato wrote, which makes this similar to the stories of America claiming there was so much gold that the streets were paved with it.

As the story is told, Atlantis was located in the Atlantic Ocean. Bronze Age anchors have been discovered at the Gates of Hercules (the Straits of Gibraltar), and many people proclaimed this proved Atlantis was real. However, what these proponents fail to take into account is the Minoans. The Minoans were perhaps the first international economy. They traded far and wide, even with Britain, seeking tin to make bronze, hence the Bronze Age. Theirs was a Bronze Age civilization that arose on the island of Crete and flourished from approximately the 27th century BC to the 15th century BC, nearly 1,200 years. Their trading range and colonization extended to Spain, Egypt, Israel (Canaan), Syria (the Levant), Greece, Rhodes, and of course Turkey (Anatolia). Many other cultures referred to them as the people from the islands in the middle of the sea. However, the Minoans had no mineral deposits. They lacked gold and silver, and even the capacity for large-scale copper mining, though they appear to have had copper mines in colonized cities in Anatolia (Turkey). What has survived are examples of copper ingots that served as MONEY in trade. Keep in mind that gold at this point was rare, too rare to truly serve as MONEY; it is found largely as jewelry in the tombs of royal dignitaries.

The Bronze Age emerged at different times globally, appearing in Greece and China around 3,000BC, but it came late to Britain, reaching there about 1900BC. It is known that copper emerged as a valuable tool in Anatolia (Turkey) as early as 6,500BC, where it began to replace stone in the creation of tools. The development of copper casting also appears to have aided the urbanization of man in Mesopotamia. By 3,000BC, copper was in wide use throughout the Middle East and started to move up into Europe. Copper in its pure state appears first; tin was eventually added, creating actual bronze – a bronze sword would break a copper sword. It was this addition of tin that really propelled the transition from copper to bronze, and the tin was coming from England, where vast deposits existed at Cornwall. We know that the Minoans traveled into the Atlantic for trade. Anchors are not conclusive evidence of Atlantis.

As the legend unfolds, Atlantis waged an unprovoked imperialistic war on the remainder of Asia and Europe. When Atlantis attacked, Athens showed its excellence as the leader of the Greeks, the much smaller city-state being the only power to stand against Atlantis. Alone, Athens triumphed over the invading Atlantean forces, defeating the enemy, preventing the free from being enslaved, and freeing those who had been enslaved. This part may certainly be embellished and remains doubtful at best. However, following this battle, there were violent earthquakes and floods; Atlantis sank into the sea, and all the Athenian warriors were swallowed up by the earth. This appears to be almost certainly a fiction based on some ancient political realities. Still, some have argued that the explosive disappearance of an island is a reference to the Minoan eruption of Santorini. The story of Atlantis does closely correlate with Plato’s notions in The Republic examining the deteriorating cycle of life in a state.


There have been theories that Atlantis was the Azores, and still others argue it was actually South America, which would explain to some extent the cocaine mummies in Egypt. Yet despite all these theories, when there is an ancient story, despite embellishment, there is often a grain of truth hidden deep within. In this case, Atlantis may not have completely submerged; it could have partially submerged in an earthquake that at least some people survived. Survivors could have made it to either the Americas or to Africa/Europe. What is clear is that a sudden event could have sent a tsunami into the Mediterranean, which then broke the land mass at Istanbul and flooded the valley below, transforming this region into the Black Sea and becoming the story of Noah.

We also have evidence which has surfaced that the Earth was struck by a comet around 12,800 years ago. Scientific American has published that sediments from six sites across North America—Murray Springs, Ariz.; Bull Creek, Okla.; Gainey, Mich.; Topper, S.C.; Lake Hind, Manitoba; and Chobot, Alberta—have yielded tiny diamonds, which only occur in sediment exposed to extreme temperatures and pressures. The evidence implies that the Earth moved into an Ice Age, killing off large mammals and setting the course for Global Cooling for the next 1,300 years. This may indeed explain the catastrophic freezing of Woolly Mammoths in Siberia. Such an event could also have been responsible for the legend of Atlantis, with the survivors migrating and taking their stories with them.

There is also evidence surfacing from stone carvings at one of the oldest recorded sites, located in Anatolia (Turkey). Using a computer programme to show where the constellations would have appeared above Turkey thousands of years ago, researchers were able to pinpoint the comet strike to 10,950BC – the exact time of the Younger Dryas. The Younger Dryas was a return to glacial conditions and Global Cooling that temporarily reversed the gradual climatic warming which followed the Last Glacial Maximum, a period that began to recede around 20,000BC according to ice core data from Greenland.

Now, there is a very big asteroid which passed by the Earth on September 16th, 2013. What is most disturbing is that its cycle is 19 years, so it will return in 2032. Astronomers have not been able to swear it will not hit the Earth on that next pass. It was discovered by Ukrainian astronomers just 10 days before the 2013 pass, which came within a distance of 4.2 million miles (6.7 million kilometers). If anything alters its orbit, it will get closer and closer. Its timing lines up on a cyclical basis, which suggests we should begin to look at how to deflect asteroids, and soon.
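The return dates implied by that 19-year cycle are simple arithmetic. As a minimal sketch (assuming, as the text does, that the cycle is exactly periodic, which real orbits are not):

```python
# Hypothetical sketch: project future close-approach years for an
# asteroid on a roughly 19-year cycle. The 2013 baseline and 19-year
# figure come from the article; exact periodicity is a simplification.

def close_approach_years(first_pass: int, period: int, count: int) -> list[int]:
    """Return the next `count` approach years after the first observed pass."""
    return [first_pass + period * n for n in range(1, count + 1)]

print(close_approach_years(2013, 19, 3))  # -> [2032, 2051, 2070]
```

The first projected return, 2032, is the date the text flags.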

It definitely appears that catastrophic cooling may also be linked to the Earth being struck by a meteor, asteroid, or comet. We are clearly headed into a period of Global Cooling, and this will get worse as we head into 2032. The question becomes: is our model also reflecting that it is once again time for an Earth change caused by an asteroid encounter? Such events are not DOOMSDAY and the end of the world. They do seem to be regional. However, a comet striking North America could have altered the climate enough to freeze animals as far away as Siberia.

If there is a tiny element of truth in the story of Atlantis, the one thing it certainly proves is clear – there are ALWAYS survivors. Based upon a review of the history of civilization as well as climate, what resonates profoundly is that events follow a cyclical model of catastrophic occurrences rather than a linear, steady, slow progression of evolution.

Steven Pinker talks Donald Trump, the media, and how the world is better off today than ever before (ABC Australia)


“By many measures of human flourishing the state of humanity has been improving,” renowned cognitive scientist Steven Pinker says, a view often in contrast to the highlights of the 24-hour news cycle and the recent “counter-enlightenment” movement of Donald Trump.

“Fewer of us are dying of disease, fewer of us are dying of hunger, more of us are living in democracies, we’re more affluent, better educated … these are trends that you can’t easily appreciate from the news because they never happen all at once,” he says.

Canadian-American thinker Steven Pinker is the author of Bill Gates’s new favourite book — Enlightenment Now — in which he maintains that historically speaking the world is significantly better than ever before.

But he says the media’s narrow focus on negative anomalies can result in “systematically distorted” views of the world.

Speaking to the ABC’s The World program, Mr Pinker gave his views on Donald Trump, distorted perceptions and the simple arithmetic that proves the world is better than ever before.

Donald Trump’s ‘counter-enlightenment’

“Trumpism is of course part of a larger phenomenon of authoritarian populism. This is a backlash against the values responsible for the progress that we’ve enjoyed. It’s a kind of counter-enlightenment ideology that Trumpism promotes. Namely, instead of universal human wellbeing, it focusses on the glory of the nation, it assumes that nations are in zero-sum competition against each other as opposed to cooperating globally. It ignores the institutions of democracy which were specifically implemented to prevent a charismatic authoritarian leader from wielding power, subjecting him or her to the restraints of a governed system with checks and balances, which Donald Trump seems to think is rather a nuisance to his own ability to voice the greatness of the people directly. So in many ways all of the enlightenment forces we have enjoyed are being pushed back by Trump. But this is a tension that has been in play for a couple of hundred years. No sooner did the enlightenment happen than a counter-enlightenment grew up to oppose it, and every once in a while it does make reappearances.”

News media can ‘systematically distort’ perceptions

“If your impression of the world is driven by journalism, then as long as various evils haven’t gone to zero there’ll always be enough of them to fill the news. And if journalism isn’t accompanied by a bit of historical context, that is not just what’s bad now but how bad it was in the past, and statistical context, namely how many wars? How many terrorist attacks? What is the rate of homicide? Then our intuitions, since they’re driven by images and narratives and anecdotes, can be systematically distorted by the news unless it’s presented in historical and statistical context.”

‘Simple arithmetic’: The world is getting better

“It’s just a simple matter of arithmetic. You can’t look at how much there is right now and say that it is increasing or decreasing until you compare it with how much took place in the past. When you look at how much took place in the past you realise how much worse things were in the 50s, 60s, 70s and 80s. We don’t appreciate it now when we concentrate on the remaining horrors, but there were horrific wars such as the Iran-Iraq war, the Soviets in Afghanistan, the war in Vietnam, the partition of India, the Bangladesh war of independence, the Korean War, which killed far more people than even the brutal wars of today. And if we only focus on the present, we ought to be aware of the suffering that continues to exist, but we can’t take that as evidence that things have gotten worse unless we remember what happened in the past.”

Don’t equate inequality with poverty

“Globally, inequality is decreasing. That is, if you don’t look within a wealthy country like Britain or the United States, but look across the globe, either comparing countries or comparing people worldwide. As best as we can tell, inequality is decreasing because so many poor countries are getting richer faster than rich countries are getting richer. Now within the wealthy countries of the anglosphere, inequality is increasing. And although inequality brings with it a number of serious problems, such as disproportionate political power for the wealthy, inequality itself is not a problem. What we have to focus on is the wellbeing of those at the bottom end of the scale, the poor and the lower middle class. And their wellbeing has not actually been decreasing once you take into account government transfers and benefits. Now this is a reason we shouldn’t take for granted the important role of government transfers and benefits. It’s one of the reasons why the non-English speaking wealthy democracies tend to have greater equality than the English speaking ones. But we shouldn’t confuse inequality with poverty.”

Distant tropical storms have ripple effects on weather close to home (Science Daily)

Researchers describe a breakthrough in making accurate predictions of weather weeks ahead

February 20, 2018
Colorado State University
Researchers report a breakthrough in making accurate predictions of weather weeks ahead. They’ve created an empirical model fed by careful analysis of 37 years of historical weather data. Their model centers on the relationship between two well-known global weather patterns: the Madden-Julian Oscillation and the quasi-biennial oscillation.

Storm clouds (stock image). Credit: © mdesigner125 / Fotolia

The famously intense tropical rainstorms along Earth’s equator occur thousands of miles from the United States. But atmospheric scientists know that, like ripples in a pond, tropical weather creates powerful waves in the atmosphere that travel all the way to North America and have major impacts on weather in the U.S.

These far-flung, interconnected weather processes are crucial to making better, longer-term weather predictions than are currently possible. Colorado State University atmospheric scientists, led by professors Libby Barnes and Eric Maloney, are hard at work to address these longer-term forecasting challenges.

In a new paper in npj Climate and Atmospheric Science, the CSU researchers describe a breakthrough in making accurate predictions of weather weeks ahead. They’ve created an empirical model fed by careful analysis of 37 years of historical weather data. Their model centers on the relationship between two well-known global weather patterns: the Madden-Julian Oscillation and the quasi-biennial oscillation.

According to the study, led by former graduate researcher Bryan Mundhenk, the model, using both these phenomena, allows skillful prediction of the behavior of major rain storms, called atmospheric rivers, three and up to five weeks in advance.

“It’s impressive, considering that current state-of-the-art numerical weather models, such as NOAA’s Global Forecast System, or the European Centre for Medium-Range Weather Forecasts’ operational model, are only skillful up to one to two weeks in advance,” says paper co-author Cory Baggett, a postdoctoral researcher in the Barnes and Maloney labs.

The researchers’ chief aim is improving forecast capabilities within the tricky no-man’s land of “subseasonal to seasonal” timescales: roughly three weeks to three months out. Predictive capabilities that far in advance could save lives and livelihoods, from sounding alarms for floods and mudslides to preparing farmers for long dry seasons. Barnes also leads a federal NOAA task force for improving subseasonal to seasonal forecasting, with the goal of sharpening predictions for hurricanes, heat waves, the polar vortex and more.

Atmospheric rivers aren’t actual waterways, but “rivers in the sky,” according to researchers. They’re intense plumes of water vapor that cause extreme precipitation, plumes so large they resemble rivers in satellite pictures. These “rivers” are responsible for more than half the rainfall in the western U.S.

The Madden-Julian Oscillation is a cluster of rainstorms that moves east along the Equator over 30 to 60 days. The location of the oscillation determines where atmospheric waves will form, and their eventual impact on say, California. In previous work, the researchers have uncovered key stages of the Madden-Julian Oscillation that affect far-off weather, including atmospheric rivers.

Sitting above the Madden-Julian Oscillation is a very predictable wind pattern called the quasi-biennial oscillation. Over two- to three-year periods, the winds shift east, west and back east again, and almost never deviate. This pattern directly affects the Madden-Julian Oscillation, and thus indirectly affects weather all the way to California and beyond.

The CSU researchers created a model that can accurately predict atmospheric river activity in the western U.S. three weeks from now. Its inputs include the current state of the Madden-Julian Oscillation and the quasi-biennial oscillation. Using information on how atmospheric rivers have previously behaved in response to these oscillations, they found that the quasi-biennial oscillation matters — a lot.
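The study fits its empirical model to 37 years of historical data; as a rough illustration only (not the authors' actual method), a conditional-climatology lookup keyed on the two oscillations might look like the sketch below. All phase labels and activity values here are invented:

```python
# Toy sketch of an empirical conditional forecast: tabulate historical
# atmospheric-river (AR) activity by (MJO phase, QBO phase), then use
# that climatology as the prediction. Data and phase labels are invented.
from collections import defaultdict

def fit_composite(records):
    """records: iterable of (mjo_phase, qbo_phase, ar_activity) tuples."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for mjo, qbo, ar in records:
        sums[(mjo, qbo)] += ar
        counts[(mjo, qbo)] += 1
    # Mean AR activity observed under each (MJO, QBO) regime.
    return {key: sums[key] / counts[key] for key in sums}

def predict(composite, mjo_phase, qbo_phase, fallback=0.0):
    """Forecast AR activity from the current state of both oscillations."""
    return composite.get((mjo_phase, qbo_phase), fallback)

# Invented example: AR activity anomalies under two historical regimes.
history = [(3, "east", 1.2), (3, "east", 0.8), (6, "west", -0.5)]
model = fit_composite(history)
print(predict(model, 3, "east"))  # -> 1.0
```

Conditioning on the QBO as well as the MJO is the key point the researchers make: the same MJO phase yields different downstream AR behavior depending on the QBO's wind direction.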

Armed with their model, the researchers want to identify and understand deficiencies in state-of-the-art numerical weather models that prevent them from predicting weather on these subseasonal time scales.

“It would be worthwhile to develop a good understanding of the physical relationship between the Madden-Julian Oscillation and the quasi-biennial oscillation, and see what can be done to improve models’ simulation of this relationship,” Mundhenk said.

Another logical extension of their work would be to test how well their model can forecast actual rainfall and wind or other severe weather, such as tornadoes and hail.

Journal Reference:

  1. Bryan D. Mundhenk, Elizabeth A. Barnes, Eric D. Maloney, Cory F. Baggett. Skillful empirical subseasonal prediction of landfalling atmospheric river activity using the Madden–Julian oscillation and quasi-biennial oscillation. npj Climate and Atmospheric Science, 2018; 1 (1) DOI: 10.1038/s41612-017-0008-2

How can I become a fossil? (BBC Future)

How to be fossilized (Credit: Getty) Less than one-10th of 1% of all species that have ever lived became fossils. But from skipping a coffin to avoiding Iran, there are ways to up your chances of lasting forever.

By John Pickrell

15 February 2018

Every fossil is a small miracle. As author Bill Bryson notes in his book A Short History of Nearly Everything, only an estimated one bone in a billion gets fossilised. By that calculation the entire fossil legacy of the 320-odd million people alive in the US today will equate to approximately 60 bones – or a little over a quarter of a human skeleton.
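Bryson's back-of-envelope figure is easy to check. A minimal sketch, taking the article's one-in-a-billion rate and 320 million people, and assuming 206 bones per adult skeleton:

```python
# Checking the quoted arithmetic: one bone in a billion fossilised,
# ~320 million people, ~206 bones in an adult human skeleton.
people = 320_000_000
bones_per_person = 206
fossilisation_rate = 1e-9  # one bone in a billion

fossil_bones = people * bones_per_person * fossilisation_rate
print(round(fossil_bones))                        # -> 66
print(round(fossil_bones / bones_per_person, 2))  # -> 0.32
```

That works out to roughly 66 bones, or about a third of one skeleton, in line with the article's "approximately 60 bones, a little over a quarter of a skeleton."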

But that’s just the chance of getting fossilised in the first place. Assuming this handful of bones could be buried anywhere in the US’s 9.8 million sq km (3.8 million square miles), then the chances of anyone finding these bones in the future are almost non-existent.

Fossilisation is so unlikely that scientists estimate that less than one-tenth of 1% of all the animal species that have ever lived have become fossils. Far fewer of them have been found.

As humans, we have a couple of things going for us: we have hard skeletons and we’re relatively large. So we’re much more likely to make it than a jellyfish or a worm. There are things, however, you can do to increase your chances of success.

Taphonomy is the study of burial, decay and preservation – the entire process of what happens after an organism dies and eventually becomes a fossil. To answer the question of how to become a fossil, BBC Future spoke with some of the world’s top taphonomists.

1. Get buried, and quickly

“It’s really a question of maintaining a good condition of the body after death – long enough to be buried under sediment and then altered physically and chemically deep underground to become a fossil,” says Sue Beardmore, a taphonomist and collections assistant at the Oxford University Museum of Natural History.

“To be preserved for millions of years, you must also survive the first hours, days, seasons, decades, centuries, and thousands of years,” adds Susan Kidwell, a professor at the University of Chicago. “That is, you must survive the initial transition from the ‘taphonomically active zone’… to a zone of permanent burial, where your remains are unlikely to be exhumed.”

There are almost endless ways that fossilisation can fail. Many of these happen at, or down to 20-50cm below, the soil or seafloor surface. You don’t want your remains to be eaten and scattered by scavengers, for example, or exposed to the elements for too long. And you don’t want them to be bored into or shifted around by burrowing animals.

The sand and mud deposits of Canada’s Badlands quickly buried bones, making the area one of the world’s richest hunting grounds for dinosaur fossils (Credit: Getty)

When it comes to rapid burial, sometimes natural disasters can help – such as floods that dump huge amounts of sediment or volcanic eruptions that smother things in mud and ash. “One theory for the occurrence of dinosaur bone beds is firstly drought conditions, that killed the dinosaurs, followed by floods that moved the sediments to bury them,” Beardmore says.

Of course, the fact that human bodies are typically buried six feet under (unless cremated) gives you another leg up here. But that isn’t enough on its own.

2. Find some water

Obviously the first step is dying, but you can’t die just anywhere. Picking the perfect environment is key. Water is one important thing to consider. If you die in a dry environment, once you’ve been picked over by scavengers, your bones will probably weather away at the surface. Instead, most experts agree you need to get swiftly smothered in sand, mud and sediments – and the best places for that are lakes, floodplains and rivers, or the bottom of the sea.

“The palaeoenvironments that we often see the best fossils come out of are lake and river systems,” says Caitlin Syme, a taphonomist at the University of Queensland in Brisbane, Australia. The important thing is the rate at which fresh sediments are burying things. She recommends rivers flowing from mountains which cause erosion and therefore carry a lot of sediment. Another option is a coastal delta or floodplain, where river sediment is rapidly dumped as the water heads out to sea.

Ideally, you also want an ‘anoxic’ environment: one very low in oxygen, where animals and microorganisms that would digest and disturb your remains can’t survive.

Kidwell recommends avoiding about 50cm below the seafloor, “the maximum burrowing depth of shrimp, crabs and worms that might irrigate the sediments with oxygenated water”, which would promote decomposition and stir up the body.

“You want to end up quickly after death in a spot that is relatively low elevation, so that it is a sink for sediment, and preferably with standing water – a pond, lake, estuary or ocean – so that anoxic conditions might develop,” she says.

Choose the right conditions and you, too, could be preserved for as long as this 150 million year old archaeopteryx (Credit: Getty)

In rare cases, fossils created in these kinds of still, anoxic conditions preserve their soft tissues like skin, feathers and internal organs. Examples include the many exquisite feathered dinosaurs from China or the Bavarian quarries that produced the fossils of the earliest bird, archaeopteryx.

Once your fossil gets below the biologically active surface layer, then it’s stable and will continue to be buried more deeply as further sediments accumulate, Kidwell says. “The risk for destruction then shifts to a completely different geological timescale, namely that of tectonism.”

The question, then, is how long before the sediments encasing the corpse are turned to more permanent stone… and are lifted by geological activity to a height where erosion can expose the remains.

3. Skip the coffin

Now we come to the thorny technicality of what a fossil actually is – and what kind of fossil you want your body to be.

Very generally, anything up to around 50,000 years old is what’s known as a ‘subfossil’. These are largely still made up of the original tissues of the organism. Extinct Pleistocene megafauna found in caves – such as giant ground sloths in South America, cave bears in Europe, and marsupial lions in Australia – are good examples.

However, if you want your remains to become a fossil that lasts for millions of years, then you really want minerals to seep through your bones and replace them with harder substances. This process, known as ‘permineralisation’, is what typically creates a fully-fledged fossil. It can take millions of years.

As a result, you might skip the coffin. Bones permineralise most rapidly when mineral-rich water can flow through them, imbuing them with things like iron and calcium. A coffin might keep the skeleton nicely together, but it would interfere with this process.

There is a way a coffin might work, though. Mike Archer, a palaeontologist at the University of New South Wales, suggests burial in a concrete coffin filled with sand and with hundreds of 5mm holes drilled into the sides. This then needs to be buried deep enough that groundwater can pass through.

“If you want to be a classic bony fossil, a bit like something from Dinosaur Provincial Park in Canada, then something like a [coarse] river sand would be pretty good,” says Syme. “All the soft tissues would be destroyed and you’d be left with this beautifully articulated skeleton.”

In terms of the minerals, calcium ions which can precipitate into calcite, a form of calcium carbonate, are especially good. “These can start to cement or cover the body which will protect it in the long run, because given time it will most likely be buried at a greater depth,” Syme says.

Deliberately seeding your corpse with the appropriate minerals, such as calcite or gypsum, might be a way to accelerate this. Encouraging the growth of tough iron-rich minerals would also be sensible as they withstand weathering well in the long run.

If you want to personalise your fossil further, add colour with some copper (Credit: Alamy)

Silicates, from the sand, are also a nice durable mineral to have incorporated. Archer even suggests getting buried with copper strips and nickel pellets if you fancy fossilised bones and teeth with a nice blue-green colour to them.

4. Avoid the edges of tectonic plates

If you made it through the first few hundred thousand years and minerals begin to replace your bones, congratulations! You’ve successfully become a fossil. As sediments build up on top and you get pushed deeper into Earth’s crust, the heat and pressure will aid the process further.

But it’s not a done deal yet. Your fossil might still shift to such depths that it could be melted by the Earth’s heat and pressure.

Don’t want that to happen? Steer clear of the edges of tectonic plates, where the crust is going to eventually get sucked under the surface. One such subduction zone is Iran, where the Arabian Plate is being pushed beneath the Eurasian Plate.

5. Get discovered

Now you need to think about the potential for rediscovery.

If you want somebody to chance upon your carefully preserved fossil one day, you need to plan for burial in a spot that currently is low enough to accumulate the necessary sediments for deep burial – but that will eventually be pushed up again. In other words, you need a place with uplift where weathering and erosion will eventually scour off the surface layers, exposing you.

Good for more than floating, the Dead Sea may be a good place to preserve your fossil (Credit: Getty)

One good spot might be the Mediterranean Sea, Syme says; it’s getting shallower as Africa is pushed towards Europe. Other small, inland seas that will fill with sediment are good bets, too.

“Perhaps the Dead Sea,” she says. “The high salt would preserve and pickle you.”

6. Or go rogue

We’ve covered the standard method for hard, durable fossils with bone largely replaced by rock. But there are some oddball methods to consider, too.

Top of the list is amber. There are astounding fossils perfectly preserved in this gemstone made of tree resin – such as recent finds of birds, lizards and even a feathered dinosaur tail in Myanmar. “If you can find a large enough amount of tree sap and get covered in amber, that’s going to be the best way to preserve your soft tissues as well as your bones,” Syme says. “But it’s obviously pretty difficult for such a large animal.”

Can’t find enough amber? The next option is tar pits of the kind that have preserved sabre-toothed cats and mammoths at La Brea in Los Angeles. Although here you would most likely end up disarticulated, your bones jumbled in with other animals. There’s also freezing on a mountain or in a glacier, like Ötzi the iceman, found in the European Alps in 1991.

Where Ötzi the iceman met his fate may not seem very comfortable, but it proved key for preserving his remains (Credit: Alamy)

Another route might be natural mummification, with your body left to dry in a cave system. “There are a lot of cave system remains that get covered with calcium from groundwater, which also forms stalactites and stalagmites,” Syme says. “People like caving and so if the cave systems still exist in the future, they might happen upon you.”

One final method to preserve your corpse almost indefinitely, though not in the form of a fossil, would be launching you into space – or leaving you on the surface of a geologically inert celestial body with no atmosphere, such as the Moon.

“The vacuum of space would be very good if you want your body to remain perpetually non-decaying,” Syme says. She adds that you could attach a radio beacon if you want to get found again in the distant future.

7. Leave a little something extra

Assuming you are found millions of years hence, what else might be preserved alongside you?

Plastics (fidget spinners, anyone?), other oil-derived products that don’t biodegrade and inert metals, like alloys, gold and rare metals of the kind found in mobile phones, all might last as long.

Will mobile phones be one of the artefacts we leave for generations far in the future? (Credit: Getty)

Glass is durable too, and can withstand high temperatures and pressures. You can imagine finding the “outlines or shape of smartphones,” Syme says. Archer notes that the durability of glass means you could chisel ‘ENJOY!’ on a small sheet of glass in a concrete coffin with your body and it would be there to find with your fossil.

“To be 100% sure I would use diamond,” Syme adds – it’s immensely stable. Using a laser, you could etch a letter explaining the lengths you went to to get fossilised.

If you also want to pre-plan your archaeological context, Syme believes bitumen highways and the foundations of skyscrapers are contenders. “We’ve dug down deep into the ground to build these things. You’ll be able to see… the layouts of cities still there,” she says.

Remember, the words you write will fade and your deeds will be forgotten. But a fossil? That, perhaps, could last forever.

María Apaza: the 91-year-old woman who speaks with the Apus (El Comércio Perú)

The Q’ero nation, in the highlands of Cusco, is considered the last ayllu of the Incas, and its highest spiritual authority is a 91-year-old woman named María Apaza, the interlocutor of the Apus

Maria Apaza

María Apaza is 91 years old and is the only female altomisayoc (spiritual priestess) of the Q’ero nation. The Rimayni group brought her to Lima to hold ceremonies and initiation retreats. (Photo: Hugo Pérez)

Ana Núñez

As many as 300 million volts may have passed through María Apaza’s body that afternoon in 1943, enough to light a hundred-watt bulb for an entire year. The lightning struck the 16-year-old while she was grazing her animals in the highlands of Paucartambo (Cusco). María should have died that day, but she was destined to be an altomisayoc (highest priestess of the Q’ero nation), and surviving the painful kiss of the lightning was only a sign of that fate.

In fact, a few days after she received the powerful discharge, a pampamisayoc (healer-priest of the Q’eros) read it in the coca leaves: María had been chosen from among the men and women of the communities that inherited the blood and traditions of the Incas to be the sacred priestess who can have direct contact with the Apus, to withstand the power of forces that no other human being could bear, and to cleanse, heal and recharge energies with her cuyas (stones).

Walking that path was not easy. Before she could withstand the force of the Apus, María went through a process in which different pampamisayoc performed as many as twelve Karpay ceremonies (rites of initiation). According to their tradition, if the lightning chooses you as an altomisayoc, it first kills you, then takes you apart, and finally resurrects you, so these ceremonies sought to reintegrate her ‘scattered parts’.

María was only able to withstand the force of the Apus the day she went to the festival of Quyllurit’i, on the slopes of the snow-capped Ausangate (Cusco).

The magic of Andean myths has been part of María Apaza’s life. In the community of Kiko, where she was born, it is common to hear villagers tell stories in Quechua about the day “Mamá María flew away with the condor” or the time “the wind carried her off.” Her family even says there were occasions when the altomisayoc disappeared and was found several weeks later sleeping under a tree, which was interpreted as her having gone to another plane of time and space.

They say that when the Apus communicate with María there are tangible signs of that contact with nature: the condor flies, the puma roars, the hummingbird hangs motionless, and the wind speaks.

This week María Apaza arrived in Lima with her son Alejandro, a pampamisayoc, and other members of three generations of her family who belong to the Apaza lineage. In fulfillment of their prophecies, the Q’ero have opened their culture, offering their wisdom and spirituality to the world.

María speaks only Quechua, but she knows how to read hearts. The altomisayoc stands less than a meter and a half tall, has eyes that are gentle but deep, and if you ask her to sing, she will offer one of those Andean melodies that are sweet and sad. Seen from afar, you might think she is just an Andean grandmother. But look closely and you will notice that her legs are as young as yours. With those legs, María still climbs the mountains, comes face to face with the Apus, and even asks Pachamama to watch over us.

Read the full story tomorrow in the print edition of Somos magazine.

Interdisciplinary approach yields new insights into human evolution (Vanderbilt University)


Vanderbilt biologist Nicole Creanza takes interdisciplinary approach to human evolution as guest editor of Royal Society journal

The evolution of human biology should be considered part and parcel of the evolution of humanity itself, proposes Nicole Creanza, assistant professor of biological sciences. She is the guest editor of a new themed issue of the Philosophical Transactions of the Royal Society B, the oldest scientific journal in the world, which focuses on an interdisciplinary approach to human evolution.

Stanford professor Marc Feldman and Stanford postdoc Oren Kolodny collaborated with Creanza on the special issue.

“Within the blink of an eye on a geological timescale, humans advanced from using basic stone tools to examining the rocks on Mars; however, our exact evolutionary path and the relative importance of genetic and cultural evolution remain a mystery,” said Creanza, who specializes in the application of computational and theoretical approaches to human and cultural evolution, particularly language development. “Our cultural capacities – to create new ideas, to communicate and learn from one another, and to form vast social networks – together make us uniquely human, but the origins, the mechanisms, and the evolutionary impact of these capacities remain unknown.”

The special issue brings together researchers in biology, anthropology, archaeology, economics, psychology, computer science and more to explore the cultural forces affecting human evolution from a wider perspective than is usually taken.

“Researchers have begun to recognize that understanding non-genetic inheritance, including culture, ecology, the microbiome, and regulation of gene expression, is fundamental to fully comprehending evolution,” said Creanza. “It is essential to understand the dynamics of cultural inheritance at different temporal and spatial scales, to uncover the underlying mechanisms that drive these dynamics, and to shed light on their implications for our current theory of evolution as well as for our interpretation and predictions regarding human behavior.”

In addition to an essay discussing the need for an interdisciplinary approach to human evolution, Creanza included an interdisciplinary study of her own, examining the origins of English’s contribution to Sranan, a creole that emerged in Suriname following an influx of indentured servants from England in the 17th century.

Creanza, along with linguists Andre Sherriah and Hubert Devonish of the University of the West Indies and psychologist Ewart Thomas from Stanford, sought to determine the geographic origins of the English speakers whose regional dialects formed the backbone of Sranan. Their work combined linguistic, historical and genetic approaches to determine that the English speakers who influenced Sranan the most originated largely from two counties on opposite sides of southern England: Bristol, in the west, and Essex, in the east.

“Thus, analyzing the features of modern-day languages might give us new information about events in human history that left few other traces,” Creanza said.

So killer whales can talk. Welcome to a brave new world of cross-species chat (The Guardian)


Wikie the orca is more mimic than raconteur, but the potential is awesome. Imagine dolphins tackling politicians on pollution


A bridging of cultures has occurred. A cognitive chasm between intelligent creatures has been crossed. Of all the spectacular times for you to be alive, you happen to have been born in an age when killer whales started talking to the damn dirty apes who were willing to listen. Though this sounds like some sort of sci-fi dream/nightmare, I am here to assure you that this is real. Remain calm, but stay vigilant around all marine mammals at this time. We may be in for a rocky time, as you shall discover.

Let us begin by examining the facts. First, it’s true. As you may have heard by now, a captive killer whale called Wikie, housed at Marineland in Antibes, France, is uttering noises that mimic the human sounds “Hello” and “Bye-bye” as well as “One, two, three” plus, apparently, the haunting word “Amy” – the name of its trainer. Predictably, within hours of the release of the scientific paper, Wikie has become something of an online celebrity.

This week, after the news broke about Wikie’s great feat, a number of vocal animal welfare charities were calling for her release from captivity. This troubled me a little. Really? I thought. Is that really a good idea?

Killer whales (like all dolphins) are adept at horizontal learning, after all. They copy one another. They have sounds for objects, possibly names. They have dialects. They transmit behaviours. In other words, they have culture like we do. Might the once captive Wikie somehow spoil their untamed wildness with her newly learned human vernacular? What if this captive dolphin, somehow released into the wild with a human greeting (“Hello!”) should corrupt the wild dolphins it comes across? What then? I dread to think, but the idea is entertaining to consider so let us do just that.

Let us imagine pods of wild dolphins screaming “Goodbye” at boatloads of tourists that encroach on their hunting grounds each year. Imagine them saying “Bye-bye” to trawlers. Imagine them ruining countless nature documentaries by screaming “Hello” to BBC camera crews while filming.

And what if Wikie and her kind later develop sarcasm? Can you imagine, in an age where our oceans become bereft and depleted of nutrition, the words “So long and thanks for all the fish!”, delivered in a sarcastic tone? In a perverse sort of way, I suspect Douglas Adams would have laughed long and loud at this idea. And then wept.


But there are positives to this possible cross-species dialogue, and perhaps it is this potential that we should focus on. Imagine a non-human animal that could speak up – in human words – against the degradation of a vast ecosystem like that of the oceans? In such a world, perhaps modern politics would find itself a new enemy in marine mammals like Wikie. One can imagine, for instance, in some alternative universe, a language-endowed Wikie being invited to speak at Davos or some other God-awful international event.

One can imagine the soundbites (“Amy?”); the 7.45am BBC Breakfast interview; the cosy press conferences with Wikie, wide-eyed in a giant blow-up birthing pool in front of the cameras, next to a shady foreign president secretly plotting her kind’s political downfall while sipping imported water from a non-recyclable plastic bottle. (While writing this, it strikes me just how many of us would side with these talkative killer whales in moments like these.) But alas, such imaginative scenarios are just that – imaginative.

You knew this bit was coming. It is time to burst the bubble about this female killer whale. Wikie has a kind of magic about her, but it is not yet a two-way conversation. She is a mimic, pure and simple, and she is hungry for her fish rewards. In the same way that a 14-year-old can armpit-fart his way through Beethoven’s Fifth Symphony to achieve 1,000-plus views on YouTube, without ever truly knowing Beethoven, this killer whale has hit upon a neat trick for reward by exhaling in a measured way that sounds a little like a human voice.

But that doesn’t make the science hogwash. Far from it. It’s a beginning. And all scientific journeys have a beginning. We’ll need wild, untainted, unspoiled populations to test ideas on. We need to get away from fish rewards. We need to move away from captive research. This is a start. It’s not the end. They may one day talk with us, but not like this.

And so, in my wildest dreams it won’t be a “bye-bye” or a “hello” that curries favour with an intelligent species such as the killer whale, but a word of more depth: a word like “friend” or “partner” or “respect”. And further down the line maybe we could manage something else. Dialogue. Truth. Meaning.

As of recent times, these are no longer uniquely human concepts when it comes to zoology. Welcome to the brave new world. You happen to be alive in it. But who else is listening? Increasingly, we shall get to decide. Bye-bye, or hello: you and I get to choose.

Jules Howard is a zoologist and the author of Sex on Earth, and Death on Earth

Orcas can imitate human speech, research reveals (The Guardian)

Killer whales able to copy words such as ‘hello’ and ‘bye bye’ as well as sounds from other orcas, study shows

High-pitched, eerie and yet distinct, the sound of a voice calling the name “Amy” is unmistakable. But this isn’t a human cry – it’s the voice of a killer whale called Wikie.

New research reveals that orcas are able to imitate human speech, in some cases at the first attempt, saying words such as “hello”, “one, two” and “bye bye”.

The study also shows that the creatures are able to copy unfamiliar sounds produced by other orcas – including a sound similar to blowing a raspberry.

Scientists say the discovery helps to shed light on how different pods of wild killer whales have ended up with distinct dialects, adding weight to the idea that they are the result of imitation between orcas. The creatures are already known for their ability to copy the movements of other orcas, with some reports suggesting they can also mimic the sounds of bottlenose dolphins and sea lions.

“We wanted to see how flexible a killer whale can be in copying sounds,” said Josep Call, professor in evolutionary origins of mind at the University of St Andrews and a co-author of the study. “We thought what would be really convincing is to present them with something that is not in their repertoire – and in this case ‘hello’ [is] not what a killer whale would say.”

Wikie is not the first animal to have managed the feat of producing human sounds: dolphins, elephants, parrots, orangutans and even beluga whales have all been captured mimicking our utterances, although they use a range of physical mechanisms to do so. Noc, the beluga whale, made novel use of his nasal cavities, while Koshik, an Indian elephant, jammed his trunk into his mouth, resulting in the pronunciation of Korean words ranging from “hello” to “sit down” and “no”.

But researchers say only a fraction of the animal kingdom can mimic human speech, with brain pathways and vocal apparatus both thought to determine whether it is possible.

“That is what makes it even more impressive – even though the morphology [of orcas] is so different, they can still produce a sound that comes close to what another species, in this case us, can produce,” said Call.

He poured cold water, however, on the idea that orcas might understand the words they mimic. “We have no evidence that they understand what their ‘hello’ stands for,” he said.

Writing in the journal Proceedings of the Royal Society B: Biological Sciences, researchers from institutions in Germany, the UK, Spain and Chile describe how they carried out the latest research with Wikie, a 14-year-old female orca living in an aquarium in France. She had previously been trained to copy actions performed by another orca when given a human gesture.

After first brushing up Wikie’s grasp of the “copy” command, the researchers trained her to parrot three familiar orca sounds made by her three-year-old calf, Moana.

Wikie was then additionally exposed to five orca sounds she had never heard before, including noises resembling a creaking door and the blowing of a raspberry.

Finally, Wikie was exposed to a human making three of the orca sounds, as well as six human sounds, including “hello”, “Amy”, “ah ha”, “one, two” and “bye bye”.

“You cannot pick a word that is very complicated because then I think you are asking too much – we wanted things that were short but were also distinctive,” said Call.

Throughout the study, Wikie’s success was first judged by her two trainers and then confirmed from recordings by six independent adjudicators who compared them to the original sound, without knowing which was which.

The team found that Wikie was often quickly able to copy the sounds, whether from an orca or a human, with all of the novel noises mimicked within 17 trials. What’s more, two human utterances and all of the human-produced orca sounds were managed on the first attempt – although only one human sound – “hello” – was correctly produced more than 50% of the time on subsequent trials.

The matching was further backed up through an analysis of various acoustic features from the recordings of Wikie’s sounds.

While the sounds were all made and copied when the animals’ heads were out of the water, Call said the study shed light on orca behaviour.

“I think here we have the first evidence that killer whales may be learning sounds by vocal imitation, and this is something that could be the basis of the dialects we observe in the wild – it is plausible,” said Call, noting that to further test the idea, trials would have to be carried out with wild orcas.

Diana Reiss, an expert in dolphin communication and professor of psychology at Hunter College, City University of New York, welcomed the research, noting that it extends our understanding of orcas’ vocal abilities, with Wikie able to apply a “copy” command learned for imitation of actions to imitation of sounds.

Dr Irene Pepperberg, an expert in parrot cognition at Harvard University, also described the study as exciting, but said: “A stronger test would have been whether the various sounds produced could be correctly classified by humans without the models present for comparison.”

Language is learned in brain circuits that predate humans (Georgetown University)



WASHINGTON — It has often been claimed that humans learn language using brain components that are specifically dedicated to this purpose. Now, new evidence strongly suggests that language is in fact learned in brain systems that are also used for many other purposes and even pre-existed humans, say researchers in PNAS (Early Edition online Jan. 29).

The research combines results from multiple studies involving a total of 665 participants. It shows that children learn their native language and adults learn foreign languages in evolutionarily ancient brain circuits that also are used for tasks as diverse as remembering a shopping list and learning to drive.

“Our conclusion that language is learned in such ancient general-purpose systems contrasts with the long-standing theory that language depends on innately-specified language modules found only in humans,” says the study’s senior investigator, Michael T. Ullman, PhD, professor of neuroscience at Georgetown University School of Medicine.

“These brain systems are also found in animals – for example, rats use them when they learn to navigate a maze,” says co-author Phillip Hamrick, PhD, of Kent State University. “Whatever changes these systems might have undergone to support language, the fact that they play an important role in this critical human ability is quite remarkable.”

The study has important implications not only for understanding the biology and evolution of language and how it is learned, but also for how language learning can be improved, both for people learning a foreign language and for those with language disorders such as autism, dyslexia, or aphasia (language problems caused by brain damage such as stroke).

The research statistically synthesized findings from 16 studies that examined language learning in two well-studied brain systems: declarative and procedural memory.

The results showed that how good we are at remembering the words of a language correlates with how good we are at learning in declarative memory, which we use to memorize shopping lists or to remember the bus driver’s face or what we ate for dinner last night.

Grammar abilities, which allow us to combine words into sentences according to the rules of a language, showed a different pattern. The grammar abilities of children acquiring their native language correlated most strongly with learning in procedural memory, which we use to learn tasks such as driving, riding a bicycle, or playing a musical instrument. In adults learning a foreign language, however, grammar correlated with declarative memory at earlier stages of language learning, but with procedural memory at later stages.

The correlations were large, and were found consistently across languages (e.g., English, French, Finnish, and Japanese) and tasks (e.g., reading, listening, and speaking tasks), suggesting that the links between language and the brain systems are robust and reliable.
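The statistical synthesis behind findings like these pools correlation coefficients across individual studies. A minimal sketch of how such pooling typically works, using the textbook inverse-variance method with the Fisher z-transform (the choice of method and every r and n value below are my own illustrative assumptions, not figures taken from the paper):

```python
import math

def fisher_z(r):
    """Variance-stabilizing transform for a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Map a z value back to a correlation coefficient."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_correlation(studies):
    """Pool (r, n) pairs: weight each study's z by n - 3, then back-transform."""
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return inverse_fisher_z(num / den)

# Hypothetical (correlation, sample size) pairs standing in for individual studies
studies = [(0.52, 40), (0.61, 55), (0.47, 30)]
print(f"pooled correlation: {pooled_correlation(studies):.3f}")
```

Each study's r is z-transformed, weighted by n − 3 (the inverse of the z-score's sampling variance), averaged, and back-transformed, so larger studies pull the pooled estimate toward their own values.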

The findings have broad research, educational, and clinical implications, says co-author Jarrad Lum, PhD, of Deakin University in Australia.

“Researchers still know very little about the genetic and biological bases of language learning, and the new findings may lead to advances in these areas,” says Ullman. “We know much more about the genetics and biology of the brain systems than about these same aspects of language learning. Since our results suggest that language learning depends on the brain systems, the genetics, biology, and learning mechanisms of these systems may very well also hold for language.”

For example, though researchers know little about which genes underlie language, numerous genes playing particular roles in the two brain systems have been identified. The findings from this new study suggest that these genes may also play similar roles in language. Along the same lines, the evolution of these brain systems, and how they came to underlie language, should shed light on the evolution of language.

Additionally, the findings may lead to approaches that could improve foreign language learning and language problems in disorders, Ullman says.

For example, various pharmacological agents (e.g., the drug memantine) and behavioral strategies (e.g., spacing out the presentation of information) have been shown to enhance learning or retention of information in the brain systems, he says. These approaches may thus also be used to facilitate language learning, including in disorders such as aphasia, dyslexia, and autism.

“We hope and believe that this study will lead to exciting advances in our understanding of language, and in how both second language learning and language problems can be improved,” Ullman concludes.

What happens to language as populations grow? It simplifies, say researchers (Cornell)



ITHACA, N.Y. – Languages present an intriguing paradox. Languages with many speakers, such as English and Mandarin, have large vocabularies but relatively simple grammar. Languages with fewer speakers, by contrast, have fewer words but complex grammars.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.

Their study was published in the Proceedings of the Royal Society B: Biological Sciences.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.

The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.

But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

Conversely, in a large community, like a big city, one person will talk to only a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
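The mechanism Christiansen describes – easy items need one exposure, hard items need several, and exposures accumulate faster when each person talks with a larger share of the community – can be captured in a toy simulation. This sketch is purely illustrative: the population sizes, conversation counts, and five-exposure threshold are my own assumptions, not the model from the paper.

```python
import random

def spread(pop_size, exposures_needed, talks_per_person=100, seed=1):
    """Toy diffusion model: one agent starts out knowing an innovation;
    every other agent adopts it only after hearing it `exposures_needed`
    times from randomly chosen conversation partners."""
    rng = random.Random(seed)
    knows = [False] * pop_size
    knows[0] = True
    heard = [0] * pop_size
    for _ in range(talks_per_person * pop_size):
        speaker, listener = rng.sample(range(pop_size), 2)
        if knows[speaker] and not knows[listener]:
            heard[listener] += 1
            if heard[listener] >= exposures_needed:
                knows[listener] = True
    return sum(knows) / pop_size  # fraction of the community that adopted it

for size in (20, 500):
    easy = spread(size, exposures_needed=1)   # word-like: one exposure suffices
    hard = spread(size, exposures_needed=5)   # grammar-like: needs repetition
    print(f"community of {size}: easy item reaches {easy:.0%}, hard item {hard:.0%}")
```

In runs of this sketch, the one-exposure "word" saturates both communities, while the five-exposure "grammar rule" spreads widely in the small community but barely moves beyond its originator in the large one, mirroring the opposing tendencies the researchers describe.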

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”

His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of the University of Warwick, England.

Primate culture (Revista Fapesp)

The transmission of tool-use practices among capuchin monkeys helps rethink the role of traditions in evolution


Podcast: Eduardo Ottoni

With a stone raised above his head, the young Porthos pounds vigorously at the sandy ground to open a hole. His goal: a spider, which he soon manages to dig out and rolls between his hands to stun before eating it. He is a capuchin monkey of the species Sapajus libidinosus, an inhabitant of Serra da Capivara National Park in Piauí, and a study subject of researchers at the Institute of Psychology of the University of São Paulo (IP-USP). The biologist Tiago Falótico has been characterizing tool use by these animals (see Pesquisa FAPESP nº 196) and showed, in an article published in July in the journal Scientific Reports, that the young male’s actions involve knowledge, learning, and the transmission of cultural practices – or traditions, as some prefer to call them when the subjects are not human – within social groups. The research belongs to a body of theory that seeks to interweave biology with the social sciences and humanities, and that recently culminated in the formation of the Cultural Evolution Society, whose inaugural meeting has just taken place in Germany, from September 13 to 15.

Until now, the use of stones as digging tools has been documented only in this population. Digging out spiders, especially, takes experience. The study, the result of observations made during Falótico’s doctorate, completed in 2011 under the supervision of the biologist Eduardo Ottoni, shows that almost 60% of adults and young monkeys (like Porthos) succeed at the task. Juveniles (the equivalent of human children), on the other hand, manage it in only a little over 30% of cases. This is because success requires recognizing the silk lining that seals the arachnid’s burrow, a sign that its inhabitant is inside. “Juveniles sometimes dig out a burrow that has just been opened by another monkey,” says Falótico. Underground structures, resembling potatoes, of the plant known as farinha-seca (Thiloa glaucocarpa) are also unearthed more efficiently by adults. The roots of louro (Ocotea), another food of these primates, do not seem to pose a special challenge to learners, even though they involve the use of larger stones. Monkeys of both sexes proved equally capable of digging with stones, with an equivalent success rate, although males seem more interested in the activity: of the 1,702 episodes observed, 77% involved males and only 23% females.

“We expected to find a correlation between tool use and food scarcity, but that is not what we saw,” says Falótico. If the Serra da Capivara monkeys find something edible that requires tools, they resort to them. Their way of life, in which they spend half their time on the ground surrounded by stones and sticks, seems conducive to developing these skills. But that is not all. Although there is no difference between the sexes in feeding habits, females never use sticks – which their male companions employ, for example, to flush lizards out of crevices and to extract insects from trunks. The difference, apparently, lies only in interest. “When a male sees another one using a stick, he watches attentively; a female, even if she is right beside the one using the tool, takes no interest and looks the other way!”

Monkeys of the same species that inhabit the Boa Vista ranch in Gilbués, about 300 kilometers to the southwest, have distinct tool-use traditions. There, in an area with more Cerrado than Caatinga influence, stones are less abundant but necessary (and used) for cracking palm nuts. Sticks are everywhere, yet go unused. This cultural difference between monkey groups was explored in an experiment by the psychologist Raphael Moura Cardoso during his doctorate, supervised by Eduardo Ottoni and reported in a 2016 article in Biology Letters. At both the Boa Vista ranch and Serra da Capivara, they set out acrylic boxes filled with sugarcane molasses. The only way to extract the treat was through a slot at the top just wide enough for sticks. “At Serra da Capivara, a male promptly whacked the box with a stone,” recalls Ottoni, who had foresightedly designed the apparatus to be “capuchin-proof.” “When nothing happened, he dropped the stone, scratched his head, and picked up a stick.” He jokes that he did not even need to edit the video to show it at a conference – it was one continuous, immediate action. Over five days of exposure to the box, 10 of the 14 males used a stick in the very first session, and only the three youngest were unsuccessful. The rest achieved a 90% success rate at the task. The females did not try, and neither did the Boa Vista monkeys. There, the researchers even tried to help: after six hours exposed to the task, the monkeys found a stick already inserted in the slot. Even after pulling it out and licking the molasses off the tip, none of them reinserted the tool into the box over the experiment’s 13 days. One surprise was that the Boa Vista monkeys, expert nut-crackers, did not try to smash the box open. “I expected that from them, not from the others,” says Ottoni.

Social learning

The surprising results may reinforce the importance of the transmission of traditions among these monkeys. The cover of the July 25 issue of PNAS this year features precisely a photo of a capuchin from the Boa Vista ranch eating a nut it had managed to crack with the help of a large round stone, watched closely by a youngster. The image announces a special collection on how culture connects with biology, which includes an article on the Boa Vista monkeys by the group led by the primatologists Patrícia Izar, of IP-USP, Dorothy Fragaszy, of the University of Georgia in the United States, and Elisabetta Visalberghi, of the Institute of Cognitive Sciences and Technologies in Italy, who have studied them systematically since 2006. In the observations gathered over that time, what stands out is the adults’ tolerance toward the young learners, who watch up close and even eat pieces of the cracked nuts. “The adults compete for the resources, and the immature ones are allowed to stay close,” says Izar. The analyses published in the recent article show much more than proximity: the nut-crackers influence the activity of the others, above all the youngsters, who also begin to handle stones and nuts. This lasts a few minutes. “Tradition channels activity toward the same kind of action that matters for that tradition,” she says.

Izar stresses that the monkeys are born into this context. “We often see infants on their mothers’ backs while the mothers crack nuts,” she says. Through this continuous learning, they end up becoming specialists at the task. But watching is not enough, hence the importance of the infants being drawn to the actions of the adults – especially the most proficient ones. “Success depends on perceiving the task and the properties of the tool,” she explains, describing a complex body-tool system in which force, gestures, and posture must constantly be adjusted. When cracking tucum, a less resistant palm nut, the monkeys adjust the force of their blows after hearing the sound of the surface cracking, the group showed in an article last year in Animal Behaviour. For tougher nuts, they choose stones that can be heavier than their own bodies. And stone selection is discerning, as shown by an experiment in which Izar and her group supplied artificial stones of different sizes, weights, and densities. The large stones quickly attracted the monkeys’ attention, but if they were of low density – lighter than they looked – they were abandoned. “They perceive that weight matters for cracking,” says Izar.

Tolerance: an adult male at the Boa Vista ranch eats a cracked nut, watched closely by a youngster

These primate societies alter their environment. The monkeys choose stones or flat logs as anvils for cracking nuts, and carry to those spots the rare large, hard stones they find in the environment. This arrangement matters not only because it creates cracking workshops, but because it channels opportunities for learning, since everyone knows where the activity takes place and can be observed. “It makes no sense to think of motor maturation independently of the social and feeding context,” says the biologist Briseida Resende, also of IP-USP and a co-author of the PNAS article. Individual development depends on each animal’s experiences, its physical capacities, and the repertoire accumulated by the group, in which an innovation can spread, persist, and become part of a culture maintained across generations. Resende argues that individual and society are inseparable, although historically they have been seen as distinct entities.

Theory revisited

Bringing cultural and biological evolution together is precisely the focus of the extended synthesis, now consolidated with the founding, in 2016, of the Cultural Evolution Society – its first president is the zoologist Peter Richerson, of the University of California, Davis, whose group favors statistical approaches. This joint view broadens the evolutionary perspective, since the transmission of ideas or innovations is not restricted to parent and offspring and can bring selective advantages, favoring the relevant cognitive and social capacities. It also holds that culture can influence physical traits, such as the shape and size of the brain, or the development of skills that in turn consolidate behavior. Genes and culture, two channels for transmitting information, are thus connected by a two-way street.

Young learners try to take advantage of an excavation made by a female

The opportunity to watch behaviors arise and spread is rare, which is why experimental approaches that elicit innovations are an important complement to the diverse behaviors of the Piauí capuchins. Recent statistical tools can help deepen this understanding, such as Network-Based Diffusion Analysis, which Ottoni’s group is beginning to use. “The program builds a random social network and compares it with the real one,” explains the researcher, who makes the analyses more robust by including traits measured in the subjects under study. In August 2016, at the congress of the International Primatological Society in Chicago, he presented results of an experiment conducted by the biologist Camila Coelho during her doctorate under his supervision, which included a stay at Durham University in the United Kingdom to learn the method. The results indicate that, in capuchin monkeys, social learning predicts the diffusion of information in the species.

Until half a century ago, tool use was considered a human privilege. By observing chimpanzees in Tanzania, the Englishwoman Jane Goodall overturned that exclusivity and, in a way, forced a redrawing of the boundaries between humans and other animals. Much has been discovered since then, but talk of animal culture still meets with a certain discomfort. Perhaps not for much longer.

The use of stones for digging has only been described in Serra da Capivara

Under the command of hormones

Care for offspring is linked to the hormone oxytocin in mammals. The group led by Maria Cátira Bortolini, of the Federal University of Rio Grande do Sul, described a few years ago the variations in the oxytocin molecule in monkey species with attentive fathers (see Pesquisa FAPESP 228). Pharmacological assays in the laboratory of biochemist Claudio Costa-Neto, of the Ribeirão Preto Medical School at USP, have now traced oxytocin's path inside cells and found that receptors for the altered forms remain more exposed on cell membranes, so that the system does not desensitize. "It is as if the monkey constantly received the instruction 'I must care for the young,'" Cátira explains. That makes a difference for the survival of marmosets, for example, which frequently bear twins.

The result appears in a paper published in August in PNAS, which also describes the effect of administering these oxytocins to rats via nasal spray, an experiment carried out in collaboration with physiologist Aldo Lucion, of UFRGS. Lactating females, already flooded with oxytocin, changed their behavior little. But males treated with the hormone radically altered their habit of ignoring pups and rushed to sniff them, a reaction that was three times faster with marmoset oxytocin.

The cebids, the family that includes capuchin monkeys, also have a type of oxytocin that increases the propensity for active fatherhood. The groups of Cátira and Ottoni recently began a collaboration to investigate the genetic characteristics of more and less care-giving males. "We have already managed to extract genetic material from fecal samples and are selecting candidate genes to screen," she says, fascinated by the males' tolerance and the cognitive abilities of the Piauí primates. "The capacity to innovate, on the one hand, and to sit and observe, on the other, are necessary for the development and transmission of adaptive cultural traits, and there is certainly a genetic scenario behind this."

1. Tool use by wild capuchin monkeys (Sapajus libidinosus): Ecology, socially mediated learning, and behavioral traditions (no. 14/04818-0); Grant Mechanism: Thematic Project; Principal Investigator: Eduardo Benedicto Ottoni (USP); Investment: R$609,276.69.
2. Variability in the social behavior of capuchin monkeys (genus Cebus): Comparative analysis among populations to investigate physiological correlates (no. 08/55684-3); Grant Mechanism: Regular Research Grant; Principal Investigator: Patrícia Izar (USP); Investment: R$186,187.33.
3. Development of new ligands/drugs with selective agonistic action (biased agonism) on receptors of the renin-angiotensin and kallikrein-kinin systems: New properties and new biotechnological applications (no. 12/20148-0); Grant Mechanism: Thematic Project; Principal Investigator: Claudio Miguel da Costa Neto (USP); Investment: R$3,169,674.21.

Scientific articles
FALÓTICO, T. et al. Digging up food: excavation stone tool use by wild capuchin monkeys. Scientific Reports, v. 7, n. 1, 6278. July 24, 2017.
CARDOSO, R. M. and OTTONI, E. B. The effects of tradition on problem solving by two wild populations of bearded capuchin monkeys in a probing task. Biology Letters, v. 12, n. 11, 20160604. Nov. 2016.
FRAGASZY, D. M. et al. Synchronized practice helps bearded capuchin monkeys learn to extend attention while learning a tradition. PNAS, v. 114, n. 30, p. 7798-805. July 25, 2017.
MANGALAM, M., IZAR, P. et al. Task-specific temporal organization of percussive movements in wild bearded capuchin monkeys. Animal Behaviour, v. 114, p. 129-37. Apr. 2016.
PARREIRAS-E-SILVA, L. T. et al. Functional New World monkey oxytocin forms elicit an altered signaling profile and promote parental care in rats. PNAS, v. 114, n. 34, p. 9044-49. Aug. 22, 2017.
VISALBERGHI, E. et al. Selection of effective stone tools by wild bearded capuchin monkeys (Cebus libidinosus). Current Biology, v. 19, n. 3, p. 213-17. Feb. 10, 2009.

Medium claims to control the weather and has served governments in Brazil and abroad (RedeTV!)


In a rare TV appearance, Adelaide Scritori, of the Cacique Cobra Coral Foundation, discusses her partnership with Brazilian and foreign politicians. Besides claiming she can control the weather, she shows documents to prove she warned the U.S. government about the attack on the Twin Towers and warned Saddam Hussein about the Gulf War, and says she alerted Ayrton Senna to the accident he would suffer at Imola.

Published: 19/01/2018


They Hunt. They Gather. They’re Very Good at Talking About Smells (N.Y.Times)

The answer might come down partly to culture, suggests a study published Thursday in Current Biology.

To better understand why the Jahai have this knack with naming smells, researchers compared a different group of hunter-gatherers on the peninsula, the Semaq Beri, with neighbors who are not hunter-gatherers. Even though they shared related languages and a home environment, the Semaq Beri proved far better at putting words to odors. These results challenge assumptions that smelling just isn’t something people are good at. They also show how important culture is to shaping who we are — and even what we do with our noses.

[READ: Ancestral Climates May Have Shaped Your Nose]

In the rainforests of the Malay Peninsula, the Semaq Beri, like the Jahai, are hunter-gatherers. But the Semelai, a group that lives nearby, cultivate rice and trade collected forest items.

Semaq Beri members clearing undergrowth in the durian fruit season. A neighboring group, the Semelai, share a related language but were less adept at naming odors they smelled. Credit: Nicole Kruspe

To test their color and odor naming abilities, the researchers asked members of each group to identify colors on swatches and odors trapped inside pens. When it came to naming more than a dozen odors including leather, fish and banana, the differences were clear. The Semaq Beri used particular terms to describe odor qualities. But when the Semelai tried to identify the source, they often got it wrong. The difference between the two groups was as pronounced as the gap in the earlier study between the Jahai and English-speaking Americans.

“I thought the differences would be more subtle between the two groups,” said Nicole Kruspe, a linguist at Lund University in Sweden who co-authored the study.

Perhaps the importance a culture places on odor influences how people describe it. And if you depend on the forest’s produce to live, you may want to know more subtle attributes that indicate origin, safety or quality.

“A cultural preoccupation with odor is useful in the forest with limited vision,” said Dr. Kruspe.

The Semaq Beri value odors as food-locating resources but also as important pieces of life that can indicate a person’s identity and guide taboos and rules for behavior. But “that in itself doesn’t explain it,” Dr. Kruspe said.

[READ: The Nose, an Emotional Time Machine.]

Perhaps well-practiced skills preserved odor-detecting genes or primed brains to be better odor-detectors — which suggests that without continuing to use this ability, it could one day be lost.

Asifa Majid, a linguist at the Max Planck Institute for Psycholinguistics in the Netherlands and co-author of the paper, has also studied hunter-gatherers with comparable skills in Mexico and worries that pressures of globalization may disrupt these lifestyles, limit access to odors and threaten a vibrant odor lexicon.

One way to explore that possibility would be to see what happens to the lexicon for odors of descendants of hunter-gatherers who have been removed from that lifestyle.

“Unfortunately,” said Dr. Kruspe, “we will probably be able to test for that in a couple of generations.”

Disorders in academia (Pesquisa Fapesp)

Universities are working to develop prevention strategies and psychological care for undergraduate and graduate students




The case of a doctoral student who committed suicide in the laboratories of the Biomedical Sciences Institute of the University of São Paulo (ICB-USP), in August of this year, threw a spotlight on the pressures faced by those who choose an academic career and on the psychological disorders associated with graduate life. The subject is gradually beginning to be discussed more in Brazil. Even so, few Brazilian universities yet invest in creating centers for the psychological care of their undergraduate and graduate students.

The problem is worldwide. In Belgium, a study published in May in the journal Research Policy found that one-third of 3,659 doctoral students at universities in the Flanders region were at risk of developing some kind of psychiatric illness. In 2014, a study at the University of California, Berkeley, in the United States, found that 785 (31.4%) of 2,500 graduate students showed signs of depression. The study was part of a broader effort under way since 1994, when it was found that 10% of the university's graduate students and postdoctoral researchers had already considered suicide.

In the United Kingdom, a study published in 2001 in Educational Psychology found that 53% of researchers at British universities suffered from some mental disorder, while in Australia the rate in academia was estimated at up to four times that of the general population. Although based on relatively small samples, these studies point to a concern that is becoming increasingly evident in academia worldwide: undergraduate and graduate students are subject to pressures that can trigger a range of mental disorders.

As in other countries, the number of studies, data, and initiatives on the subject in Brazil is still modest. In São Paulo, São Paulo State University (Unesp) plans to launch the project "Bem viver para tod@s" in early 2018. The initiative includes lectures and debates with the university's own mental health specialists. "The goal is to guide students and professors on how to identify and deal with these problems," explains Cleópatra da Silva Planeta, dean of University Extension and the project's coordinator.

Some universities already offer care services for their students. At the University of Campinas (Unicamp), for example, the Student Psychological and Psychiatric Assistance Service (Sappe), linked to the Office of the Dean of Undergraduate Studies, has for 30 years provided psychological and psychiatric assistance to undergraduate and graduate students. According to psychiatrist Tânia Vichi Freire de Mello, Sappe's coordinator, about 40% of the university's students who seek the service are in master's or doctoral programs. "Most report insomnia, stress, and anxiety, as well as panic attacks and depression," she says. "It is common for them to say they try to cope with these problems by consuming alcohol and psychoactive drugs such as marijuana."

These problems usually result from a convergence of factors, in the view of psychiatrist Neury José Botega, of Unicamp's School of Medical Sciences (FCM). According to him, graduate life is marked by tight deadlines, pressure to publish papers, excessive workloads, and constant demands. "Several students say they cannot keep up with the deadlines or cope with the level of demands from professors and advisers," he comments. Crises of stress, anxiety, panic, and depression are frequent. "Often continuing one's studies becomes unviable, and the student despairs at being unable to carry on with their work."

A report released in 2011 by the National Association of Directors of Federal Institutions of Higher Education (Andifes), which mapped the social, economic, and cultural lives of nearly 20,000 undergraduates at Brazilian federal universities, found that 29% had already sought psychological care, and 9% psychiatric care, which involves more serious problems. The study also found that 11% had taken, or were taking, psychiatric medication.

A very common problem among graduate students, according to Tamara Naiz, president of the National Association of Graduate Students (ANPG), is burnout syndrome, in which the individual reaches a severe level of exhaustion from overwork without rest. There is also impostor syndrome, which afflicts academics who cannot accept their achievements as their own merit. "The development of disorders in graduate school reflects the problems of academia, which offers few opportunities," she notes. "At the same time, the demands and pressures involving short deadlines for qualifying exams and defenses, and excessive or unfair pressure to publish in high-impact journals, help aggravate the situation."

The relationship with one's adviser can also contribute to the development of psychological disorders. The ANPG has recorded many cases of abusive or negligent conduct reported by students who suffered moral harassment during meetings or classes. Equally frequent are cases reaching the ANPG of advisers who neglect issues related to their students' research, or who assign students tasks unrelated to it. In other cases, the reports involve scholarship cuts and failing grades that were unjustified, or justified on false or non-academic grounds. Sexual harassment, in its various forms, and gender discrimination, which still persist around the world, are also cited as triggers of psychological disorders in academia, especially among women.

The case of medicine
The great majority of studies in psychiatric epidemiology of the Brazilian academic environment concern undergraduates, above all medical students. That is because the program tends to be marked by continuous pressure for good grades and an exhausting load of classes and study. In addition, the atmosphere among the students themselves is competitive from the entrance exam onward, which is generally hotly contested. A study published in 2013 in the Revista Brasileira de Educação Médica by researchers at the Federal University of Paraíba (UFPB), in João Pessoa, involving 384 medical students, found that 33.6% had some type of mental disorder, such as anxiety, depression, or somatoform disorders, conditions that persist even though physical findings cannot explain the nature and extent of the symptoms or the individual's suffering and worries.

According to psychiatrist Laura Helena Andrade, of the Institute of Psychiatry at USP's School of Medicine (FM), difficulty managing time, daily contact with death, fear of contracting diseases or making mistakes, and a feeling of powerlessness in the face of certain illnesses all make these students more susceptible to developing mental disorders. "Students in the health field need greater resilience to sustain their performance in study, research, and the care of sick people," she stresses. In the last five years alone, the Federal University of São Carlos (UFSCar) recorded 22 suicide attempts among medical students, according to data published in September in the newspaper O Estado de S. Paulo. At the Federal University of São Paulo (Unifesp) and the Federal University of the ABC (UFABC), five students took their own lives over the same period.

This has encouraged some Brazilian universities to invest in prevention centers and psychological services specifically for these students. Unicamp has the FCM's Support Group for Undergraduate Students in Medicine and Speech Therapy and for Residents (Grapeme). USP has had, since 1986, the Student Psychological Assistance Group (Grapal), dedicated to students of physiotherapy, speech therapy, medicine, and occupational therapy, as well as FM-USP residents. Since August, the Federal University of Minas Gerais (UFMG) has had two psychological care centers for undergraduate and graduate students.

In parallel, these institutions are working to train professors to anticipate such problems. According to Tânia Vichi Freire de Mello, of Sappe, it is important that they stay alert to sudden changes in their students' behavior, or to drops in academic performance. Seeking guidance or psychological treatment can keep a student from dropping out. That is the conclusion of a 2016 survey that analyzed the profiles of 1,237 students seen by Sappe: the dropout rate among those assisted by the service was lower than among those who did not use it.

For Botega, of FCM-Unicamp, it is important that professors be more open to discussing the subject with their students, without belittling their anxieties. "In general, professors are more concerned with their students' academic performance, without realizing that it is tied to the student's mental health," the psychiatrist says. "We need to act to welcome these students, guide them and, if necessary, refer them to care services."

Walter Benjamin’s thirteen rules for writing (Progressive Geographies)


Walter Benjamin’s rules for writing – something I shared in the early days of this blog, but worth sharing again.

I. Anyone intending to embark on a major work should be lenient with themselves and, having completed a stint, deny themselves nothing that will not prejudice the next.

II. Talk about what you have written, by all means, but do not read from it while the work is in progress. Every gratification procured in this way will slacken your tempo. If this régime is followed, the growing desire to communicate will become in the end a motor for completion.

III. In your working conditions avoid everyday mediocrity. Semi-relaxation, to a background of insipid sounds, is degrading. On the other hand, accompaniment by an etude or a cacophony of voices can become as significant for work as the perceptible silence of the night. If the latter sharpens the inner ear, the former acts as a touchstone for a diction ample enough to bury even the most wayward sounds.

IV. Avoid haphazard writing materials. A pedantic adherence to certain papers, pens, inks is beneficial. No luxury, but an abundance of these utensils is indispensable.

V. Let no thought pass incognito, and keep your notebook as strictly as the authorities keep their register of aliens.

VI. Keep your pen aloof from inspiration, which it will then attract with magnetic power. The more circumspectly you delay writing down an idea, the more maturely developed it will be on surrendering itself. Speech conquers thought, but writing commands it.

VII. Never stop writing because you have run out of ideas. Literary honour requires that one break off only at an appointed moment (a mealtime, a meeting) or at the end of the work.

VIII. Fill the lacunae of inspiration by tidily copying out what is already written. Intuition will awaken in the process.

IX. Nulla dies sine linea [“no day without a line” (Apelles ex Pliny)] — but there may well be weeks.

X. Consider no work perfect over which you have not once sat from evening to broad daylight.

XI. Do not write the conclusion of a work in your familiar study. You would not find the necessary courage there.

XII. Stages of composition: idea — style — writing. The value of the fair copy is that in producing it you confine attention to calligraphy. The idea kills inspiration, style fetters the idea, writing pays off style.

XIII. The work is the death mask of its conception.

From “One-Way Street”, Reflections: Essays, Aphorisms, Autobiographical Writings, ed. Peter Demetz, trans. Edmund Jephcott, New York: Schocken, 1978, pp. 80-81.

Social media algorithms promote prejudice and inequality, says Harvard mathematician (BBC Brasil)

For Cathy O’Neil, behind algorithms’ apparent impartiality lie murky criteria that aggravate injustice. GETTY IMAGES

They are everywhere. In the forms we fill out for job openings. In the risk analyses we undergo in contracts with banks and insurers. In the services we request through our smartphones. In the personalized ads and news that flood our social networks. And they are deepening the gulf of social inequality and putting democracies at risk.

It is decidedly not with enthusiasm that the American Cathy O’Neil views the algorithm revolution: systems capable of organizing the ever more staggering quantity of information available on the internet, known as Big Data.

A mathematician trained at Harvard and the Massachusetts Institute of Technology (MIT), two of the world’s most prestigious universities, she abandoned a successful career in finance and tech startups in 2012 to study the subject in depth.

Four years later, she published the book Weapons of Math Destruction, a play on the expression "weapons of mass destruction," and became one of the most respected voices in the United States on the side effects of the Big Data economy.

The book is full of examples of current mathematical models that rank the potential of human beings as students, workers, criminals, voters, and consumers. According to the author, behind these systems’ apparent impartiality lie murky criteria that aggravate injustice.

That is the case with auto insurance in the United States. Drivers who had never received a single ticket, but who had poor credit because they lived in poor neighborhoods, paid considerably more than drivers with good credit who had already been convicted of drunk driving. "For the insurer, it’s a win-win. A good driver with poor credit represents low risk and a very high return," she notes.

The main excerpts of the interview follow below:

BBC Brasil – For centuries, researchers have analyzed data to understand behavioral patterns and predict events. What is new about Big Data?

Cathy O’Neil – What sets Big Data apart is the quantity of data available. There is a gigantic mountain of data that correlate and can be mined to produce so-called "incidental information." It is incidental in the sense that a given piece of information is not provided directly; it is indirect. That is why people who analyze Twitter data can find out which politician I would vote for. Or discover that I am gay just by analyzing the posts I like on Facebook, even if I never say I am gay.

Automated workplace: "This idea that robots will replace human labor is very fatalistic. We have to react and show that this is a political battle," says the author. GETTY IMAGES

The point is that this process is cumulative. Now that it is possible to discover a person’s sexual orientation from their behavior on social media, that will not be "unlearned." So one of the things that worries me most is that these technologies will only get better over time. Even if the information comes to be limited (which I do not think will happen), that accumulated knowledge will not be lost.

BBC Brasil – Your book’s main warning is that algorithms are not neutral, objective tools. On the contrary: they are biased by the worldviews of their programmers and, in general, reinforce prejudice and harm the poorest. Is the dream that the internet could make the world a better place over?

O’Neil – It is true that the internet has made the world a better place in some contexts. But if we weigh the pros and cons, is the balance positive? That is hard to say; it depends on who answers. There are clearly many problems. But many of the examples in my book, it is important to stress, have nothing to do with the internet. Police arrests, or the personality assessments applied to teachers, are not strictly about the internet. There is no way to avoid them, even for people who stay off the internet. But they were fed by Big Data technology.

Take personality tests in job interviews. People used to apply for a job by going to the particular store that needed an employee. Today, everyone applies online. That is what gives rise to personality tests: there are so many people applying for openings that some filter becomes necessary.

BBC Brasil – What is the future of work under algorithms?

O’Neil – Personality tests and résumé-screening programs are some examples of how algorithms are affecting the world of work. Not to mention the algorithms that watch people while they work, as happens with teachers and truck drivers. Surveillance is advancing. If things keep going the way they are, it will turn us into robots.

Reproduction of a Facebook ad used to influence the U.S. elections: "personalized, customized ads should not be allowed," the author argues

But I do not want to treat it as inevitable that algorithms will turn people into robots, or that robots will replace human labor. I refuse to accept that. It is something we can decide will not happen; it is a political decision. This idea that robots will replace human labor is very fatalistic. We have to react and show that this is a political battle. The problem is that we are so intimidated by the advance of these technologies that we feel there is no way to fight back.

BBC Brasil – And what about technology companies like Uber? Some scholars use the term "gig economy" for the organization of work by companies that run on algorithms.

O’Neil – That is a great example of how we have handed power to these gig-economy companies, as if it were an inevitable process. They are certainly doing very well at circumventing labor laws, but that does not mean they should be allowed to act that way. These companies should pay better wages and guarantee better working conditions.

However, the movements that represent workers have not yet come to grips with the changes under way. But this is not essentially an algorithmic question. What we should be asking is: how are these people being treated? And if they are not being treated well, we should pass laws to guarantee that they are.

I am not saying algorithms have nothing to do with it; they do. They are a device these companies use to claim they cannot be considered these workers’ "bosses." Uber, for example, says its drivers are independent contractors and the algorithm is the boss. That is a great example of how we still have not worked out what "accountability" means in the world of algorithms. It is a question I have been working on for some time: which people will be held accountable for algorithms’ mistakes?

BBC Brasil – In the book, you argue that it is possible to create algorithms for good, and that the main challenge is guaranteeing transparency. Yet the secret of many companies’ success is precisely keeping their algorithms’ workings secret. How do you resolve the contradiction?

O’Neil – I do not think transparency is necessary for an algorithm to be good. What I need to know is whether it works well. I need indicators that it works well, but that does not mean I need to see the algorithm’s source code. The indicators can be of another kind; it is more a question of auditing than of opening the code.

The best way to resolve this is to have algorithms audited by third parties. It is not advisable to trust the very companies that created them. It would need to be a third party, with legitimacy, to determine whether they are operating fairly, based on defined criteria of fairness, and within the law.

For Cathy O’Neil, political polarization and fake news will only stop if "we shut down Facebook." DIVULGAÇÃO

BBC Brasil – You recently wrote an op-ed for the New York Times arguing that the academic community should take a larger part in this discussion. Could universities be the third party you are talking about?

O’Neil – Yes, certainly. I argue that universities should be the space for thinking about how to build trust, and about what information to require in order to determine whether algorithms are working.

BBC Brasil – When Edward Snowden revealed that the American government was spying on people’s lives through the internet, many people were not surprised. Do people seem willing to give up their privacy in exchange for the efficiency of online life?

O’Neil – I think we are only now realizing the true costs of that trade. Ten years late, we are realizing that free internet services are not free at all, because we hand over our personal data. Some argue that there is a consensual exchange of data for services, but no one makes that exchange in a truly conscious way; we do it without paying much attention. Besides, it is never clear to us what we are actually losing.

But it is not the NSA (National Security Agency) spying on us that is teaching us the costs of the trade. It has more to do with the jobs we get or fail to get. Or with the insurance benefits and credit cards we obtain or fail to obtain. I just wish all of this were much clearer.

At the individual level, even today, ten years on, people do not realize what is happening. But as a society, we are beginning to understand that we were cheated in that exchange. And it will take time to figure out how to change the terms of the deal.

The Uber app: "Uber, for example, says its drivers are independent contractors and the algorithm is the boss. That is a great example of how we still have not worked out what 'accountability' means in the world of algorithms," says O’Neil. EPA

BBC Brasil – The last chapter of your book discusses Donald Trump’s electoral victory and how opinion polls and social media influenced the race for the White House. Next year, Brazil’s elections are expected to be the most turbulent in three decades. What advice would you give Brazilians?

O’Neil – My God, that is very hard! It is happening everywhere in the world. And I do not know whether it will stop, unless Facebook is shut down (which, by the way, I suggest we do). Now, seriously: political campaigns on the internet should be allowed, but personalized, customized ads should not; everyone should receive the same ads. I know that is not yet a realistic proposal, but I think we should think big, because this problem is big. And I cannot think of another way to solve it.

Of course, that would be one element of a broader set of measures, because nothing will stop foolish people from believing what they want to believe, and from posting about it. In other words, it is not always an algorithm problem; sometimes it is a people problem. The fake news phenomenon is an example. Algorithms make the situation worse by personalizing ads and amplifying reach, but even if Facebook’s algorithm did not exist and political ads were banned from the internet, there would still be fools spreading fake news that would end up going viral on social media. And I do not know what to do about that, short of shutting down the social networks.

I have three children; they are 17, 15, and 9. They do not use social media because they think it is silly, and they do not believe anything they see there. In fact, they no longer believe anything at all, which is not good either. But the upside is that they are learning to check information on their own. So they are much more conscious consumers than my generation. I am 45, and my generation is the worst. The things I saw people my age sharing after Trump’s election were ridiculous: people posting schemes for installing Hillary Clinton as president even though they knew Trump had won. It was ridiculous. The hope is to have a generation of smarter people.