Tag archive: Linguistics

Generation Amazing!!! How We’re Draining Language of Its Power (Literary Hub)

lithub.com

Emily McCrary-Ruiz-Esparza on the “Maxim of Extravagance”

By Emily McCrary-Ruiz-Esparza

September 27, 2022


I noticed it recently when I scheduled my dog for a veterinarian’s appointment. The person who answered the phone was friendly enough and greeted me warmly, and then I made my request.

I’d like to make an appointment for my dog, I said. Wonderful, said the scheduler. June McCrary. Excellent. She needs an anal gland expression. Fantastic!

I was surprised anyone could be so over the moon to empty my chihuahua’s anal glands—if you google the procedure I’m sure you will be as well—but in a way, grateful too.

When I shared this story with a friend, she told me about a conversation she overheard between two parents at the park. What are your children’s names? one of them said as they watched a pair of boys fight each other for one of those cold metal animals that bobs back and forth. The other responded but my friend didn’t catch the answer. The conversation went on and one side sounded something like this: Really? Amazing. That’s so beautiful. Just beautiful. How did you choose names like that?

Their names: Matthew and David. Fine names. But when you ooze words like amazing and beautiful, I imagine we’re dealing with something like Balthazaar and Tiberius.

We reach for over-the-top words for just about anything. These amazings and wonderfuls and incredibles and fantastics, we throw them around as we once did OKs and thank yous and I can help with thats.

Surreal is another favorite word since the spring of 2020. During the first quarantine, driving through the city in the only car on the road really did feel surreal, so did seeing every business closed, like maybe we were living in a Saramago novel. A grocery store full of masked shoppers circling each other at a wary distance of six feet wasn’t exactly surreal, but it was strange enough, so we used it there too.

Eventually we ran out of places to put the word, and by then we were tired, so driving on the road with other cars became surreal, seeing other people standing close to each other in the grocery store was surreal, not having to wear a mask was surreal. It became a way to describe change, or anything out of the ordinary.

What is it that makes us talk this way? Why is it that, to express a modicum of emotion, we reach for words like fantastic, incredible, unbelievable, and unreal, words meant to convey a certain level of magnitude but that no longer carry their original weight?

Martin Hilpert, who teaches linguistics at the Université de Neuchâtel in Switzerland, told me this is nothing new. “Words with evaluative meanings lose potency as speakers apply them to more and more situations. Toilet paper that is especially soft can be ‘fantastic,’ a train delayed by ten minutes can be ‘a disaster.’”

This occurs in a sort of cycle, which Martin Haspelmath, a comparative linguist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, describes in a handful of steps.

It happens like this: To attract attention, we submit to the “maxim of extravagance.” You really want people to see the taxidermied pig you just bought, so you tell your friend, “Man, this thing is incredible. It’s wearing lederhosen and everything.” Your friend goes to see the pig and he too is surprised by the thing. He starts telling his friends, “That thing is incredible.” This is called “conformity.” Word gets around the neighborhood and then the whole block is talking about the incredible taxidermied pig. This is called “frequency.” You’re out for a walk one day, and you flag down a Door Dasher on a bicycle. “Have you seen the—” “The incredible taxidermied pig? Yeah man, whatever.” This is called “predictability.”

Predictability is useful when we want to fit in with the crowd, but it’s not useful if we want to attract attention, which you need at this point, because you’ve started charging admission to see the pig. Now you need to innovate, and you’re back to the maxim of extravagance again, so the pig becomes unbelievable.

A pop-linguistic term for this is “semantic bleaching,” like staining all the color out of our words, and it happens with overuse. Another way to describe it is supply and demand. When we use a word too much and there are too many excellents and beautifuls floating around, each becomes less valuable.

Bleaching has a circular relationship with hyperbole. The less potent our words are, the more we have to reach for particularly emotive ones to say what we want to say, and we climb a crowded ladder to a place where all words are wispy and white and no one is really saying anything at all. That’s how anal gland expressions become fantastic and ordinary names like David and Matthew become amazing.

Writers and thinkers have many times over made the case that stale language is both a symptom and cause of the deterioration of critical thought. George Orwell, famously, for one. He writes in “Politics and the English Language” that a speaker who uses tired language has “gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself.”

There is a certain point when turns of phrase are so out of fashion they become fresh again. Orwell’s dying metaphors of the 1940s were take up the cudgel for and ring the changes on, which would feel interesting now. Ours are full-throated and deep dive and unpack and dig in and at the end of the day.

I contacted several academics for the writing of this essay and asked them whether the new abundance of communication accelerates the exhaustion of words. They insisted that there isn’t more communication going on now than in the past; it’s just more visible.

I don’t believe this is true. The overwhelming quantity of means we have for talking to each other, and the fact that we’re using them, tells me there is more communication. There are some friends I talk to daily because we share a text thread. I wouldn’t be calling all five of them every day otherwise. I can watch two people berate each other in the comments section of a Washington Post article about soup, two people that, thirty years ago, would never get the chance to come to blows over curry.

Language is adapted and spread through exposure, so of course change is accelerating. In the same way clothes fall in and out of fashion at shorter intervals now, because of social media and all our instant global connectedness, so do our words.

The fields of linguistics, anthropology, and English are full of hyperbole stans who go to great lengths to make the case for its value and importance. They call it “the master trope,” “the trope of tropes,” “a generator of thought and meaning,” “a tool of philosophical and religious inquiry,” “an act of becoming,” and “a propelling toward transcendence from an eminent exigency.”

In a paper titled “Recovering Hyperbole: Rethinking the Limits of Rhetoric for an Age of Excess,” the scholar Joshua R. Ritter argues the prescience of hyperbole. For Ritter, hyperbole reflects an innate desire for understanding. He calls it “one of the most effective ways of trying to express the often confounding and inexpressible positions that characterize the litigious discussions of impossibility.”

Ritter also cites Saint Anselm of Canterbury, who believed that the way humans describe God is the archetypal example of hyperbole—it’s everything that cannot be understood, but we do our best to understand anyway.

“It dramatically holds the real and the ideal in irresolvable tension and reveals the impossible distance between the ineptitude and the infinite multiplicity of language to describe what is indescribable,” Ritter writes.

We may be often confounded, but we are hardly ever without something to say. The internet, the great proliferator of communication, incentivizes no one to be speechless. If you’re not talking, you’re not there, so the more frequently you speak, the more real you are. Stop talking and you disappear.

If we’re talking this much, it might be that we’re desperate to exist. If we’re slinging around words like amazing and incredible and surreal, it might be that we’re looking for these things. If we are Generation Hyperbole, it is because we are so desperate to feel something good and tremendous—we’re constantly reaching for something beyond. We want to feel awed, we want to be in touch with something dreamlike, we want to see things that are really beautiful, we’ve only forgotten where to find them. But we’re looking for meaning, you can see it in our language. Even Orwell believed “that the decadence of our language is probably curable.”

Global connectedness means we’re witness to terrible things on a terrible scale, and we share an inadequate language to understand it. We need to feel, even if that feeling is pain, and we need to know that we’re not alone in the feeling. If tragedy is now commonplace, why can’t truly excellent things, amazing things, fantastic things too become commonplace?

Ritter writes:

Once a perplexing and sometimes disturbing disorienting perception occurs, this vertige de l’hyperbole as Baudelaire refers to it, one is ready for a perspectival reorientation—a paradoxical movement leading toward insight and partial apprehension. By generating confusion through excess, hyperbole alters and creates meaning.

Thousands of Chimp Vocal Recordings Reveal a Hidden Language We Never Knew About (Science Alert)

sciencealert.com

PETER DOCKRILL

24 MAY 2022


A common chimpanzee vocalizing. (Andyworks/Getty Images)

We humans like to think our mastery of language sets us apart from the communication abilities of other animals, but an eye-opening new analysis of chimpanzees might force a rethink on just how unique our powers of speech really are.

In a new study, researchers analyzed almost 5,000 recordings of wild adult chimpanzee calls in Taï National Park in Côte d’Ivoire (aka Ivory Coast).

When they examined the structure of the calls captured on the recordings, they were surprised to find 390 unique vocal sequences – much like different kinds of sentences, assembled from combinations of different call types.

Compared to the virtually endless possibilities of human sentence construction, 390 distinct sequences might not sound overly verbose.

Yet, until now, nobody really knew that non-human primates had so many different things to say to each other – because we’ve never quantified their communication capabilities to such a thorough extent.

“Our findings highlight a vocal communication system in chimpanzees that is much more complex and structured than previously thought,” says animal researcher Tatiana Bortolato from the Max Planck Institute for Evolutionary Anthropology in Germany.

In the study, the researchers wanted to measure how chimpanzees combine single-use calls into sequences, order those calls within the sequences, and recombine independent sequences into even longer sequences.

While call combinations of chimpanzees have been studied before, until now the sequences that make up their whole vocal repertoire had never been subjected to a broad quantitative analysis.

To rectify this, the team captured 900 hours of vocal recordings made by 46 wild mature western chimpanzees (Pan troglodytes verus), belonging to three different chimp communities in Taï National Park.

In analyzing the vocalizations, the researchers identified how vocal calls could be uttered singularly, combined in two-unit sequences (bigrams), or three-unit sequences (trigrams). They also mapped networks of how these utterances were combined, as well as examining how different kinds of frequent vocalizations were ordered and recombined (for example, bigrams within trigrams).
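
The bookkeeping behind such an analysis is simple to sketch. Below is a minimal Python illustration, using invented call sequences rather than the study’s transcriptions, of how single units, bigrams, and trigrams can be tallied, and how attested bigrams can be found nested inside trigrams:

from collections import Counter
from itertools import islice

# Invented call-type labels standing in for the study's 12 call types;
# each recording is transcribed as an ordered sequence of call units.
recordings = [
    ["grunt"],                   # single unit (e.g., at food)
    ["pant", "grunt"],           # bigram: the submissive "panted grunt"
    ["pant", "hoo"],             # bigram: the inter-party "panted hoo"
    ["pant", "hoo", "scream"],   # trigram containing the "pant hoo" bigram
    ["hoo"],                     # single unit (e.g., to threats)
]

def ngrams(seq, n):
    # Consecutive n-unit subsequences of a call sequence.
    return list(zip(*(islice(seq, i, None) for i in range(n))))

singles = Counter(rec[0] for rec in recordings if len(rec) == 1)
bigrams = Counter(b for rec in recordings for b in ngrams(rec, 2))
trigrams = Counter(t for rec in recordings for t in ngrams(rec, 3))

# Distinct sequence types observed, the quantity the study tallies (390 in the paper).
print(len({tuple(rec) for rec in recordings}), "distinct sequences")
# Recombination: which attested bigrams reappear inside trigrams?
print({b for t in trigrams for b in ngrams(list(t), 2) if b in bigrams})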

In total, 12 different call types were identified (including grunts, pants, hoos, barks, screams, and whimpers, among others), which appeared to mean different things, depending on how they were used, but also upon the context in which the communication took place.

“Single grunts, for example, are predominantly emitted at food, whereas panted grunts are predominantly emitted as a submissive greeting vocalization,” the researchers explain in their paper, led by co-first authors Cédric Girard-Buttoz and Emiliano Zaccarella.

“Single hoos are emitted to threats, but panted hoos are used in inter-party communication.”

In total, the researchers found these different kinds of calls could be combined in various ways to make up 390 different kinds of sequences, which they say may actually be an underestimation, given new vocalization sequences were still being found as the researchers hit their limit of field recordings.

Even so, the data so far suggest chimpanzee communication is much more complex than we realized, which has implications for the sophistication of meanings generated in their utterances (as well as giving new clues into the origins of human language).

“The chimpanzee vocal system, consisting of 12 call types used flexibly as single units, or within bigrams, trigrams or longer sequences, offers the potential to encode hundreds of different meanings,” the researchers write.

“Whilst this possibility is substantially less than the infinite number of different meanings that can be generated by human language, it nonetheless offers a structure that goes beyond that traditionally considered likely in primate systems.”

The next step, the team says, will be to record even larger datasets of chimpanzee calls, to try to assess just how the diversity and ordering of uttered sequences relates to versatile meaning generation, which wasn’t considered in this study.

There’s lots more to be said, in other words – by both chimpanzees and scientists alike.

“This is the first study in a larger project,” explains senior author Catherine Crockford, a director of research at the Institute for Cognitive Science at CNRS, in France.

“By studying the rich complexity of the vocal sequences of wild chimpanzees, a socially complex species like humans, we expect to bring fresh insight into understanding where we come from and how our unique language evolved.”

The findings are reported in Communications Biology.

How genetics is reconstructing the fascinating journey of the first humans to the Americas (BBC)

BBC News Brasil

Original article

Lucía Blasco, BBC News Mundo – January 20, 2022

Montage of a Homo sapiens skull surrounded by illustrations of scientists.

The Americas. The last continent to be settled by human beings. A part of planet Earth unknown to Homo sapiens for thousands of years.

Until a shift in climate, among many other things, allowed the restless primate to set foot in that region.

But how were the Americas peopled?

“It is a vital question that we still haven’t solved and that we keep asking because it pulses in our human curiosity,” Lawrence C. Brody, director of the Division of Genomics and Society at the National Human Genome Research Institute (NHGRI) in the United States, tells BBC News Mundo, the BBC’s Spanish-language news service.

“Anatomically modern humans left Africa at least 100,000 years ago and began to spread. And at some point after 40,000 years ago, humans developed the technology needed to start exploring further north,” Víctor Moreno, a postdoctoral researcher at the Centre for GeoGenetics at the University of Copenhagen, Denmark, adds to BBC News Mundo.

There are several theories, but the current mainstream view holds that there was a single migration, first into Asia, then into Australasia and, later, into Europe.

The Americas were still very far away and, above all, quite isolated.

Infographic: world map with the dates at which Homo sapiens left Africa and spread across the world.

DNA studies have been fundamental in mapping these ancestral migrations.

“Our DNA contains an enormous archive of our ancestors’ history. A single genome can represent the history of many different people from an entire population,” the American anthropologist and geneticist Jennifer Raff, a specialist in the initial peopling of the American continent, told BBC News Mundo.

To learn about our ancestors’ family tree, scientists sequence the human DNA that can still be found in very old fossils and skeletons, which is why it is called “ancient DNA.”

Ancient DNA

Modern sequencing technologies have made it possible to access fragments of DNA without having to sequence an entire genome.

“Anthropologists draw broad conclusions from very, very small samples of ancient DNA, such as teeth or bone fragments and, more recently, clay and sand. Algorithms help us interpret the data and tell whether that DNA is contaminated,” explained Brody, the human geneticist.

Infographic: where ancient DNA can be found.

This has given them some answers about the peopling of the Americas.

“For example, we discovered that several ancestral populations contributed to the ancestry of Indigenous Americans, not just one as previously believed,” says Raff.

“Thanks to this, we now know that the scenario of the peopling of the Americas was much more complex than previously thought, but also much more interesting.”

To embark on this fascinating journey, we must begin by placing ourselves roughly 25,000 years back on the timeline.

The last ice age

We are in the Last Glacial Maximum (LGM), the last known ice age in Earth’s history.

“The world map was very different from today’s. Most of North America was covered by a thick sheet of ice that made the region uninhabitable,” says Acuña-Alonzo, a genetic anthropologist at Mexico’s National School of Anthropology and History (ENAH).

Animated GIF: how an ice corridor formed between 19,000 and 12,500 BC.

“Conditions were quite harsh. Many places were inaccessible and covered in ice. It was very cold, humans had to hunt and gather… and they never knew when the next mammoth might turn up!” adds the researcher Víctor Moreno.

As the glacial period advanced, global sea levels dropped, as water was locked away in the ice sheets covering the continents.

“All the water was sequestered in the glaciers,” Moreno explains.

Because of this, two great ice sheets covered almost all of Canada and made it practically impossible to travel south.

But at the end of that glacial period, around 12,000 years ago, the ice sheets began to melt and some glacial refugia appeared.

“In those places conditions were not as terrible, and they were still productive in terms of resources, so humans could feed themselves,” says Moreno.

One of those refugia was Beringia: a land bridge that emerged from the frozen sea, across which, most researchers believe, the first human populations entered the Americas.

It stretched from what we know today as Alaska to Eurasia, and it was a dry territory, full of vegetation and wildlife.

Map: what the Bering land bridge looked like.

Today it is submerged, which is why no archaeological traces can be found there, but there is a consensus that the ancestors of Indigenous Americans left Siberia for Alaska across that stretch of land and remained isolated in Beringia for some time.

“As the dire conditions of the Last Glacial Maximum improved, certain routes opened up, along the coast and through the interior, that would have allowed entry into the Americas from the Bering land bridge region,” says Víctor Moreno.

But doubts remain about the route they followed into the Americas, about how many groups (or which groups) made the journey, and about when it happened.

When did they reach the Americas?

There are two theories about when the first human beings arrived in the Americas.

The two main schools are the early settlement theory (which holds that it happened around 25,000 to 30,000 years ago) and the late settlement theory (according to which it happened around 12,000 to 14,000 years ago).

For a long time the settlement was thought to have been late. This hypothesis is also known as the “classical theory of the peopling of the Americas” or the “Clovis model.”

The Clovis people, considered in the mid-20th century to be the oldest Indigenous culture in the Americas, used a highly refined stone-knapping technique to hunt the Ice Age megafauna, with tools we know today as “Clovis points.”

Photograph of a Clovis point.

Source: Getty

For decades, these Clovis points turned up at archaeological sites roughly 13,000 years old, scattered across various parts of North America. That is why the Clovis were thought to be the first settlers of the Americas.

But in recent years, several genetic studies have refuted that idea.

Although there is no consensus, today more scientists and archaeologists argue that the occupation of the Americas happened much earlier than was once believed.

“Most scientists and archaeologists support the early settlement theory rather than the late one, but researchers cannot agree on a specific date or on which archaeological sites are ‘authentic,’” Jennifer Raff tells BBC News Mundo.

Genetic analysis of contemporary and ancient populations has been fundamental in giving weight to the early settlement theory.

However, some researchers, mainly archaeologists, continue to defend the late settlement theory.

“Some archaeologists are skeptical of the earliest sites found, above all because they do not accept the dating methods, the associations with human activity, and the stratigraphy (the analysis of archaeological strata) that have been reported,” explains Acuña-Alonzo.

“The truth is that demonstrating the antiquity of a human presence is quite complicated and difficult, so only very well excavated and well documented sites will gradually change those positions,” the researcher adds.

Also still open is the debate over how the first humans entered the continent after leaving the Bering land bridge, or Beringia, but scientists are working mainly with two possibilities: a maritime route or a land route.

The maritime route theory

The maritime route hypothesis is tied to the early settlement theory and has been supported by relatively recent archaeological, linguistic, and genetic studies.

According to this dominant theory, the first humans would have entered the Americas by hugging the Pacific coast, since in that frigid era “sea levels were lower and the coastlines much wider. They could not have crossed great distances or ocean currents that did not favor them,” explains the anthropologist Acuña-Alonzo.

We do not know the exact date; it may have been around 17,000 years ago, or even 20,000 or 30,000 years ago.

The land route theory

Again there is no consensus, though fewer scientists now hold that the crossing was made overland around 13,000 years ago, in line with the late settlement theory.

“Researchers who defend this model believe the first humans to reach the Americas did so well after the Last Glacial Maximum, traveling through an ice-free corridor that opened up in the Canadian Rockies as the glaciers retreated,” explains Raff.

According to this theory, humans would have crossed this “passage” between the glaciers through the interior of North America and later spread into South America.

But the study of ancient and contemporary genomes, the discovery of pre-Clovis archaeological sites, and some environmental studies cast doubt on this theory, which is why more scientists now argue that the crossing was made by sea.

These footprints belong to children and adolescents who lived at least 21,000 years ago. Source: Bournemouth University, UK

One of the most recent finds came in September 2021: human footprints more than 20,000 years old, discovered at a lake in New Mexico, in the United States.

These footprints suggest that the first humans reached the Americas at the height of the last Ice Age, and that there may have been large migrations we still know little about.

Admixture

We barely know what the first human beings to reach the Americas looked like.

To try to find out who they were, we turn once again to genetics.

Thanks to it, we know that the ancestors of the first Americans split from their “Asian cousins” when they entered the Bering land bridge, and that they mixed far more than had been assumed, especially over the last 10,000 years.

Geneticists believe there was admixture between two ancestral human populations: the Ancient Paleo-Siberians and the ancient East Asians, according to Acuña-Alonzo.

Infographic: the admixture that took place in Beringia.

Raff says one of these groups inhabited what is now Southeast Asia. That group is believed to have contributed most of the ancestry of the first humans to people the American continent: around 60%, according to Víctor Moreno.

The other ancestral branch arose around 39,000 years ago in what is now northeastern Siberia.

These two groups converged around 25,000 to 20,000 years ago.

“We don’t know exactly how it happened, but it happened during a migration out of Siberia,” says Raff.

“We have very little idea. It most likely occurred somewhere in Siberia, but how close to the Bering land bridge? How far north or how far south? That is still being debated, because the genetic, archaeological, and anthropological evidence we have is scarce,” says Víctor Moreno.

What genetics does explain is what happened next: there was a series of complex demographic events, and the population once again split in two.

One branch, the Ancient Beringians (so named for their possible connection to Beringia), left no known descendants. The other, the Ancient Native Americans, did.

Scientists reached these conclusions after finding a very strong genetic affinity between ancestral Siberian groups and East Eurasian populations.

A researcher analyzing footprints more than 20,000 years old, found on the shores of a lake in New Mexico. Source: Bournemouth University, UK

“We know, for example, that Indigenous Americans are genetically related to populations of Northeast Asia through a series of genes that allowed their ancestors to conserve energy under very harsh climatic conditions,” the geneticist adds.

Despite these discoveries, researchers are still trying to determine how many ancient and present-day peoples in the Americas are connected to the genetic lineage of those Ancient Native Americans.

“We have to accept that there are many facets of this question for which we still have no answer,” says Raff.

In fact, the latest discovery in New Mexico leaves another big unknown hanging: the possibility that the first populations died out without leaving descendants and were “replaced” by other settlers once the ice corridor formed.

But it is not yet known whether that was the case, or how it would have happened.

“We have no choice but to embrace the uncertainty. But at the same time, it is exciting to know that we are getting ever closer to reconstructing that first journey to the Americas.”

Meanwhile, scientists hope that our genetic heritage will yield more answers about the last great expansion of Homo sapiens across the planet.


Credits

Research and reporting: Lucía Blasco
Design and infographics: Cecilia Tombesi
Base map: Ron Blakey, NAU – NSF
Programming: Zoë Thomas, Adam Allen and Marcos Gurgel
Editing: Carol Olona and Ricardo Acampora
With the collaboration of Hilda Badenes and Sally Morales
Project led by Carol Olona

‘Mind blowing’: Grizzly bear DNA maps onto Indigenous language families (Science)

sciencemag.org

By Rachel Fritts, Aug. 13, 2021, 1:25 PM


Grizzly bears in the central coastal region of British Columbia. Michelle Valberg

The bears and Indigenous humans of coastal British Columbia have more in common than meets the eye. The two have lived side by side for millennia in this densely forested region on the west coast of Canada. But it’s the DNA that really stands out: A new analysis has found that the grizzlies here form three distinct genetic groups, and these groups align closely with the region’s three Indigenous language families.

It’s a “mind-blowing” finding that shows how cultural and biological diversity in the region are intertwined, says Jesse Popp, an Indigenous environmental scientist at the University of Guelph who was not involved with the work.

The research began purely as a genetics study. Grizzlies had recently begun to colonize islands along the coast of British Columbia, and scientists and Indigenous wildlife managers wanted to know why they were making this unprecedented move. Luckily, in 2011, the region’s five First Nations set up a collaborative “bear working group” to answer exactly that sort of question. Lauren Henson, a conservation scientist with the Raincoast Conservation Foundation, partnered with working group members from the Nuxalk, Haíɫzaqv, Kitasoo/Xai’xais, Gitga’at, and Wuikinuxv Nations to figure out which mainland grizzlies were most genetically similar to the island ones.

Henson used bear hair samples that researchers involved with the working group had collected over the course of 11 years. To get the samples, the team went to remote areas of British Columbia—some of them only accessible via helicopter—and piled up leaves and sticks, covering them with a concoction of dogfish oil or a fish-based slurry. It “smells really, really terrible to us, but is intriguing to bears,” Henson says.

The researchers then surrounded this tempting pile with a square of barbed wire, which harmlessly snagged tufts of fur—and the DNA it contains—when bears came to check out the smell. In all, the group collected samples from 147 bears over about 23,500 square kilometers—an area roughly the size of Vermont.

Henson and her colleagues then used microsatellite DNA markers—regions of the genome that change frequently compared with other sections—to determine how related the bears were to each other. The scientists found three distinct genetic groups of bears living in the study area, they report this month in Ecology and Society.
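
As a rough illustration of what microsatellite data supports, here is a minimal Python sketch of one simple relatedness measure, the average proportion of alleles shared per locus; the genotypes are invented, and the study’s actual clustering pipeline is more sophisticated:

from itertools import combinations

# Invented genotypes: bear -> one (allele, allele) pair per microsatellite locus.
bears = {
    "bear_A": [(120, 122), (88, 90), (201, 201)],
    "bear_B": [(120, 122), (88, 92), (201, 203)],
    "bear_C": [(130, 134), (94, 96), (199, 205)],
}

def allele_sharing(g1, g2):
    # Average proportion of alleles shared per locus: 1.0 = identical, 0.0 = none.
    shared = 0.0
    for (a1, a2), (b1, b2) in zip(g1, g2):
        shared += len({a1, a2} & {b1, b2}) / 2
    return shared / len(g1)

for x, y in combinations(bears, 2):
    print(x, y, round(allele_sharing(bears[x], bears[y]), 2))
# bear_A and bear_B come out far more similar to each other than either is
# to bear_C; scaled up to 147 bears and many loci, clustering such a
# similarity matrix is what yields distinct genetic groups.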

DNA analysis reveals three distinct genetic groups of grizzly bears, which align with the boundaries between Indigenous language families (gray lines). L. H. Henson et al., Ecology and Society, 26(3): 7, 2021

But they could not find any obvious physical barriers keeping them apart. The boundaries between genetic groupings didn’t correspond to the location of waterways or especially rugged or snow-covered landscapes. It’s possible, Henson says, that the bears remain genetically distinct not because they can’t travel, but because the region is so resource-rich that they haven’t needed to do so to meet their needs.

One thing did correlate with the bears’ distribution, however: Indigenous language families. “We were looking at language maps and noticed the striking visual similarity,” Henson says. When the researchers analyzed the genetic interrelatedness of bears both within and outside the area’s three language families, they found that grizzly bears living within a language family’s boundaries were much more genetically similar to one another than to bears living outside them.

The findings don’t surprise Jenn Walkus, a Wuikinuxv scientist who co-authored the study. Growing up in a remote community called Rivers Inlet, she saw firsthand that humans and bears have a lot of the same needs in terms of space, food, and other resources. It would make sense, she says, for them to settle in the same areas—ones with a steady supply of salmon, for instance. This historic interrelatedness means Canada should manage key resources with both bears and people in mind, she says. The Wuikinuxv Nation, for example, is looking into reducing its annual salmon harvest to support the bears’ needs, she notes.

Lauren Eckert, a conservation scientist at the University of Victoria who was not involved with the study, agrees that the findings could have important implications for managing the area’s bears. It’s “fascinating” and “really shocking” work, she says. The resources that shaped grizzly bear distribution in the region clearly also shaped humans, Eckert says, “which I think reinforces the idea that local knowledge and localized management are really critical.”

doi:10.1126/science.abl9306

Words Have Lost Their Common Meaning (The Atlantic)

theatlantic.com

John McWhorter, contributing writer at The Atlantic and professor at Columbia University

March 31, 2021


The word racism, among others, has become maddeningly confusing in current usage.

An illustration of quotation marks and the United States split in two.
Adam Maida / The Atlantic

Has American society ever been in less basic agreement on what so many important words actually mean? Terms we use daily mean such different things to different people that communication is often blunted considerably, and sometimes even thwarted entirely. The gap between how the initiated express their ideological beliefs and how everyone else does seems larger than ever.

The word racism has become almost maddeningly confusing in current usage. It tempts a linguist such as me to contravene the dictum that trying to influence the course of language change is futile.

Racism began as a reference to personal prejudice, but in the 1960s was extended via metaphor to society, the idea being that a society riven with disparities according to race was itself a racist one. This convention, implying that something as abstract as a society can be racist, has always felt tricky, best communicated in sociology classes or careful discussions.

To be sure, the idea that disparities between white and Black people are due to injustices against Black people—either racist sentiment or large-scale results of racist neglect—seems as plain as day to some, especially in academia. However, after 50 years, this usage of racism has yet to stop occasioning controversy; witness the outcry when Merriam-Webster recently altered its definition of the word to acknowledge the “systemic” aspect. This controversy endures for two reasons.

First, the idea that all racial disparities are due to injustice may imply that mere cultural differences do not exist. The rarity of the Black oboist may be due simply to Black Americans not having much interest in the oboe—hardly a character flaw or evidence of some inadequacy—as opposed to subtly racist attitudes among music teachers or even the thinness of musical education in public schools. Second, the concept of systemic racism elides or downplays that disparities can also persist because of racism in the past, no longer in operation and thus difficult to “address.”

Two real-world examples of strained usage come to mind. Opponents of the modern filibuster have taken to calling it “racist” because it has been used for racist ends. This implies a kind of contamination, a rather unsophisticated perspective given that this “racist” practice has been readily supported by noted non-racists such as Barack Obama (before he changed his mind on the matter). Similar is the idea that standardized tests are “racist” because Black kids often don’t do as well on them as white kids. If the tests’ content is biased toward knowledge that white kids are more likely to have, that complaint may be justified. Otherwise, factors beyond the tests themselves, such as literacy in the home, whether children are tested throughout childhood, how plugged in their parents are to test-prep opportunities, and subtle attitudes toward school and the printed page, likely explain why some groups might be less prepared to excel at them.

Dictionaries are correct to incorporate the societal usage of racism, because it is now common coin. The lexicographer describes rather than prescribes. However, its enshrinement in dictionaries leaves its unwieldiness intact, just as a pretty map can include a road full of potholes that suddenly becomes one-way at a dangerous curve. Nearly every designation of someone or something as “racist” in modern America raises legitimate questions, and leaves so many legions of people confused or irritated that no one can responsibly dismiss all of this confusion and irritation as mere, well, racism.

To speak English is to know the difference between pairs of words that might as well be the same one: entrance and entry. Awesome and awful are similar. However, one might easily feel less confident about the difference between equality and equity, in the way that today’s crusaders use the word in diversity, equity, and inclusion.

In this usage, equity is not a mere alternate word for equality, but harbors an assumption: that where the races are not represented roughly according to their presence in the population, the reason must be a manifestation of (societal) racism. A teachers’ conference in Washington State last year included a presentation underlining: “If you conclude that outcomes differences by demographic subgroup are a result of anything other than a broken system, that is, by definition, bigotry.” A DEI facilitator specifies that “equity is not an outcome”—in the way equality is—but “a process that begins by acknowledging [people’s] unequal starting place and makes a commitment to correct and address the imbalance.”

Equality is a state, an outcome—but equity, a word that sounds just like it and has a closely related meaning, is a commitment and effort, designed to create equality. That is a nuance of a kind usually encountered in graduate seminars about the precise definitions of concepts such as freedom. It will throw or even turn off those disinclined to attend that closely: Fondness for exegesis will forever be thinly distributed among humans.

Many will thus feel that the society around them has enough “equalness”—i.e., what equity sounds like—such that what they may see as attempts to force more of it via set-aside policies will seem draconian rather than just. The subtle difference between equality and equity will always require flagging, which will only ever be so effective.

The nature of how words change, compounded by the effects of our social-media bubbles, means that many vocal people on the left now use social justice as a stand-in for justice—in the same way we say advance planning instead of planning or 12 midnight instead of midnight—as if the social part were a mere redundant, rhetorical decoration upon the keystone notion of justice. An advocacy group for wellness and nutrition titled one of its messages “In the name of social justice, food security and human dignity,” but within the text refers simply to “justice” and “injustice,” without the social prefix, as if social justice is simply justice incarnate. The World Social Justice Day project includes more tersely named efforts such as “Task Force on Justice” and “Justice for All.” Baked into this is a tacit conflation of social justice with justice conceived more broadly.

However, this usage of the term social justice is typically based on a very particular set of commitments especially influential in this moment: that all white people must view society as founded upon racist discrimination, such that all white people are complicit in white supremacy, requiring the forcing through of equity in suspension of usual standards of qualification or sometimes even logic (math is racist). A view of justice this peculiar, specific, and even revolutionary is an implausible substitute for millennia of discussion about the nature of the good, much less its apotheosis.

What to do? I suggest—albeit with little hope—that the terms social justice and equity be used, or at least heard, as the proposals that they are. Otherwise, Americans are in for decades of non-conversations based on greatly different visions of what justice and equ(al)ity are.

I suspect that the way the term racism is used is too entrenched to yield to anyone’s preferences. However, if I could wave a magic wand, Americans would go back to using racism to refer to personal sentiment, while we would phase out so hopelessly confusing a term as societal racism.

I would replace it with societal disparities, with a slot open afterward for according to race, or according to immigration status, or what have you. Inevitably, the sole term societal disparities would conventionalize as referring to race-related disparities. However, even this would avoid the endless distractions caused by using the same term—racism—for both prejudice and faceless, albeit pernicious, inequities.

My proposals qualify, indeed, as modest. I suspect that certain people will continue to use social justice as if they have figured out a concept that proved elusive from Plato through Kant through Rawls. Equity will continue to be refracted through that impression. Legions will still either struggle to process racism both harbored by persons and instantiated by a society, or just quietly accept the conflation to avoid making waves.

What all of this will mean is a debate about race in which our problem-solving is hindered by the fact that we too often lack a common language for discussing the topic.

John McWhorter is a contributing writer at The Atlantic. He teaches linguistics at Columbia University, hosts the podcast Lexicon Valley, and is the author of the upcoming Nine Nasty Words: English in the Gutter Then, Now and Always.

Language is learned in brain circuits that predate humans (Georgetown University)


GEORGETOWN UNIVERSITY MEDICAL CENTER

WASHINGTON — It has often been claimed that humans learn language using brain components that are specifically dedicated to this purpose. Now, new evidence strongly suggests that language is in fact learned in brain systems that are also used for many other purposes and even pre-existed humans, say researchers in PNAS (Early Edition online Jan. 29).

The research combines results from multiple studies involving a total of 665 participants. It shows that children learn their native language and adults learn foreign languages in evolutionarily ancient brain circuits that also are used for tasks as diverse as remembering a shopping list and learning to drive.

“Our conclusion that language is learned in such ancient general-purpose systems contrasts with the long-standing theory that language depends on innately-specified language modules found only in humans,” says the study’s senior investigator, Michael T. Ullman, PhD, professor of neuroscience at Georgetown University School of Medicine.

“These brain systems are also found in animals – for example, rats use them when they learn to navigate a maze,” says co-author Phillip Hamrick, PhD, of Kent State University. “Whatever changes these systems might have undergone to support language, the fact that they play an important role in this critical human ability is quite remarkable.”

The study has important implications not only for understanding the biology and evolution of language and how it is learned, but also for how language learning can be improved, both for people learning a foreign language and for those with language disorders such as autism, dyslexia, or aphasia (language problems caused by brain damage such as stroke).

The research statistically synthesized findings from 16 studies that examined language learning in two well-studied brain systems: declarative and procedural memory.

The results showed that how good we are at remembering the words of a language correlates with how good we are at learning in declarative memory, which we use to memorize shopping lists or to remember the bus driver’s face or what we ate for dinner last night.

Grammar abilities, which allow us to combine words into sentences according to the rules of a language, showed a different pattern. The grammar abilities of children acquiring their native language correlated most strongly with learning in procedural memory, which we use to learn tasks such as driving, riding a bicycle, or playing a musical instrument. In adults learning a foreign language, however, grammar correlated with declarative memory at earlier stages of language learning, but with procedural memory at later stages.

The correlations were large, and were found consistently across languages (e.g., English, French, Finnish, and Japanese) and tasks (e.g., reading, listening, and speaking tasks), suggesting that the links between language and the brain systems are robust and reliable.
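
For readers curious what “statistically synthesized” can look like in practice, here is a minimal Python sketch of one standard way to pool correlations across studies, the inverse-variance-weighted Fisher r-to-z approach; the values are invented, and the paper’s own meta-analytic model may differ:

import math

# (correlation between a language measure and a memory measure, sample size)
# for four hypothetical studies.
studies = [(0.55, 40), (0.48, 65), (0.62, 30), (0.51, 80)]

def pooled_correlation(studies):
    # Inverse-variance weighted mean of Fisher z-transformed correlations.
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher transform: z = 0.5 * ln((1 + r) / (1 - r))
        w = n - 3           # weight = 1 / Var(z), since Var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the pooled z to r

print(round(pooled_correlation(studies), 3))  # 0.524 for these inputs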

The findings have broad research, educational, and clinical implications, says co-author Jarrad Lum, PhD, of Deakin University in Australia.

“Researchers still know very little about the genetic and biological bases of language learning, and the new findings may lead to advances in these areas,” says Ullman. “We know much more about the genetics and biology of the brain systems than about these same aspects of language learning. Since our results suggest that language learning depends on the brain systems, the genetics, biology, and learning mechanisms of these systems may very well also hold for language.”

For example, though researchers know little about which genes underlie language, numerous genes playing particular roles in the two brain systems have been identified. The findings from this new study suggest that these genes may also play similar roles in language. Along the same lines, the evolution of these brain systems, and how they came to underlie language, should shed light on the evolution of language.

Additionally, the findings may lead to approaches that could improve foreign language learning and language problems in disorders, Ullman says.

For example, various pharmacological agents (e.g., the drug memantine) and behavioral strategies (e.g., spacing out the presentation of information) have been shown to enhance learning or retention of information in the brain systems, he says. These approaches may thus also be used to facilitate language learning, including in disorders such as aphasia, dyslexia, and autism.

“We hope and believe that this study will lead to exciting advances in our understanding of language, and in how both second language learning and language problems can be improved,” Ullman concludes.

What happens to language as populations grow? It simplifies, say researchers (Cornell)


CORNELL UNIVERSITY

ITHACA, N.Y. – Languages have an intriguing paradox. Languages with lots of speakers, such as English and Mandarin, have large vocabularies with relatively simple grammar. Yet the opposite is also true: Languages with fewer speakers have fewer words but complex grammars.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.

Their study was published in the Proceedings of the Royal Society B: Biological Sciences.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.

The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.

But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

Conversely, in a large community, like a big city, one person will talk to only a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
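
A toy simulation makes this logic concrete. The sketch below (not the authors’ model) assumes an innovation is adopted only after a hearer accumulates a threshold number of exposures, with a word needing one exposure and a grammar rule needing three:

import random

def simulate(pop_size, threshold, rounds=2000, seed=1):
    # An innovation is adopted once an agent has heard it `threshold` times.
    random.seed(seed)
    exposures = [0] * pop_size
    adopted = [False] * pop_size
    adopted[0] = True                                # one initial innovator
    for _ in range(rounds):
        a, b = random.sample(range(pop_size), 2)     # one random encounter
        for hearer, speaker in ((a, b), (b, a)):
            if adopted[speaker] and not adopted[hearer]:
                exposures[hearer] += 1
                if exposures[hearer] >= threshold:
                    adopted[hearer] = True
    return sum(adopted) / pop_size                   # fraction who adopted

for size in (20, 500):
    print(size, "word:", simulate(size, threshold=1),
          "grammar:", simulate(size, threshold=3))
# In the village-sized community both innovations saturate; in the large
# one the easy "word" still spreads while the hard "grammar" barely moves,
# because few hearers ever meet an adopter three times.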

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”

His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of University of Warwick, England.

A mysterious 14-year cycle has been controlling our words for centuries (Science Alert)

Some of your favourite science words are making a comeback.

DAVID NIELD
2 DEC 2016

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
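
As an illustration of how such an oscillation can be pulled out of a frequency time series, here is a minimal Python sketch using a Fourier transform on synthetic data; the study’s actual detrending and statistics are more involved:

import numpy as np

# Synthetic word-frequency series, 1700-2008: a 14-year cycle plus noise.
years = np.arange(1700, 2009)
rng = np.random.default_rng(0)
freq = 1.0 + 0.1 * np.sin(2 * np.pi * years / 14) \
           + 0.02 * rng.standard_normal(years.size)

detrended = freq - freq.mean()
spectrum = np.abs(np.fft.rfft(detrended))       # magnitude per frequency bin
cycles_per_year = np.fft.rfftfreq(years.size, d=1.0)

peak = spectrum[1:].argmax() + 1                # skip the zero-frequency bin
print("dominant period:", round(1 / cycles_per_year[peak], 1), "years")  # ~14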

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.

You can try it out for yourself, and search for any word’s popularity over time.

But if there are certain nouns you’re fond of, make the most of them, because they might not be in common use for much longer.

The findings have been published in Palgrave Communications.

Most adults know more than 42,000 words (Science Daily)

Date: August 16, 2016
Source: Frontiers
Summary: Armed with a new list of words and using the power of social media, a new study has found that by the age of 20, a native English-speaking American knows 42,000 dictionary words.

Dictionary. How many words do you know? Credit: © mizar_21984 / Fotolia

How many words do we know? It turns out that even language experts and researchers have a tough time estimating this.

Armed with a new list of words and using the power of social media, a new study published in Frontiers in Psychology has found that by the age of twenty, a native English-speaking American knows 42,000 dictionary words.

“Our research got a huge push when a television station in the Netherlands asked us to organize a nationwide study on vocabulary knowledge,” states Professor Marc Brysbaert of Ghent University in Belgium, leader of the study. “The test we developed was featured on TV and, in the first weekend, over 300,000 Dutch speakers had done it — it really went viral.”

Realising how interested people are in finding out their vocabulary size, the team then made similar tests in English and Spanish. The English test has now been taken by almost one million people. It takes up to four minutes to complete and has been shared widely on Facebook and Twitter, giving the team access to an unprecedented amount of data.

“At the Centre of Reading Research we are investigating what determines the ease with which words are recognized,” explained Professor Brysbaert. The test includes a list of 62,000 words that he and his team have compiled.

He added: “As we made the list ourselves and have not used a commercially available dictionary list with copyright restrictions, it can be made available to everyone, and all researchers can access it.”

The test is simple. You are asked if the word on the screen is, or is not, an existing word in English. In each test, there are 70 words, and 30 letter sequences that look like words but are not actually existing words.

The test will also ask you for some personal information, such as your age, gender, education level, and native language. This has enabled the team to discover that the average twenty-year-old native English-speaking American knows 42,000 dictionary words. As we get older, we learn one new word every two days, which means that by the age of 60 we know an additional 6,000 words.
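
One common way to turn yes/no responses on such a test into a vocabulary estimate is to correct the hit rate by the false-alarm rate before scaling up to the full word list; whether the team scores it exactly this way is an assumption, and the numbers in this Python sketch are invented:

LIST_SIZE = 62_000   # words on the team's master list

def vocab_estimate(hits, false_alarms, words=70, nonwords=30):
    # Hit rate corrected by false-alarm rate, to discount yes-saying bias,
    # then scaled to the full list. A simplified scoring rule, not
    # necessarily the one the study uses.
    known = max(0.0, hits / words - false_alarms / nonwords)
    return round(known * LIST_SIZE)

# Someone who accepts 52 of the 70 real words but also 3 of the 30 nonwords:
print(vocab_estimate(hits=52, false_alarms=3))   # 39857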

“As a researcher, I am most interested in what this data can tell us about word prevalence, i.e. how well each word is known in a language,” added Professor Brysbaert.

“In Dutch, we have seen that this explains a lot about word processing times. People respond much faster to words known by all people than to words known by 95% of the population, even if the words are used with the same frequency. We are convinced that word prevalence will become an important variable in word recognition research.”

With data from about 200,000 people who speak English as a second language, the team can also start to look at how well these people know certain words, which could have implications for language education.

This is the largest study of its kind ever attempted. Professor Brysbaert has plans to improve the accuracy of the test and extend the list to include over 75,000 words.

“This work is part of the big data movement in research, where big datasets are collected to be mined,” he concluded.

“It also gives us a snapshot of English word knowledge at the beginning of the 21st century. I can imagine future language researchers will be interested in this database to see how English has evolved over 100 years, 1000 years and maybe even longer.”


Journal Reference:

1. Marc Brysbaert, Michaël Stevens, Paweł Mandera, Emmanuel Keuleers. How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant’s Age. Frontiers in Psychology, 2016; 7. DOI: 10.3389/fpsyg.2016.01116

President of Portugal wants to revise the new orthographic agreement (Folha de S.Paulo)

Giuliana Miranda, May 15, 2016

Officially, the latest orthographic agreement has been in force in Portugal since 2009, but it still faces resistance in many quarters. Last week, the camp of the discontented gained a heavyweight ally: the new Portuguese president came out in favor of revising the rules.

On a visit to Mozambique (a Portuguese-speaking country that, like Angola, has not ratified the changes), Marcelo Rebelo de Sousa conceded that the African countries' non-adherence may allow Portugal to reconsider its own position on the agreement.

Portuguese President Marcelo Rebelo de Sousa (left) greets his Mozambican counterpart, Filipe Nyusi (photo: Mauro Vombe – 4.mai.2016/Xinhua)

On Wednesday (the 11th), the National Association of Portuguese Teachers and several members of the organization “Cidadãos contra o Acordo Ortográfico” (Citizens against the Orthographic Agreement) went to court to seek annulment of the rule that spread the use of the new orthography throughout the country.

In office for two months, Rebelo de Sousa has never hidden his opposition on the issue. In the 1990s, he signed a manifesto that gathered 400 Portuguese public figures against the orthographic agreement.

Although his public criticism has softened, the picture book from his presidential campaign, “Afectos”, does not adopt the spelling changes, not even in its title.

In “O Acordo Ortográfico Não Está Em Vigor” (The Orthographic Agreement Is Not in Force; Guerra & Paz), ambassador and professor of international law Carlos Fernandes argues that the agreement also violates legal principles and therefore should not be adopted.

According to Fernandes, besides the fact that the previous rules were never officially revoked, the Portuguese government also failed to complete the legal procedures required for the new language standards to take effect.

The debate over a possible revision of the agreement (some even advocate a referendum) has set off an orthographic “witch hunt”: several politicians have had their résumés, biographies and books combed for evidence that they oppose the changes to the written language.

CRITICISM OF BRAZIL

Although it was signed in 1990 by the Portuguese-speaking states, the agreement must be ratified internally in each country before taking effect. Brazil, Portugal, São Tomé and Príncipe, and Cape Verde have already enacted it.

Angola and Mozambique, which after Brazil have the largest numbers of Portuguese speakers, still have no date set for ratification.

Portuguese is the fifth most spoken language in the world, with about 280 million speakers, of whom 202 million are in Brazil, 24.7 million in Angola, 24.6 million in Mozambique and 10.8 million in Portugal.

Among Portuguese and African critics, the changes are seen as submission to Brazil's wishes. The country's official language is often pejoratively called “Brazilian”.

One source of discord is the elimination of the silent consonants found in many words in Portugal. Under the agreement, the Brazilian spelling prevailed. For example: actor becomes ator, and óptimo becomes ótimo.

According to the Brazilian Ministry of Education, the changes affected about 0.8% of Brazilian vocabulary and 1.3% of Portugal's.

THE GOVERNMENT'S DEFENSE

Portugal's government follows the orthographic agreement, and several ministers have come out in defense of the rules.

Considered the father of the agreement and one of the most influential Portuguese linguists, Malaca Casteleiro has also defended its application.

The first secretary of Brazil's embassy in Lisbon, André Pinto Pacheco, said that “the embassy follows the matter closely, seeking to inform the Portuguese state and public opinion about the application of the Orthographic Agreement of the Portuguese Language in Brazil”.

Evanildo Bechara, director of the lexicography and lexicology department of the Brazilian Academy of Letters, played down the Portuguese president's criticism and stressed the pace of the agreement's implementation across the Portuguese-speaking community. “It is an irreversible process.”

“A spelling reform is not made for the generation that carried it out, but for a future generation,” Bechara said. Use of the new orthography has been mandatory in Brazil since January 1 of this year.

Words for snow revisited: Languages support efficient communication about the environment (Carnegie Mellon University)

13-APR-2016

CARNEGIE MELLON UNIVERSITY

The claim that Eskimo languages have many words for different types of snow is well known among the public, but it has been greatly exaggerated and is therefore often dismissed by scholars of language.

However, a new study published in PLOS ONE supports the general idea behind the original claim. Carnegie Mellon University and University of California, Berkeley researchers found that languages that use the same word for snow and ice tend to be spoken in warmer climates, reflecting lower communicative need to talk about snow and ice.

“We wanted to broaden the investigation past Eskimo languages and look at how different languages carve up the world into words and meanings,” said Charles Kemp, associate professor of psychology in CMU’s Dietrich College of Humanities and Social Sciences.

For the study, Kemp and UC Berkeley’s Terry Regier and Alexandra Carstensen analyzed the connection between local climates, patterns of language use and words for snow and ice across nearly 300 languages. They drew on multiple sources of data including library reference works, Twitter and large digital collections of linguistic and meteorological data.

The results revealed a connection between temperature and snow and ice terminology, suggesting that local environmental needs leave an imprint on languages. For example, English originated in a relatively cool climate and has distinct words for snow and ice. In contrast, the Hawaiian language is spoken in a warmer climate and uses the same word for snow and for ice. These cases support the claim that languages are adapted to the local communicative needs of their speakers — the same idea that lies behind the overstated claim about Eskimo words for snow. The study finds support for this idea across language families and geographic areas.
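
In spirit the test works like the sketch below, which uses invented toy data (the real study drew on climate records and lexical sources for nearly 300 languages): group languages by whether they use one word for both snow and ice, then compare local temperatures across the two groups.

```python
# Schematic version of the snow/ice comparison, with invented toy data
# (languages, temperatures and colexification flags are placeholders,
# not the study's dataset).
from statistics import mean

# (language, mean annual temperature in C, one word for snow and ice?)
toy_data = [
    ("English",   9.0, False),
    ("Finnish",   2.0, False),
    ("Inuktitut", -9.0, False),
    ("Hawaiian",  24.0, True),
    ("Samoan",    26.5, True),
    ("Yoruba",    26.0, True),
]

same_word = [t for _, t, same in toy_data if same]
distinct  = [t for _, t, same in toy_data if not same]
print(f"one word for snow/ice:   mean temperature {mean(same_word):5.1f} C")
print(f"distinct snow/ice words: mean temperature {mean(distinct):5.1f} C")
```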

“These findings don’t resolve the debate about Eskimo words for snow, but we think our question reflects the spirit of the initial snow claims — that languages reflect the needs of their speakers,” said Carstensen, a psychology graduate student at UC Berkeley.

The researchers suggest that in the past, excessive focus on the specific example of Eskimo words for snow may have obscured the more general principle behind it.

Carstensen added, “Here, we deliberately asked a somewhat different question about a broader set of languages.”

The study also connects with previous work that explores how the sounds and structures of language are shaped in part by a need for efficiency in communication.

“We think our study reveals the same basic principle at work, modulated by local communicative need,” said Regier, professor of linguistics and cognitive science at UC Berkeley.

###

Read the full study at http://dx.plos.org/10.1371/journal.pone.0151138.

 

An Heir to a Tribe’s Culture Ensures Its Language Is Not Forgotten (New York Times)

Mr. Grant estimates that thousands of students have read the books and taken courses on the language, first through informal workshops held in the nation’s capital, Canberra, from the early 1990s. In December 2015, at a branch of Charles Sturt University in Wagga Wagga, New South Wales, students completed the first-ever course in Wiradjuri.

 To a great extent, Mr. Grant is carrying out a promise to his beloved grandfather, who singled him out as a youngster as his heir to Wiradjuri culture.

“My grandfather was a Wiradjuri elder,” he said, and was anxious to pass along the culture. “But he was arrested after he called to me in Wiradjuri to come home from the park. ‘Barray yanha, barray yanha,’ ‘Come quickly,’ he called out.”

Mr. Grant was probably 8 or 9 years old the night a local policeman heard his grandfather, Wilfred Johnson, and locked him up. But he does not recall a sense of alarm.

“He was an elegant man,” he said of Mr. Johnson. “He was beautifully dressed, usually in a coat and hat. But he was black. So it wasn’t the first time he had spent the night in jail.”

After the arrest, Mr. Johnson, who spoke seven languages, refused to speak Wiradjuri in public.

“He was a linguist with enormous respect for his own people and culture,” said Mr. Grant, who speaks three languages himself: Italian, which he picked up while working at the sawmill, as well as English and Wiradjuri. “But he told me, ‘Things are different now.’ He would only speak his language in the bush.”

It was during those expeditions into the backcountry that Mr. Grant learned Wiradjuri, as well as tracking and hunting skills. He knows that an echidna’s back feet turn inward, complicating tracking. He can describe how his grandfather made a lasso out of long grass to catch a stunned goanna, a type of lizard, for dinner, and he says a rope laid around a bush house will stop snakes from passing over the threshold.

Lloyd Dolan, a Wiradjuri lecturer who has worked with Mr. Grant, said elders took risks teaching Wiradjuri to their children. Mr. Dolan also learned Wiradjuri from his grandfather. His mother forbade him to speak it at home.

“There was a real fear that the children would be taken away if authorities heard kids speaking the language,” Mr. Dolan, 49, said from his office at Charles Sturt University. “The drive to assimilate Aboriginals into white society was systemic.”

Aboriginal people had no right to vote in elections before 1962, and they were counted as wildlife until a change to Australia’s Constitution in 1967.

Mr. Grant grew up in poverty, his family drifting from place to place: Redfern, a rough-and-tumble Sydney suburb; Griffith, a village 60 miles northwest of Narrandera, where he lives now; and Wagga Wagga, 62 miles southeast of that.

He recalls vividly moving from a “humpy,” a dirt-floored makeshift shack, consisting of just a few rooms, on the fringe of a country town, into a house with electricity. “It was the first time we had electricity at home, but it wasn’t on much because we had no money to pay for it,” he said with a laugh.

As a child, Mr. Grant said, he scorned his grandfather’s ways. He was embarrassed to be black. By the time he was 17, in 1957, his grandfather had died, and he had dropped out of school, left home and found a job on the railways.

Soon, he moved from a small town to Sydney, where he says he drank a lot, got a tattoo of a roughly drawn dagger and eventually found himself in jail.

“I cried and cried when that happened,” he said. “I had been drinking and probably brawling, and I didn’t want to be there.”

It was his wife, Betty, now 73, who helped turn his life around. After marrying in August 1962, they spent several weeks living out of a shell of a car on the Aboriginal Three Ways Mission on the fringe of Griffith, in central New South Wales.

Mr. Grant soon found a job at a sawmill, and although an accident mangled two fingers of his left hand, it was steady work. He and his wife started a family.

Around that time, Aboriginal activists began agitating for civil rights. In 1965, Charles Perkins, the first Aboriginal to attend the University of Sydney, led 35 student protesters on a Freedom Ride bus tour around outback country towns. They were pelted with gravel and harassed as they went from small town to small town, where they called for an end to segregated seating on buses and in theaters. They demanded equal service in shops and hotels, and they wanted Aboriginal children admitted to municipal swimming pools with white children.

Six years later, Neville Bonner, a leader from an Aboriginal rights organization, became the first Aboriginal to gain a seat in Australia’s Parliament, filling a Senate vacancy left by a Queenslander who had resigned.

With the help of these small civic changes, Mr. Grant, whose formal education ended at age 15, managed to navigate a way forward for himself and his family. He first found work in Canberra helping Aboriginal children who had skipped school.

Around the same time, there was a push to document Aboriginal culture and language, which had rarely been written down. As one of the few who knew the Wiradjuri language, he was approached about writing it down. That eventually led him to teach his language and to write “A New Wiradjuri Dictionary,” published in 2005.

“I was told when you revive a lost language, you give it back to all mankind,” he said, sitting in his kitchen, not far from where the kingfishers darted across the Murrumbidgee.

“We were a nothing people for a long time. And it is a big movement now, learning Wiradjuri. I’ve done all that work. I’ve done all I can.”

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON

A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.
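
The pattern is easier to see with numbers. The sketch below uses invented reading times rather than the study's data: mean per-word times differ across the three languages, yet the sentence totals converge.

```python
# Invented per-word reading times (ms) for one comparable sentence --
# placeholder numbers chosen to mimic the reported pattern, not real data.
word_times_ms = {
    "English":  [210, 190, 230, 200, 220, 190, 210, 200],  # more, shorter words
    "Finnish":  [410, 420, 400, 420],                       # fewer, longer words
    "Mandarin": [270, 280, 265, 285, 275, 275],
}

for language, times in word_times_ms.items():
    mean_word = sum(times) / len(times)
    print(f"{language:8s}: {mean_word:5.0f} ms per word, "
          f"{sum(times):5d} ms per sentence")
```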

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.

###

Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative languages tend to express concepts in complex words consisting of many sub-units that are strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation, (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
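
A minimal sketch of that network construction, using invented toy translations in place of the 81-language dataset: two concepts are linked whenever some language expresses both with a single word, and edge weights count how many languages do so.

```python
# Build a toy colexification network: edge weight = number of languages
# that cover two concepts with one word. The translations are invented.
import itertools
from networkx import Graph
from networkx.algorithms.community import greedy_modularity_communities

# concept -> {language: word}
translations = {
    "sun":   {"L1": "sol",  "L2": "ra",  "L3": "ilios"},
    "moon":  {"L1": "lua",  "L2": "ra",  "L3": "selini"},
    "fire":  {"L1": "fogo", "L2": "aka", "L3": "fotia"},
    "flame": {"L1": "fogo", "L2": "aka", "L3": "floga"},
}

G = Graph()
G.add_nodes_from(translations)
for a, b in itertools.combinations(translations, 2):
    shared = sum(translations[a][lang] == translations[b][lang]
                 for lang in translations[a])
    if shared:
        G.add_edge(a, b, weight=shared)

# Densely connected groups play the role of the study's semantic clusters.
for cluster in greedy_modularity_communities(G, weight="weight"):
    print(sorted(cluster))
```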

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

81-year-old Indigenous woman learns to use a computer and creates a dictionary to save her language from extinction (QGA)

Marie Wilcox is the last person in the world fluent in the Wukchumni language

Meet Marie Wilcox, an 81-year-old great-grandmother and the last person in the world fluent in the Wukchumni language. The Wukchumni people numbered around 50,000 before contact with the colonizers; today only about 200 remain, living in the San Joaquin Valley in California. The language was dying off, little by little, with each new generation, but Marie committed herself to reviving it, learning to use a computer so she could begin writing the first Wukchumni dictionary. The process took seven years, and now that it is finished she has no intention of stopping her work of immortalizing her native tongue.

The documentary “Marie's Dictionary”, available on YouTube, shows Marie's motivation and her hard work to bring back and record a language nearly erased by colonization, institutionalized racism and oppression.

In the video, Marie admits to doubts about the enormous task she has taken on: “I have doubts about my language, and about who wants to keep it alive. No one seems to want to learn it. It's strange that I'm the last one... It will all be lost one of these days, I don't know.”

With luck, that day is still far off. Marie and her daughter Jennifer now teach classes to members of the tribe and are working on an audio dictionary to accompany the written one she has already created.

Watch the video (in English).

(QGA)

Ora pois, a very Brazilian language (Pesquisa Fapesp)

Analysis of old texts and of field interviews reveals the language's own marks in Brazil, the reach of the caipira R, and the places that preserve older ways of speaking

CARLOS FIORAVANTI | ISSUE 230 | APRIL 2015

Study for Partida da monção (Departure of the monsoon), 1897, by Almeida Júnior (Pinacoteca do Estado de SP collection). The bandeirantes set out from Porto Feliz toward the Center-West

The possibility of keeping things simple, dispensing with theoretically essential grammatical elements and answering “sim, comprei” (“yes, I bought”) when someone asks “você comprou o carro?” (“did you buy the car?”), is one of the features that give Brazilian Portuguese its flexibility and identity. Analysis of old documents and of field interviews over the past 30 years is showing that Brazilian Portuguese can already be considered unique, distinct from European Portuguese, just as American English is distinct from British English. Brazilian Portuguese is not yet an autonomous language, however: that may happen, specialists predict, in about 200 years, when it has accumulated enough peculiarities to keep us from fully understanding what a native of Portugal says.

The expansion of Portuguese in Brazil, the regional variations and their possible explanations (which make the bird called urubu in São Paulo go by corvo in the South of the country), and the roots of the language's innovations are all emerging through the work of some 200 linguists. According to studies at the University of São Paulo (USP), one innovation of Brazilian Portuguese, so far without an equivalent in Portugal, is the caipira R, at times so intense that it seems to count for two or three, as in porrrta (door) or carrrne (meat).

Associating the caipira R only with the interior of São Paulo state, however, is geographically and historically imprecise, even though that brazen R was one of the marks of the hillbilly style of the actor Amácio Mazzaropi in his 32 films, produced from 1952 to 1980. Following the routes of the São Paulo bandeirantes in search of gold, linguists have found the supposedly paulista R in cities of Minas Gerais, Mato Grosso, Mato Grosso do Sul and Paraná, and in the west of Santa Catarina and Rio Grande do Sul, forming a way of speaking similar to eighteenth-century Portuguese. Anyone with patience and a sharp ear can also find, in central Brazil and in some coastal cities, the hushed S, today typical of carioca speech, which arrived with the Portuguese in 1808 and was a sign of prestige because it represented the speech of the Court. Even the Portuguese were not original: specialists argue that the hushed S, which turns esquina (corner) into shquina, came from the French nobles, whom the Portuguese admired.

The history of the Portuguese language in Brazil is also bringing to light preserved features of older Portuguese, such as the swap of L for R, yielding pranta instead of planta (plant). Camões recorded this swap in Os Lusíadas (there is a frautas in place of flautas, flutes) and the São Paulo singer and composer Adoniran Barbosa set it down in several songs, in phrases like “frechada do teu olhar” (“the arrow-shot of your gaze”), from the samba Tiro ao Álvaro. In field surveys, USP researchers observed that people in the interior of both Brazil and Portugal, especially the less schooled, still speak this way. Another sign of preservation, identified by specialists from Rio de Janeiro and São Paulo, this time in old documents, was a gente or as gentes as a synonym for “nós” (“we”), today one of the distinctive marks of Brazilian Portuguese.

Célia Lopes, of the Federal University of Rio de Janeiro (UFRJ), found records of a gente in documents from the 16th century and, with greater frequency, from the 19th century onward. It was a way of indicating the first person plural, in the sense of everybody, with the necessary inclusion of the speaker. According to her, the use of a gente can convey detachment and vagueness: someone who says a gente generally does not make clear whether they intend to commit to what they are saying or whether they see themselves as part of the group, as in “a gente precisa fazer” (“we/one need to do it”). The pronoun nós, as in “nós precisamos fazer” (“we need to do it”), expresses responsibility and commitment. Over the past 30 years, she has noted, a gente has settled into the spaces once occupied by nós and has become a resource widely used by speakers of all ages and social classes across the country, although it remains marginal in grammar books.

Linguists in several Brazilian states are unearthing the roots of Brazilian Portuguese by examining personal and administrative letters, wills, travel accounts, court records, readers' letters and newspaper advertisements from the 16th century onward, collected in institutions such as the National Library and the Public Archive of the State of São Paulo. Célia Lopes's team has also been finding old letters and other linguistic treasures, not always appreciated at their value, at the Saturday antiques fair in Praça XV de Novembro, in downtown Rio. “A student brought me marvelous letters found in the trash,” she said.

Untitled, from the series Study for bandeirantes, undated, by Henrique Bernardelli (Pinacoteca do Estado de SP collection). Paulistas spread the Portuguese language as they conquered other regions

From vossa mercê to você
Old documents show that the Portuguese spoken in Brazil began to diverge from the European variety at least four centuries ago. One indication of this separation is the Memórias para a história da capitania de São Vicente, of 1793, written by Friar Gaspar da Madre de Deus, born in São Vicente, and later rewritten by the Portuguese Marcelino Pereira Cleto, a judge in Santos. Comparing the two versions, José Simões, of USP, found 30 differences between Brazilian and European Portuguese. One of them persists today: as users of Brazilian Portuguese, we prefer to make the subjects of our sentences explicit, as in “o rapaz me vendeu o carro, depois ele saiu correndo e ao atravessar a rua ele foi atropelado” (“the boy sold me the car, then he ran off, and as he crossed the street he was run over”). In European Portuguese it would be more natural to omit the subject, already defined by the verb form (“o rapaz vendeu-me o carro, depois saiu a correr...”), yielding a construction that is grammatically impeccable, though to our ears a little strange.

A resident of Portugal, asked whether he bought a car, will answer quite naturally “sim, comprei-o” (“yes, I bought it”), spelling out the object of the verb, “even among speakers with little schooling,” Simões observes. He notes that the Portuguese use mesoclisis, as in “dar-lhe-ei um carro, com certeza!” (“I shall give you a car, certainly!”), which would sound affected in Brazil. Another difference is the distance between the spoken and the written language in Brazil. Nobody pronounces muito (much) as written; people say muinto. The pronoun você (you), itself already a reduction of vossa mercê and vosmecê, has shrunk further, to cê, and attached itself to the verb: cê vai? (“you going?”).

“The language we speak is not the one we write,” says Simões, on the strength of examples like these. “Written and spoken Portuguese in Portugal are closer to each other, although there are regional differences there as well.” Simões complements his textual analyses with his wanderings through Portugal. “Ten years ago my relatives in Portugal said they couldn't understand what I said,” he observes. “Today, probably because of the influence of Brazilian telenovelas on television, they say I'm finally speaking more correct Portuguese.”

“We have kept the rhythm of speech, while the Europeans began speaking faster from the 18th century onward,” observes Ataliba Castilho, professor emeritus at USP, who over the past 40 years has planned and coordinated several research projects on spoken Portuguese and on the history of Portuguese in Brazil. “Until the 16th century,” he says, “Brazilian and European Portuguese were like Spanish, with a hard syllabic cut. The spoken word was very close to the written one.” Célia Lopes adds another difference: Brazilian Portuguese preserves most vowels, while the Europeans generally omit them, stressing the consonants, and would say tulfón when referring to the telephone (telefone).

There are also many words with different meanings on either side of the Atlantic. In Portugal, students at private universities pay not a mensalidade (monthly fee) but a propina (which in Brazil means a bribe). A scholarship holder is a bolseiro, not a bolsista. And since the Europeans did not adopt some words used in Brazil, such as bunda (backside), of African origin, embarrassing situations can arise. Vanderci Aguilera, a senior professor at the State University of Londrina (UEL) and one of the linguists engaged in recovering the history of Brazilian Portuguese, took a Portuguese friend to a shop. To see whether a dress she had just tried on fit well in the back, the friend asked her: “O que achas do meu rabo?” (“What do you think of my rabo?”, a word that in Brazil is a crude term for the backside).

The soldier and the farmer's daughter
In the collection of documents on the evolution of São Paulo Portuguese there is a letter from 1807, written by the soldier Manoel Coelho, who had supposedly seduced a farmer's daughter. When the girl's father found out, furious, he pressed the young man to marry her. The soldier, however, dug in his heels: he would not marry, he wrote, “nem por bem nem por mar”. Simões was puzzled by the mention of the sea (mar), since the affair took place in what was then the town of São Paulo, but then it dawned on him: “There's the caipira R! He meant ‘nem por bem nem por mal’!” (“neither willingly nor by force”). The soldier wrote as he spoke; no one knows whether he married the farmer's daughter, but he left valuable evidence of how people spoke at the beginning of the 19th century.

“The caipira R was one of the features of the language spoken in the town of São Paulo which, little by little, with growing urbanization and the arrival of European immigrants, was pushed out to the periphery or to other towns,” says Simões. “It was the language of the bandeirantes.” Specialists believe that the first inhabitants of the town of São Paulo, besides saying porrta, also skipped consonants in the middle of words, saying muié instead of mulher (woman), for example. To capture Indians and, later, to find gold, the bandeirantes first conquered the interior of São Paulo, taking their vocabulary and their way of speaking with them. The exaggerated R can still be heard in the towns of the so-called Middle Tietê, such as Santana de Parnaíba, Pirapora do Bom Jesus, Sorocaba, Itu, Tietê, Porto Feliz and Piracicaba, whose inhabitants, especially country people, the Itu-born painter José Ferraz de Almeida Júnior portrayed, until he was murdered by his lover's husband in Piracicaba. The bandeirantes then pushed on into other forests of the immense Captaincy of São Paulo, constituted in 1709 with the territories of the present-day states of São Paulo, Mato Grosso do Sul, Mato Grosso, Rondônia, Tocantins, Minas Gerais, Paraná and Santa Catarina (see map).

Manoel Mourivaldo Almeida, also of USP, found traces of old São Paulo Portuguese in Cuiabá, the capital of Mato Grosso, a city that has had relatively little linguistic and cultural interaction with others since the gold-mining boom ended there two centuries ago. “Cultivated Portuguese of the 16th and 17th centuries had a hushed S,” Almeida concludes. “The paulistas, when they went to the Center-West, spoke the way cariocas do today!” The Cuiabá actor and theater director Justino Astrevo de Aguiar acknowledges the São Paulo and Rio heritage, but considers a more evident trait of local speech the habit of adding a J or a T before or in the middle of words, as in djeito, cadju or tchuva, a pronunciation typical of the 17th century that Almeida has also identified among inhabitants of Goiás, Minas Gerais and Maranhão and in the region of Galicia, in Spain.

Almeida sharpened his ear for the variations of Portuguese in Brazil through his own history. The son of Portuguese parents, he was born in Piritiba, in the interior of Bahia, left at age 7, lived in Jaciara, in the interior of Mato Grosso, and then spent 25 years in Cuiabá, where he taught at the federal university before moving to São Paulo in 2003. He admits that he speaks like a paulista on more formal occasions (though he prefers éxtra to the paulista êxtra), but when he relaxes he falls into the Bahian rhythm of speech and Mato Grosso vocabulary. He has been studying Cuiabá speech since 1991, at the suggestion of a fellow professor, Leônidas Querubim Avelino, a Camões specialist, who had detected signs of archaic Portuguese there. Avelino told him that a blind farmhand from Livramento, 30 kilometers from Cuiabá, had remarked that he was “andando pusilo”, meaning weak. Avelino recognized a reduced form of pusilânime (pusillanimous), no longer used in Portugal.

“The inhabitants of Cuiabá and of a few other cities, such as Cáceres and Barão de Melgaço, in Mato Grosso, and Corumbá, in Mato Grosso do Sul, preserve the São Paulo Portuguese of the 18th century better than the paulistas themselves. Paulistas of the interior and of the capital today say dia (day) with a dry d, while in most of Brazil people say djia,” Almeida observed. “The way one speaks can change depending on access to culture, on motivation, and on the ability to perceive and articulate sounds differently. Whoever searches the places farthest from the big urban centers will find signs of the preservation of old Portuguese.”

Rua 25 de março, 1894, by Antonio Ferrigno (Pinacoteca do Estado de SP collection). The city of São Paulo had an accent of its own

From 1998 to 2003, a team coordinated by Heitor Megale, of USP, followed the routes of the 16th-century bandeiras in search of traces of old Portuguese that might have persisted across four centuries. Interviews with residents aged 60 to 90 in nearly 40 towns and villages of Minas Gerais, Goiás and Mato Grosso brought to light forgotten terms such as mamparra (pretense) and mensonha (lie), a word that appears in one of Francisco de Sá de Miranda's poems; treição, used in the interior of Goiás in the sense of a surprise; and terms of popular speech still used in Portugal, such as despois, percisão and tristura, common in the south of Minas. What had seemed anachronism gained value. Saying sancristia instead of sacristia was not an error, “but a preserved influence of the past, when that was the pronunciation,” reported the Jornal da Manhã, of Paracatu, Minas Gerais, on December 20, 2001.

In the north, the Portuguese language expanded into the interior from the city of Salvador, which was the capital of colonial Brazil for three centuries. Salvador was also a center of linguistic ferment, receiving multitudes of African slaves, who learned Portuguese as a foreign language but also contributed their own vocabulary, to which indigenous words had already been added.

To keep the language of Camões from being disfigured by crossing with the native tongues, Sebastião José de Carvalho e Melo, the Marquis of Pombal, secretary of state of the kingdom, decided to act. In 1757, Pombal expelled the Jesuits (among other reasons of a political order) because they were teaching Christian doctrine in indigenous languages, and by decree made Portuguese the official language of Brazil. Portuguese imposed itself over the native languages and remains the official language today, although linguists caution that it cannot be called the national language, given the 180 indigenous languages still spoken in the country (there were an estimated 1,200 when the Portuguese arrived). This linguistic mixing, which reflects the mixture of peoples that formed the country, explains a good part of the regional variation in vocabulary and rhythm, summarized in a map of ways of speaking at the Museu da Língua Portuguesa, in São Paulo. Variation is easy to find within a single state: inhabitants of northern Minas speak like Bahians, those of the central region keep the authentic mineirês, in the south the São Paulo influence is intense, and in the east the way of speaking resembles the carioca accent.

The pandorga and the bigato
For the past 10 years a group of linguists has been studying one of the results of this linguistic mixing: the different names a single object can go by, recorded in interviews with 1,100 people in 250 localities. Across Brazil, the toy made of paper and sticks that is flown into the wind on a string is called papagaio, pipa, raia or pandorga (or even coruja in Natal and João Pessoa), according to the first volume of the Atlas linguístico do Brasil, published in October 2014 with the results of the interviews in the state capitals (Editora UEL). The device with red, yellow and green lights used at street crossings to regulate traffic is called simply sinal in Rio de Janeiro and Belo Horizonte, and also semáforo in the capitals of the North and Northeast. Goiânia recorded all four names for the same object: sinal, semáforo, sinaleiro and farol.

The search for explanations of these differences is only beginning. “Where I was born, in Sertanópolis, 42 kilometers from Londrina,” said Vanderci Aguilera, one of the Atlas coordinators, “we call the guava worm a bigato, through the influence of the settlers, Italian immigrants who came from the interior of São Paulo.” According to her, the inhabitants of the three southern states call the urubu (a vulture) corvo (crow) under European influence, while those of the Southeast kept the Tupi name, urubu.

Cena de família de Adolfo Augusto Pinto (Family scene of Adolfo Augusto Pinto), 1891, by Almeida Júnior (Pinacoteca do Estado de SP collection). By the end of the 19th century the pronoun você was already more formal than tu

Each state, or each region, has its own linguistic heritage, which should be respected, the specialists emphasize. Portuguese teachers, Vanderci warns, should not scold students for calling the hummingbird (beija-flor) a cuitelo, as is common in the interior of Paraná, nor reprimand those who say caro, churasco or baranco (for carro, churrasco, barranco), as descendants of Poles and Germans in the South commonly do, but should teach other ways of speaking and let the kids express themselves as they please when with family or friends. “Nobody speaks incorrectly,” she stresses. “Everyone speaks according to their life history, according to what was passed on by their parents and later modified by school. Our speech is our identity; we have no reason to be ashamed of it.”

The diversity of Brazilian Portuguese is so great that, despite the efforts of the anchors of national TV news programs to forge a neutral language stripped of local accents, “there is no national standard,” Castilho maintains. “There are differences of vocabulary, grammar, syntax and pronunciation even among people who adopt the cultivated norm,” he says. Dissatisfied with imported theories, Castilho created the multisystemic approach to language, according to which any linguistic expression simultaneously mobilizes four planes (lexicon, semantics, discourse and grammar), which should be viewed in an integrated way rather than separately. Together with Verena Kewitz, of USP, he has been debating this approach with graduate students and with other specialists in Brazil and abroad.

It is also clear that Brazilian Portuguese continually remakes itself. Words can die or take on new meanings. Almeida recounted that Celciane Vasconcelos, one of the students in his group, found that only the older residents of the Paraná coast still knew the word sumaca, a type of boat once common there; since the boat is no longer built, the word lost its old usefulness and today names a beach in Paraty (RJ). Old ways of speaking can also resurface. The caipira R, the linguists attest, is coming back, even in São Paulo, and regaining status in the wake of sertanejo music singers. “Today it's chic to be caipira,” says Vanderci. Or at least acceptable, part of one's personal style, like that of the TV host Sabrina Sato.

Love notes
Linguists have also noted the spread of informal address. “I'm 78 and should be addressed as senhor, but my younger students call me você,” says Castilho, apparently untroubled by an informality that would have been inconceivable in his student days. Você, however, will not reign alone. Célia Lopes and her UFRJ team found that tu predominates in Porto Alegre and coexists with você in Rio de Janeiro and Recife, while você is the predominant form in São Paulo, Curitiba, Belo Horizonte and Salvador. Tu was already closer and less formal than você in the nearly 500 letters of UFRJ's online collection, almost all written by poets, politicians and other notable figures of the late 19th and early 20th centuries.

Since the speech of ordinary people was still missing, Célia and her team were elated to find 13 notes written in 1908 by Robertina de Souza to her lover and to her husband. The material was part of a criminal case against the husband, who threw a friend and his own wife out of his house upon learning that the two had had an extramarital affair, and who later killed the former friend. In one of the 11 notes to her lover, Álvaro Mattos, Robertina, who signed as Chininha, wrote: “Eu te adoro te amo até a morte sou tua só tu é meu só o meu coracao e teu e o teu coracao é meu. Chininha e todinha tua ate a morte” (“I adore you I love you until death I am yours alone you are mine alone my heart is yours and your heart is mine. Chininha is all yours until death”). The husband, Arthur Noronha, who received only two notes, she addressed more formally: “Eu rezo pedindo a Deus para você me perdoar, mas creio que voce não tem coragem de ver morrer um filho o filha” (“I pray asking God that you forgive me, but I believe you do not have the heart to watch a son or daughter die”). And further on: “Não posso me separar de voce e do meu filho a não ser com a morte” (“I cannot be separated from you and my son except by death”). No one knows whether she returned home, but the husband was acquitted, on the claim that he had killed the other man in defense of his honor.

Another sign of the evolution of Brazilian Portuguese is hybrid constructions, with a verb that no longer agrees with its pronoun, of the type tu não sabe? (“don't you know?”), and the mixing of the address pronouns você and tu, as in “se você precisar, vou te ajudar” (“if you need it, I'll help you”). European Portuguese speakers might claim this as one more proof of our capacity to disfigure the Lusitanian language, but perhaps they do not have much reason to complain. Célia Lopes found the mixing of address pronouns, which she and other linguists no longer consider an error, in the letters of the Marquis of Lavradio, viceroy of Brazil from 1769 to 1779, and, more than two centuries later, in an interview with former president Fernando Henrique Cardoso.

Project
History of São Paulo Portuguese Project (PHPP – Projeto Caipira) (no. 11/51787-5); Type: Thematic Project; Principal investigator: Manoel Mourivaldo Santiago Almeida (USP); Investment: R$ 87,372.10 (FAPESP).

Noemi Jaffe: The semantics of the drought (Folha de S.Paulo)

February 26, 2015

Emmanuel Levinas said that “consciousness is the urgency of a destination directed toward the other, and not an eternal return upon itself.” I think that, although it may not seem so, the sentence is intimately related to the “water crisis” in São Paulo.

We have been forced to hear and to speak of the “water crisis”, of the “worst drought in 84 years” and similar expressions, which blame nature, and not of catastrophe, collapse, responsibility or other words of equal gravity.

Under the São Paulo state government, the ordinary citizen lives under a euphemistic regime of language, elegant in appearance but in truth rhetorically totalitarian, with which we are forced to live and which, moreover, we are forced to mimic.

“Water crisis”, “contingency plan”, “emergency works”, “dead volume”, “reservoirs”, as these terms have been used, are nothing more than cowardly evasions of language and politics to avoid confronting the real.

There is no water; there has been great incompetence; there will be great hardship; what is needed is an emergency plan of guidance and the creation of networks of containment and solidarity. Cisterns and water tanks must be built and distributed to the poor, conservation measures taught, the subprefectures mobilized for localized action and, above all, restrictive measures publicly and clearly imposed on big industry and agriculture, which can be far more wasteful than the ordinary citizen.

But none of this is said or done. And why? My impression is that most politicians do not work under the regime of responsibility (the condition of “destination toward the other”) but rather in the mode of the “eternal return upon themselves”.

São Paulo is living a situation of absurdity in which, on top of the enormous everyday difficulties (commuting, health, safety, education, floods and now water itself), one must also hear the president of Sabesp say that Saint Peter “has been missing his aim”.

My impulse is to reach for the vocative: “Hey, President Dilma, federal deputies, Governor Alckmin, Mayor Haddad, city councilors! Listen! We elected you to fight for us, not for your mandates! We are that one, the other, to whom you owe responsibility!”

Or is it unrelated to the “water crisis” that a federal deputy receives about R$ 100,000 a month in “office allowances”? Why are deputies entitled to a benefit that guarantees them, among other things, health insurance and a car, when those who earn far, far less have neither?

I challenge the deputies, one by one, to publicly give up their health insurance and to take public transportation to work; to enter the real.

How long will the population, above all the poorest, who have few instruments to soften what they already suffer, go on being patronized and oppressed under the euphemistic mantle of the “worst drought in 84 years”?

We want the real, a responsible language, one that makes explicit the gaze toward the other and gives support and freedom so that difficulties can be overcome with autonomy.

Euphemism lets politicians off the hook and alienates the population from the solid slab of the real. It represents a state akin to ineffective bureaucracy. How can anyone be responsible if, for every action, there are infinite mediations?

The result is that the mediations end up feeding themselves far more than the final and original purpose of governing: to be for the other; in this case, us, powerless before what they impose on us and what, for months, they have forced us to witness.

NOEMI JAFFE, 52, holds a doctorate in Brazilian literature from USP and is the author of “O que os Cegos Estão Sonhando?” (editora 34)

Indo-European languages emerged roughly 6,500 years ago on Russian steppes, new research suggests (LSA)

2/13/2015

Linguists have long agreed that languages from English to Greek to Hindi, known as ‘Indo-European languages‘, are part of a language family which first emerged from a common ancestor spoken thousands of years ago. Now, a new study gives us more information on when and where it was most likely used. Using data from over 150 languages, linguists at the University of California, Berkeley provide evidence that this ancestor language originated 5,500 – 6,500 years ago, on the Pontic-Caspian steppe stretching from Moldova and Ukraine to Russia and western Kazakhstan.

“Ancestry-constrained phylogenetic analysis supports the Indo-European steppe hypothesis”, by Will Chang, Chundra Cathcart, David Hall and Andrew Garrett, will appear in the March issue of the academic journal Language. A pre-print version of the article is available on the LSA website.

Chang et al. abstract

This article provides new support for the “steppe hypothesis” or “Kurgan hypothesis”, which proposes that Indo-European languages first spread with cultural developments in animal husbandry around 4500 – 3500 BCE. (An alternate theory proposes that they spread much earlier, around 7500 – 6000 BCE, in Anatolia in modern-day Turkey.)

Chang et al. examined over 200 sets of words from living and historical Indo-European languages; after determining how quickly these words changed over time through statistical modeling, they concluded that the rate of change indicated that the languages which first used these words began to diverge approximately 6,500 years ago, in accordance with the steppe hypothesis.
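
The logic of dating a split from rates of change can be shown with a glottochronology-style toy calculation, far cruder than the paper's ancestry-constrained phylogenetic model: if each lineage keeps a fixed fraction of its basic vocabulary per millennium (the 0.86 below is the classic Swadesh-style constant, an assumption), the share of cognates two languages still share implies a divergence date.

```python
# Glottochronology-style toy calculation (much simpler than the paper's
# Bayesian phylogenetic model). If each lineage retains fraction r of its
# basic vocabulary per millennium, two languages sharing fraction c of
# cognates satisfy c = r**(2t), so t = ln(c) / (2 ln(r)) millennia.
import math

def divergence_time_kyr(cognate_fraction, retention_per_millennium=0.86):
    return math.log(cognate_fraction) / (2 * math.log(retention_per_millennium))

# Two languages sharing 14% of basic vocabulary:
print(f"split roughly {divergence_time_kyr(0.14):.1f} thousand years ago")
```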

This is one of the first quantitatively based academic papers in support of the steppe hypothesis, and the first to use a model with “ancestry constraints”, which more directly incorporate previously discovered relationships between languages. Discussion of prior studies in favor of and against the steppe hypothesis can be found in the paper.

Members of the media who are interested in discussing the article and its findings may contact Brice Russ, LSA Director of Communications, and Andrew Garrett, Professor of Linguistics at the University of California, Berkeley.

Experts criticize problems in the orthographic agreement (Agência Brasil)

The issue is under debate in the Senate Education Committee

Professor Pasquale Cipro Neto argued on Wednesday (the 22nd) for a revision of the Orthographic Agreement of the Portuguese Language. “The text of the agreement is so full of problems that the [Brazilian] Academy [of Letters] had to publish an explanatory note [on points of the agreement]. Why was that necessary? Because there are problems,” the professor stressed during the second day of debates on the subject in the Senate Education Committee.

According to Pasquale, Brazil got ahead of the other signatory countries in implementing the agreement, preventing simultaneous adoption of the new rules. In his view, the country rushed the process and organized it poorly. “We cannot go forward with a text that lacks polish and concrete solutions,” he said.

The many situations governing use of the hyphen, which the professor considers one of the norm's great weaknesses, were among the most criticized points. For Pasquale Neto, in the text of the agreement “the hyphen was mistreated, badly resolved”. In his view, the question needs to be settled: he finds it inexplicable that “pé-de-meia” (nest egg) is written with hyphens while “pé de moleque” (peanut brittle) is not.

For Professor Stella Maris Bortoni de Figueiredo Ricardo, a member of the Brazilian Linguistics Association (Abralin), any proposed change must be agreed upon with the signatory countries. “Abralin recommends consolidating the 1990 Orthographic Agreement without any unilateral alteration. Any change one may wish to make to the agreement should be made within the CPLP [Community of Portuguese Language Countries] and the IILP [International Institute of the Portuguese Language],” she argued.

To debate suggestions for improving the agreement, the Senate Education Committee created, in 2013, a technical working group formed by professors Ernani Pimentel and Pasquale Cipro Neto, who are to present a synthesis in March 2015. At the committee's urging, definitive implementation was postponed from January 2013 to January 2016 by decree of President Dilma Rousseff.

In the previous day's session (the 21st), the president of the Center for Linguistic Studies of the Portuguese Language, Ernani Pimentel, stirred the discussion by demanding greater grammatical simplification. He leads a movement for adopting a phonetic criterion in orthography, that is, spelling words according to the way they are pronounced. By that criterion the word “chuva” (rain), for example, would be written with an x (xuva), with no concern for its origin. For the professor, simplification would spare new generations from being subjected to “outdated rules that demand rote memorization”.

The suggestion was rejected by the grammarian Evanildo Bechara, who considers that phonetic simplification, “ideal in appearance”, would produce more problems than solutions, since it would extinguish homophones, words that sound the same but have different spellings and meanings. The words seção (section), sessão (session) and cessão (cession), he noted, would be reduced to a single spelling, sesão, which would hinder comprehension of the message. “We would appear to have solved an orthographic problem, but we would create a bigger problem for the function of language, which is communication between people,” he recalled.

The grammarian believes the agreement has merits and represents progress for the use of the language and for unifying rules among the Portuguese-speaking countries. He stressed that the signatory countries may, after the new rules are implemented, approve modifications and adjustments if necessary.

For the committee chair, Senator Cyro Miranda (PSDB-GO), the purpose of the debates is not to alter the agreement, since, as he noted, that role belongs to the Executive, in consultation with the other signatory countries. “Our duty is to call in the people involved to give their opinions. But it is the Ministry of Education and the Ministry of Foreign Affairs that take the lead. We are pointing out the difficulties and, if possible, we will contribute,” he said.

(Karine Melo / Agência Brasil)

http://agenciabrasil.ebc.com.br/educacao/noticia/2014-10/especialistas-criticam-problemas-no-acordo-ortografico

Saving Native Languages and Culture in Mexico With Computer Games (Indian Country)

9/21/14

Indigenous children in Mexico can now learn their mother tongues with specialized computer games, helping to prevent the further loss of those languages across the country.

“Three years ago, before we employed these materials, we were on the verge of seeing our children lose our Native languages,” asserted Matilde Hernandez, a teacher in Zitacuaro, Michoacan.

“Now they are speaking and singing in Mazahua as if that had never happened,” Hernandez said, referring to computer software that provides games and lessons in most of the linguistic families of the country including Mazahua, Chinanteco, Nahuatl of Puebla, Tzeltal, Mixteco, Zapateco, Chatino and others.

The new software was created by scientists and educators in two research institutions in Mexico: the Victor Franco Language and Culture Lab (VFLCL) of the Center for Investigations and Higher Studies in Social Anthropology (CIHSSA); and the Computer Center of the National Institute of Astrophysics, Optics and Electronics (NIAOE).

According to reports released this summer, the software was developed as a tool to help counteract the educational lag in indigenous communities and to employ these educational technologies so that the children may learn various subjects in an entertaining manner while reinforcing their Native language and culture.

“This software – divided into three methodologies for three different groups of applications – was made by dedicated researchers who have experience with Indigenous Peoples,” said Dr. Frida Villavicencio, Coordinator of the VFLCL’s Language Lab.

“We must have an impact on the children,” she continued, “offering them better methodologies for learning their mother tongues, as well as for learning Spanish and for supporting their basic education in a fun way.”

Villavicencio pointed out that the games and programs were not translated from Spanish but were developed in the Native languages with the help of Native speakers. She added that studies from Mexico’s National Institute of Indigenous Languages (NIIL) show that the main reason indigenous languages disappear, or are in danger of doing so, is that in each generation fewer and fewer children speak them.

“We need bilingual children; only in that way can we preserve their languages,” she added.

Read more at http://indiancountrytodaymedianetwork.com/2014/09/21/saving-native-languages-and-culture-mexico-computer-games-156961

How learning to talk is in the genes (Science Daily)

Date: September 16, 2014

Source: University of Bristol

Summary: Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Scientists discovered a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.


Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Credit: © witthaya / Fotolia

Researchers have found evidence that genetic factors may contribute to the development of language during infancy.

Scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol worked with colleagues around the world to discover a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Children produce their first words at about 10 to 15 months of age, and our vocabulary expands as we grow — from around 50 words at 15 to 18 months to 200 words at 18 to 30 months, 14,000 words at six years old, and over 50,000 by the time we leave secondary school.

The researchers found the genetic link during the ages of 15 to 18 months when toddlers typically communicate with single words only before their linguistic skills advance to two-word combinations and more complex grammatical structures.

The results, published in Nature Communications today [16 Sept], shed further light on a specific genetic region on chromosome 3, which has been previously implicated in dyslexia and speech-related disorders.

The ROBO2 gene contains the instructions for making the ROBO2 protein. This protein directs chemicals in brain cells and other neuronal cell formations that may help infants not only to develop language but also to produce sounds.

The ROBO2 protein also closely interacts with other ROBO proteins that have previously been linked to problems with reading and the storage of speech sounds.

Dr Beate St Pourcain, who jointly led the research with Professor Davey Smith at the MRC Integrative Epidemiology Unit, said: “This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans.”

Dr Claire Haworth, one of the lead authors, based at the University of Warwick, commented: “In this study we found that results using DNA confirm those we get from twin studies about the importance of genetic influences for language development. This is good news as it means that current DNA-based investigations can be used to detect most of the genetic factors that contribute to these early language skills.”

The study was carried out by an international team of scientists from the EArly Genetics and Lifecourse Epidemiology Consortium (EAGLE) and involved data from over 10,000 children.

Journal Reference:
  1. Beate St Pourcain, Rolieke A.M. Cents, Andrew J.O. Whitehouse, Claire M.A. Haworth, Oliver S.P. Davis, Paul F. O’Reilly, Susan Roulstone, Yvonne Wren, Qi W. Ang, Fleur P. Velders, David M. Evans, John P. Kemp, Nicole M. Warrington, Laura Miller, Nicholas J. Timpson, Susan M. Ring, Frank C. Verhulst, Albert Hofman, Fernando Rivadeneira, Emma L. Meaburn, Thomas S. Price, Philip S. Dale, Demetris Pillas, Anneli Yliherva, Alina Rodriguez, Jean Golding, Vincent W.V. Jaddoe, Marjo-Riitta Jarvelin, Robert Plomin, Craig E. Pennell, Henning Tiemeier, George Davey Smith. Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 2014; 5: 4831. DOI: 10.1038/ncomms5831

Your Brain on Metaphors (The Chronicle of Higher Education)

September 1, 2014

Neuroscientists test the theory that your body shapes your ideas


Chronicle Review illustration by Scott Seymour

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.


Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are “visible” and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible—not just difficult, but impossible—for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.

Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied—that human concepts are shaped by the physical features of human brains and bodies. “Our physiology provides the concepts for our philosophy,” Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: “The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience.”

Modern brain-scanning technologies make it possible to test such claims empirically. “That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand,” says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.

Neuroscientists agree on what happens with literal sentences like “The player kicked the ball.” The brain reacts as if it were carrying out the described actions. This is called “simulation.” Take the sentence “Harry picked up the glass.” “If you can’t imagine picking up a glass or seeing someone picking up a glass,” Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, “then you can’t understand that sentence.” Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.

But what about metaphorical sentences like “The patient kicked the habit”? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as “stopped”? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.

The evidence says it does. “When you read action-related metaphors,” says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, “you have activation of the motor area of the brain.” In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

But idioms are a major sticking point. Idioms are usually thought of as dead metaphors, that is, as metaphors that are so familiar that they have become clichés. What does the brain do with “The villain kicked the bucket” (“The villain died”)? What about “The students toed the line” (“The students conformed to the rules”)? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine? If the brain understands language by simulating it, then it should do so even when sentences are not literal.

The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as “biting off more than you can chew” did not activate the motor cortex. Ana Raposo, then at the University of Cambridge, and her colleagues reported the same in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.

In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. “Imaging studies of embodiment in figurative language have not compared idioms and metaphors,” they wrote in a report. “Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors.” Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. “The field is new. The methods need to stabilize,” she says. “There are many different kinds of figurative language, and they may be importantly different from one another.”

Not only that, the nitty-gritty differences of procedure may be important. “All of these studies are carried out with different kinds of linguistic stimuli with different procedures,” Cuccio says. “So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference.”

To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: “When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief.”

But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. “They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics.”

That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take “kick the bucket.” Lakoff offers a theory of what it means using a scene from Young Frankenstein. “Mel Brooks is there and they’ve got the patient dying,” he says. “The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor—that you’re full of life, and life is a fluid. You kick the bucket, and it goes over.”

That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. “You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ ” Lakoff says. “Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything.”

Similarly, Lakoff says, when linguists ask people to write down the idiom “toe the line,” half of them write “tow the line.” That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex—in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process “toe the line.” In that case, averaging subjects together would be misleading.

Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. “How many neurons are there in one pixel or one voxel?” he says. “About 125,000. They’re one point in the picture.” MRI lacks the necessary temporal resolution, too. “What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel.” What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.

Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.

However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, “the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened.” Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
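Read computationally, this is an asymmetric Hebbian story. The toy sketch below is only an illustration of that single idea; the units, firing probabilities, and learning rates are invented for this post, not taken from Narayanan’s actual model:

```python
# Toy illustration of an asymmetric neural linkage (invented parameters,
# not Narayanan's model). "temperature" is computed on every step;
# "affection" fires only occasionally (e.g. while being cuddled).
import random

random.seed(42)
w_temp_to_aff = 0.0  # link strengthened by the frequent, regular spiker
w_aff_to_temp = 0.0  # link in the opposite direction

for step in range(10_000):
    temperature = True                      # always being computed
    affection = random.random() < 0.05      # rare co-occurring state

    if temperature and affection:
        # Hebbian co-activation strengthens both links, but the unit
        # that "spikes first and most regularly" gains more.
        w_temp_to_aff += 0.010
        w_aff_to_temp += 0.002
    # Both links decay slightly on every step.
    w_temp_to_aff = max(0.0, w_temp_to_aff - 0.0001)
    w_aff_to_temp = max(0.0, w_aff_to_temp - 0.0001)

print(f"temperature -> affection: {w_temp_to_aff:.2f}")
print(f"affection -> temperature: {w_aff_to_temp:.2f}")
# One direction ends up strong and the other near zero: a one-way
# linkage, mirroring Lakoff's point that the metaphor runs only one way.
```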

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, “For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.”

But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. “If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system,” she says. “I don’t know why we have to replicate our physical limitations in other systems.”

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.

Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices to where they came from, the researchers were able to map preferences across continents (arxiv.org/abs/1407.7094).
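In outline, that comparison can be pictured in a few lines of code. The sketch below is only an illustration: the one-degree grid, the variant lists, and the (lat, lon, text) data layout are invented here for clarity, and the authors’ actual pipeline is the one described in their arXiv paper:

```python
# Illustrative sketch (not the authors' code): tally lexical variants
# from geotagged tweets and find the preferred variant per map cell.
from collections import Counter, defaultdict

VARIANTS = {
    "sock": ["calcetas", "medias", "soquetes"],
    "car": ["coche", "auto", "movi"],
}

def variant_counts(tweets, grid=1.0):
    """Count each variant inside one-degree grid cells."""
    counts = defaultdict(Counter)
    for lat, lon, text in tweets:
        cell = (round(lat / grid), round(lon / grid))
        words = text.lower().split()
        for concept, variants in VARIANTS.items():
            for v in variants:
                if v in words:
                    counts[cell][(concept, v)] += 1
    return counts

def preferred_variants(counts):
    """For each cell, pick the most frequent variant of each concept."""
    prefs = {}
    for cell, counter in counts.items():
        best = {}
        for (concept, v), n in counter.items():
            if n > best.get(concept, (None, 0))[1]:
                best[concept] = (v, n)
        prefs[cell] = {c: v for c, (v, n) in best.items()}
    return prefs

# Two toy tweets, roughly from Madrid and Buenos Aires.
tweets = [(40.4, -3.7, "mis calcetas nuevas"),
          (-34.6, -58.4, "compré un auto")]
print(preferred_variants(variant_counts(tweets)))
```

At scale, per-cell preferences like these are what cluster into the urban and rural “super-dialects” the study describes.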

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”

We speak as we feel, we feel as we speak (Science Daily)

Date: June 26, 2014

Source: University of Cologne – Universität zu Köln

Summary: Ground-breaking experiments have been conducted to uncover the links between language and emotions. Researchers were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa. The authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds.

Researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. Credit: Image courtesy of University of Cologne – Universität zu Köln 

A team of researchers headed by the Erfurt-based psychologist Prof. Ralf Rummer and the Cologne-based phoneticist Prof. Martine Grice has carried out some ground-breaking experiments to uncover the links between language and emotions. They were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa.

The research project looked at the question of whether and to what extent the meaning of words is linked to their sound. The specific focus of the project was on two special cases: the sound of the long ‘i’ vowel and that of the long, closed ‘o’ vowel. Rummer and Grice were particularly interested in finding out whether these vowels tend to occur in words that are positively or negatively charged in terms of emotional impact. For this purpose, they carried out two fundamental experiments, the results of which have now been published in Emotion, the journal of the American Psychological Association.

In the first experiment, the researchers exposed test subjects to film clips designed to put them in a positive or a negative mood and then asked them to make up ten artificial words themselves and to speak these out loud. They found that the artificial words contained significantly more ‘i’s than ‘o’s when the test subjects were in a positive mood. When in a negative mood, however, the test subjects formulated more ‘words’ with ‘o’s.
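The published analysis is more careful than this, but the core measurement of that first experiment amounts to a vowel tally. A minimal sketch, using invented pseudo-words rather than the study’s materials:

```python
# Minimal 'i' vs 'o' tally (illustrative only; the pseudo-words are
# invented here, not taken from the study's data).
def count_i_and_o(words):
    text = "".join(words).lower()
    return text.count("i"), text.count("o")

positive_mood_words = ["bili", "kinti", "rimi"]   # invented examples
negative_mood_words = ["bolo", "konto", "romo"]

print(count_i_and_o(positive_mood_words))  # (6, 0): 'i' dominates
print(count_i_and_o(negative_mood_words))  # (0, 6): 'o' dominates
```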

The second experiment was used to determine whether the different emotional quality of the two vowels can be traced back to the movements of the facial muscles associated with their articulation. Rummer and Grice were inspired by an experimental configuration developed in the 1980s by a team headed by psychologist Fritz Strack. These researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. In the first case, the test subjects were required to place the pen between their teeth, and in the second case between their lips. While their zygomaticus major muscle was contracted, the test subjects found the cartoons significantly more amusing.

Instead of this ‘pen-in-mouth test’, the team headed by Rummer and Grice conducted an experiment in which they required their test subjects to articulate an ‘i’ sound (contracting the zygomaticus major muscle) or an ‘o’ sound (contracting the orbicularis oris muscle) every second while viewing cartoons. The test subjects producing the ‘i’ sounds found the same cartoons significantly more amusing than those producing the ‘o’ sounds.

In view of this outcome, the authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds. And thanks to the results of their two experiments, Rummer and Grice now have an explanation for a much-discussed phenomenon. The tendency for ‘i’ sounds to occur in positively charged words (such as ‘like’) and for ‘o’ sounds to occur in negatively charged words (such as ‘alone’) in many languages appears to be linked to the corresponding use of facial muscles in the articulation of vowels on the one hand and the expression of emotion on the other.

Journal Reference:

  1. Ralf Rummer, Judith Schweppe, René Schlegelmilch, Martine Grice. Mood is linked to vowel type: The role of articulatory movements. Emotion, 2014; 14 (2): 246. DOI: 10.1037/a0035752