Tag archive: Language

USC to keep using word ‘field’ despite departmental ban over slavery ‘connotations’ (Washington Times)

washingtontimes.com

Valerie Richardson

Thursday, January 12, 2023

Southern California quarterback Caleb Williams (13) throws during an NCAA college football practice Tuesday, April 5, 2022, in Los Angeles. (AP Photo/Marcio Jose Sanchez, File)

The University of Southern California isn’t banning the word “field,” no matter what its School of Social Work may say.

 Elizabeth A. Graddy, interim provost and senior vice president for academic affairs, said Thursday there is no campus prohibition on the use of “field” after an uproar over the USC Suzanne Dworak-Peck School of Social Work’s decision to replace the term with “practicum.”

 “The university does not maintain a list of banned or discouraged words. We will continue to use words – including ‘field’ – that accurately encompass and describe our work and research,” said Ms. Graddy in an email to The Washington Times.

 The School of Social Work was mocked relentlessly after the release of a Jan. 9 memo showing that the department had decided to abolish the word “field” from its curriculum, citing its association with slavery.

 The memo from the “Practicum Education Department” said the change aligns with initiatives including the 2021 National Association of Social Work’s “commitment to undoing racism through social work.”

 “This change supports anti-racist social work practice by replacing language that could be considered anti-Black or anti-immigrant in favor of inclusive language,” said the document. “Language can be powerful, and phrases such as ‘going into the field’ or ‘field work’ may have connotations for descendants of slavery and immigrant workers that are not benign.”

 The memo was apparently posted first on Twitter by Houman David Hemmati, a Los Angeles doctor who studied at crosstown rival UCLA.

 “Is this with merit or empty virtue signaling?” he asked.

 Most responders went with the latter. Comments included “Ridiculous,” “Total insanity,” “For the love of all that’s holy, please make it stop,” and “Are my dreams coming true that I can call a soccer field a pitch?”

 Others pointed out that USC has several large grassy expanses that include the f-word in their names, including Soni McAlister Field, Brittingham Intramural Field, and the Howard Jones Field/Brian Kennedy Field, where the Trojans football team practices.

 “The USC Trojans Come Out of the Locker Room and Line Up on the Practicum,” said a Thursday headline on National Review.

The USC social-work school isn’t alone. The Michigan Department of Health and Human Services said it would discontinue using “field work” and “field worker,” suggesting alternatives such as “community office,” according to a Jan. 4 memo obtained by the Washington Free Beacon.

 USC’s School of Social Work offers advanced degrees that include a Master of Social Work and Master of Science in Nursing, which could be problematic, given that at least one realtors’ association has banned the word “master” over its connection to slavery.

 The departmental memo acknowledged that “changing terminology can be challenging, and a complete transition will take some time, but we thank you in advance for joining in this effort.”

Generation Amazing!!! How We’re Draining Language of Its Power (Literary Hub)

lithub.com

Emily McCrary-Ruiz-Esparza on the “Maxim of Extravagance”

By Emily McCrary-Ruiz-Esparza

September 27, 2022


I noticed it recently when I scheduled my dog for a veterinarian’s appointment. The person who answered the phone was friendly enough and greeted me warmly, and then I made my request.

I’d like to make an appointment for my dog, I said. Wonderful, said the scheduler. June McCrary. Excellent. She needs an anal gland expression. Fantastic!

I was surprised anyone could be so over the moon to empty my chihuahua’s anal glands—if you google the procedure I’m sure you will be as well—but in a way, grateful too.

When I shared this story with a friend, she told me about a conversation she overheard between two parents at the park. What are your children’s names? one of them said as they watched a pair of boys fight each other for one of those cold metal animals that bobs back and forth. The other responded but my friend didn’t catch the answer. The conversation went on and one side sounded something like this: Really? Amazing. That’s so beautiful. Just beautiful. How did you choose names like that?

Their names: Matthew and David. Fine names. But when you ooze words like amazing and beautiful, I imagine we’re dealing with something like Balthazaar and Tiberius.

We reach for over-the-top words for just about anything. These amazings and wonderfuls and incredibles and fantastics, we throw them around as we once did OKs and thank yous and I can help with thats.

Surreal is another favorite word since the spring of 2020. During the first quarantine, driving through the city in the only car on the road really did feel surreal; so did seeing every business closed, like maybe we were living in a Saramago novel. A grocery store full of masked shoppers circling each other at a wary distance of six feet wasn’t exactly surreal, but it was strange enough, so we used it there too.

Eventually we ran out of places to put the word, and by then we were tired, so driving on the road with other cars became surreal, seeing other people standing close to each other in the grocery store was surreal, not having to wear a mask was surreal. It became a way to describe change, or anything out of the ordinary.

What is it that makes us talk this way, so that to express a modicum of emotion we have to reach for words like fantastic, incredible, unbelievable, and unreal, words meant to convey a certain level of magnitude but that no longer carry their original weight?

Martin Hilpert, who teaches linguistics at the Université de Neuchâtel in Switzerland, told me this is nothing new. “Words with evaluative meanings lose potency as speakers apply them to more and more situations. Toilet paper that is especially soft can be ‘fantastic,’ a train delayed by ten minutes can be ‘a disaster.’”

This occurs in a sort of cycle, which Martin Haspelmath, a comparative linguist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, describes in a handful of steps.

It happens like this: To attract attention, we submit to the “maxim of extravagance.” You really want people to see the taxidermied pig you just bought, so you tell your friend, “Man, this thing is incredible. It’s wearing lederhosen and everything.” Your friend goes to see the pig and he too is surprised by the thing. He starts telling his friends, “That thing is incredible.” This is called “conformity.” Word gets around the neighborhood and then the whole block is talking about the incredible taxidermied pig. This is called “frequency.” You’re out for a walk one day, and you flag down a DoorDasher on a bicycle. “Have you seen the—” “The incredible taxidermied pig? Yeah man, whatever.” This is called “predictability.”

Predictability is useful when we want to fit in with the crowd, but it’s not useful if we want to attract attention, which you need at this point, because you’ve started charging admission to see the pig. Now you need to innovate, and you’re back to the maxim of extravagance again, so the pig becomes unbelievable.
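The cycle is concrete enough to simulate. Below is a toy sketch in Python, not anything from the essay or from Haspelmath himself: words sit on an escalation ladder, every use bleaches a word’s force slightly, and each speaker picks the mildest word that still sounds extravagant. Run long enough, the community climbs from nice to unbelievable. The ladder, decay rate, and threshold are all invented for illustration.

```python
# Toy model of the extravagance cycle: a word's expressive "punch" starts
# at 1.0 and bleaches a little with every use; speakers always pick the
# mildest word that still sounds extravagant. All numbers are assumptions.

DECAY = 0.97       # punch retained per additional use (assumption)
THRESHOLD = 0.5    # below this, a word is fully predictable (assumption)

ladder = ["nice", "great", "fantastic", "incredible", "unbelievable"]
uses = {word: 0 for word in ladder}

def punch(word: str) -> float:
    """Perceived intensity of a word after uses[word] exposures."""
    return DECAY ** uses[word]

def praise() -> str:
    """Pick the mildest word that still attracts attention (extravagance),
    then wear it down a little (conformity -> frequency -> predictability)."""
    for word in ladder:
        if punch(word) >= THRESHOLD:
            uses[word] += 1
            return word
    return ladder[-1]  # the whole ladder is bleached

history = [praise() for _ in range(120)]
print("first pick:", history[0], "-> last pick:", history[-1])
for word in ladder:
    print(f"{word:>13}: {uses[word]:3d} uses, punch now {punch(word):.2f}")
```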

A pop-linguistic term for this is “semantic bleaching,” like draining all the color out of our words, and it happens with overuse. Another way to describe it is supply and demand. When we use a word too much and there are too many excellents and beautifuls floating around, each becomes less valuable.

Bleaching has a circular relationship with hyperbole. The less potent our words are, the more we have to reach for particularly emotive ones to say what we want to say, and we climb a crowded ladder to a place where all words are wispy and white and no one is really saying anything at all. That’s how anal gland expressions become fantastic and ordinary names like David and Matthew become amazing.

Writers and thinkers have many times over made the case that stale language is both a symptom and cause of the deterioration of critical thought. George Orwell, famously, for one. He writes in “Politics and the English Language” that a speaker who uses tired language has “gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself.”

There is a certain point when turns of phrase are so out of fashion they become fresh again. Orwell’s dying metaphors of the 1940s were take up the cudgel for and ring the changes on, which would feel interesting now. Ours are full-throated and deep dive and unpack and dig in and at the end of the day.

I contacted several academics while writing this essay and asked them whether the new abundance of communication accelerates the exhaustion of words. They insisted that there isn’t more communication going on now than in the past; it’s just more visible.

I don’t believe this is true. The overwhelming quantity of means we have for talking to each other, and the fact that we’re using them, tells me there is more communication. There are some friends I talk to daily because we share a text thread. I wouldn’t be calling all five of them every day otherwise. I can watch two people berate each other in the comments section of a Washington Post article about soup, two people who, thirty years ago, would never have had the chance to come to blows over curry.

Language is adapted and spread through exposure, so of course change is accelerating. In the same way clothes fall in and out of fashion at shorter intervals now, because of social media and all our instant global connectedness, so do our words.

The fields of linguistics, anthropology, and English are full of hyperbole stans who go to great lengths to make the case for its value and importance. They call it “the master trope,” “the trope of tropes,” “a generator of thought and meaning,” “a tool of philosophical and religious inquiry,” “an act of becoming,” and “a propelling toward transcendence from an eminent exigency.”

In a paper titled “Recovering Hyperbole: Rethinking the Limits of Rhetoric for an Age of Excess,” the scholar Joshua R. Ritter argues for the continuing relevance of hyperbole. For Ritter, hyperbole reflects an innate desire for understanding. He calls it “one of the most effective ways of trying to express the often confounding and inexpressible positions that characterize the litigious discussions of impossibility.”

Ritter also cites Saint Anselm of Canterbury, who believed that the way humans describe God is the archetypal example of hyperbole—it’s everything that cannot be understood, but we do our best to understand anyway.

“It dramatically holds the real and the ideal in irresolvable tension and reveals the impossible distance between the ineptitude and the infinite multiplicity of language to describe what is indescribable,” Ritter writes.

We may be often confounded, but we are hardly ever without something to say. The internet, the great proliferator of communication, incentivizes no one to be speechless. If you’re not talking, you’re not there, so the more frequently you speak, the more real you are. Stop talking and you disappear.

If we’re talking this much, it might be that we’re desperate to exist. If we’re slinging around words like amazing and incredible and surreal, it might be that we’re looking for these things. If we are Generation Hyperbole, it is because we are so desperate to feel something good and tremendous—we’re constantly reaching for something beyond. We want to feel awed, we want to be in touch with something dreamlike, we want to see things that are really beautiful, we’ve only forgotten where to find them. But we’re looking for meaning, you can see it in our language. Even Orwell believed “that the decadence of our language is probably curable.”

Global connectedness means we’re witness to terrible things on a terrible scale, and we share an inadequate language to understand it. We need to feel, even if that feeling is pain, and we need to know that we’re not alone in the feeling. If tragedy is now commonplace, why can’t truly excellent things, amazing things, fantastic things become commonplace too?

Ritter writes:

Once a perplexing and sometimes disturbing disorienting perception occurs, this vertige de l’hyperbole as Baudelaire refers to it, one is ready for a perspectival reorientation—a paradoxical movement leading toward insight and partial apprehension. By generating confusion through excess, hyperbole alters and creates meaning.

Thousands of Chimp Vocal Recordings Reveal a Hidden Language We Never Knew About (Science Alert)

sciencealert.com

PETER DOCKRILL

24 MAY 2022


A common chimpanzee vocalizing. (Andyworks/Getty Images)

We humans like to think our mastery of language sets us apart from the communication abilities of other animals, but an eye-opening new analysis of chimpanzees might force a rethink on just how unique our powers of speech really are.

In a new study, researchers analyzed almost 5,000 recordings of wild adult chimpanzee calls in Taï National Park in Côte d’Ivoire (aka Ivory Coast).

When they examined the structure of the calls captured on the recordings, they were surprised to find 390 unique vocal sequences – much like different kinds of sentences, assembled from combinations of different call types.

Compared to the virtually endless possibilities of human sentence construction, 390 distinct sequences might not sound overly verbose.

Yet, until now, nobody really knew that non-human primates had so many different things to say to each other – because we’ve never quantified their communication capabilities to such a thorough extent.

“Our findings highlight a vocal communication system in chimpanzees that is much more complex and structured than previously thought,” says animal researcher Tatiana Bortolato from the Max Planck Institute for Evolutionary Anthropology in Germany.

In the study, the researchers wanted to measure how chimpanzees combine single-use calls into sequences, order those calls within the sequences, and recombine independent sequences into even longer sequences.

While call combinations of chimpanzees have been studied before, until now the sequences that make up their whole vocal repertoire had never been subjected to a broad quantitative analysis.

To rectify this, the team captured 900 hours of vocal recordings made by 46 wild mature western chimpanzees (Pan troglodytes verus), belonging to three different chimp communities in Taï National Park.

In analyzing the vocalizations, the researchers identified how vocal calls could be uttered singly, combined into two-unit sequences (bigrams), or combined into three-unit sequences (trigrams). They also mapped networks of how these utterances were combined and examined how different kinds of frequent vocalizations were ordered and recombined (for example, bigrams within trigrams).
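The counting the researchers describe is essentially n-gram bookkeeping. Here is a minimal sketch of that step in Python; the call names and example bouts below are invented placeholders, not data or code from the study.

```python
from collections import Counter
from itertools import islice

# Tally single calls, two-call sequences (bigrams), and three-call
# sequences (trigrams) across recorded bouts. The bouts are invented.
bouts = [
    ["grunt"],
    ["hoo", "pant", "grunt"],
    ["pant", "grunt"],
    ["hoo", "pant", "grunt"],
    ["bark", "scream"],
]

def ngrams(sequence, n):
    """All consecutive runs of n calls within one vocal sequence."""
    return zip(*(islice(sequence, i, None) for i in range(n)))

units = Counter(call for bout in bouts for call in bout)
bigrams = Counter(ng for bout in bouts for ng in ngrams(bout, 2))
trigrams = Counter(ng for bout in bouts for ng in ngrams(bout, 3))

print("single calls:", units)
print("bigrams:     ", bigrams)
print("trigrams:    ", trigrams)  # note ('hoo','pant','grunt') contains
                                  # the bigram ('pant','grunt') within it
```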

In total, 12 different call types were identified (including grunts, pants, hoos, barks, screams, and whimpers, among others), which appeared to mean different things, depending on how they were used, but also upon the context in which the communication took place.

“Single grunts, for example, are predominantly emitted at food, whereas panted grunts are predominantly emitted as a submissive greeting vocalization,” the researchers explain in their paper, led by co-first authors Cédric Girard-Buttoz and Emiliano Zaccarella.

“Single hoos are emitted to threats, but panted hoos are used in inter-party communication.”

In total, the researchers found these different kinds of calls could be combined in various ways to make up 390 different kinds of sequences, a figure they say may be an underestimate, given that new vocalization sequences were still turning up when the researchers reached the limit of their field recordings.

Even so, the data so far suggest chimpanzee communication is much more complex than we realized, which has implications for the sophistication of meanings generated in their utterances (as well as giving new clues into the origins of human language).

“The chimpanzee vocal system, consisting of 12 call types used flexibly as single units, or within bigrams, trigrams or longer sequences, offers the potential to encode hundreds of different meanings,” the researchers write.

“Whilst this possibility is substantially less than the infinite number of different meanings that can be generated by human language, it nonetheless offers a structure that goes beyond that traditionally considered likely in primate systems.”
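As a back-of-envelope check of that claim (our arithmetic, not the paper’s): 12 call types allow 12 single units, 12² = 144 possible bigrams, and 12³ = 1,728 possible trigrams, an upper bound of 1,884 sequences of up to three units, of which 390 were actually observed.

```python
# Upper bound on distinct sequences of up to three calls, given 12 call
# types; the study observed 390 of these actually in use.
call_types = 12
print(sum(call_types ** n for n in (1, 2, 3)))  # 12 + 144 + 1728 = 1884
```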

The next step, the team says, will be to record even larger datasets of chimpanzee calls, to try to assess just how the diversity and ordering of uttered sequences relates to versatile meaning generation, which wasn’t considered in this study.

There’s lots more to be said, in other words – by both chimpanzees and scientists alike.

“This is the first study in a larger project,” explains senior author Catherine Crockford, a director of research at the Institute for Cognitive Science at CNRS, in France.

“By studying the rich complexity of the vocal sequences of wild chimpanzees, a socially complex species like humans, we expect to bring fresh insight into understanding where we come from and how our unique language evolved.”

The findings are reported in Communications Biology.

Another tool in the fight against climate change: storytelling (MIT Technology Review)

technologyreview.com

Stories may be the most overlooked climate solution of all.

By Devi Lockwood

December 23, 2021

There is a lot of shouting about climate change, especially in North America and Europe. This makes it easy for the rest of the world to fall into a kind of silence—to assume that they have nothing to add and should let the so-called “experts” speak. But we all need to be talking about climate change and amplifying the voices of those suffering the most. 

Climate science is crucial, but by contextualizing that science with the stories of people actively experiencing climate change, we can begin to think more creatively about technological solutions.

This needs to happen not only at major international gatherings like COP26, but also in an everyday way. In any powerful rooms where decisions are made, there should be people who can speak firsthand about the climate crisis. Storytelling is an intervention into climate silence, an invitation to use the ancient human technology of connecting through language and narrative to counteract inaction. It is a way to get often powerless voices into powerful rooms. 

That’s what I attempted to do by documenting stories of people already experiencing the effects of a climate in crisis. 

In 2013, I was living in Boston during the marathon bombing. The city was put on lockdown, and when it lifted, all I wanted was to go outside: to walk and breathe and hear the sounds of other people. I needed to connect, to remind myself that not everyone is murderous. In a fit of inspiration, I cut open a broccoli box and wrote “Open call for stories” in Sharpie. 

I wore the cardboard sign around my neck. People mostly stared. But some approached me. Once I started listening to strangers, I didn’t want to stop. 

That summer, I rode my bicycle down the Mississippi River on a mission to listen to any stories that people had to share. I brought the sign with me. One story was so sticky that I couldn’t stop thinking about it for months, and it ultimately set me off on a trip around the world.

I met 57-year-old Franny Connetti 80 miles south of New Orleans, when I stopped in front of her office to check the air in my tires; she invited me in to get out of the afternoon sun. Franny shared her lunch of fried shrimp with me. Between bites she told me how Hurricane Isaac had washed away her home and her neighborhood in 2012. 

Despite that tragedy, she and her husband moved back to their plot of land, in a mobile home, just a few months after the storm.

“We fight for the protection of our levees. We fight for our marsh every time we have a hurricane,” she told me. “I couldn’t imagine living anywhere else.” 

Twenty miles ahead, I could see where the ocean lapped over the road at high tide. “Water on Road,” an orange sign read. Locals jokingly refer to the endpoint of Louisiana State Highway 23 as “The End of the World.” Imagining the road I had been biking underwater was chilling.

The author at Monasavu Dam in Fiji in 2014.

Here was one front line of climate change, one story. What would it mean, I wondered, to put this in dialogue with stories from other parts of the world—from other front lines with localized impacts that were experienced through water? My goal became to listen to and amplify those stories.

Water is how most of the world will experience climate change. It’s not a human construct, like a degree Celsius. It’s something we acutely see and feel. When there’s not enough water, crops die, fires rage, and people thirst. When there’s too much, water becomes a destructive force, washing away homes and businesses and lives. It’s almost always easier to talk about water than to talk about climate change. But the two are deeply intertwined.

I also set out to address another problem: the language we use to discuss climate change is often abstract and inaccessible. We hear about feet of sea-level rise or parts per million of carbon dioxide in the atmosphere, but what does this really mean for people’s everyday lives? I thought storytelling might bridge this divide. 

One of the first stops on my journey was Tuvalu, a low-lying coral atoll nation in the South Pacific, 585 miles south of the equator. Home to around 10,000 people, Tuvalu is on track to become uninhabitable in my lifetime. 

In 2014 Tauala Katea, a meteorologist, opened his computer to show me an image of a recent flood on one island. Seawater had bubbled up under the ground near where we were sitting. “This is what climate change looks like,” he said. 

“In 2000, Tuvaluans living in the outer islands noticed that their taro and pulaka crops were suffering,” he said. “The root crops seemed rotten, and the size was getting smaller and smaller.” Taro and pulaka, two starchy staples of Tuvaluan cuisine, are grown in pits dug underground. 

Tauala and his team traveled to the outer islands to take soil samples. The culprit was saltwater intrusion linked to sea-level rise. The seas have been rising four millimeters per year since measurements began in the early 1990s. That might sound like a small amount, but the islands’ highest point is only 13 feet above sea level, and the change has had a dramatic impact on Tuvaluans’ access to drinking water.

A lot has changed in Tuvalu as a result. The freshwater lens, a layer of groundwater that floats above denser seawater, has become salty and contaminated. Thatched roofs and freshwater wells are now a thing of the past. Each home now has a water tank attached to a corrugated-iron roof by a gutter. All the water for washing, cooking, and drinking now comes from the rain. This rainwater is boiled for drinking and used to wash clothes and dishes, as well as for bathing. The wells have been repurposed as trash heaps. 

At times, families have to make tough decisions about how to allocate water. Angelina, a mother of three, told me that during a drought a few years ago, her middle daughter, Siulai, was only a few months old. She, her husband, and their oldest daughter could swim in the sea to wash themselves and their clothes. “We only saved water to drink and cook,” she said. But her newborn’s skin was too delicate to bathe in the ocean; the salt water would give her a horrible rash. That meant Angelina had to choose between water for drinking and water for bathing her child.

The stories I heard about water and climate change in Tuvalu reflected a sharp division along generational lines. Tuvaluans my age—like Angelina—don’t see their future on the islands and are applying for visas to live in New Zealand. Older Tuvaluans see climate change as an act of God and told me they couldn’t imagine living anywhere else; they didn’t want to leave the bones of their ancestors, which were buried in their front yards. Some things just cannot be moved. 

Organizations like the United Nations Development Programme are working to address climate change in Tuvalu by building seawalls and community water tanks. Ultimately these adaptations seem only to be delaying the inevitable. It is likely that within my lifetime, many Tuvaluans will be forced to call somewhere else home. 

Tuvalu shows how climate change exacerbates both food and water insecurity—and how that insecurity drives migration. I saw this in many other places. Mess with the amount of water available in one location, and people will move.

In Thailand I met a modern dancer named Sun who moved to Bangkok from the rural north. He relocated to the city in part to practice his art, but also to take refuge from unpredictable rain patterns. Farming in Thailand is governed by the seasonal monsoons, which dump rain, fill river basins, and irrigate crops from roughly May to September. Or at least they used to. When we spoke in late May 2016, it was dry in Thailand. The rains were delayed. Water levels in the country’s biggest dams plummeted to less than 10% of their capacity—the worst drought in two decades.

“Right now it’s supposed to be the beginning of the rainy season, but there is no rain,” Sun told me. “How can I say it? I think the balance of the weather is changing. Some parts have a lot of rain, but some parts have none.” He leaned back in his chair, moving his hands like a fulcrum scale to express the imbalance. “That is the problem. The people who used to be farmers have to come to Bangkok because they want money and they want work,” he said. “There is no more work because of the weather.” 

A family celebrates Nunavut Day near the waterfront in Igloolik, Nunavut, in 2018.

Migration to the city, in other words, is hastened by the rain. Any tech-driven climate solutions that fail to address climate migration—so central to the personal experience of Sun and many others in his generation around the world—will be at best incomplete, and at worst potentially dangerous. Solutions that address only one region, for example, could exacerbate migration pressures in another. 

I heard stories about climate-­driven food and water insecurity in the Arctic, too. Igloolik, Nunavut, 1,400 miles south of the North Pole, is a community of 1,700 people. Marie Airut, a 71-year-old elder, lives by the water. We spoke in her living room over cups of black tea.

“My husband died recently,” she told me. But when he was alive, they went hunting together in every season; it was their main source of food. “I’m not going to tell you what I don’t know. I’m going to tell you only the things that I have seen,” she said. In the 1970s and ’80s, the seal holes would open in late June, an ideal time for hunting baby seals. “But now if I try to go out hunting at the end of June, the holes are very big and the ice is really thin,” Marie told me. “The ice is melting too fast. It doesn’t melt from the top; it melts from the bottom.”

When the water is warmer, animals change their movement. Igloolik has always been known for its walrus hunting. But in recent years, hunters have had trouble reaching the animals. “I don’t think I can reach them anymore, unless you have 70 gallons of gas. They are that far now, because the ice is melting so fast,” Marie said. “It used to take us half a day to find walrus in the summer, but now if I go out with my boys, it would probably take us two days to get some walrus meat for the winter.” 

Marie and her family used to make fermented walrus every year, “but this year I told my sons we’re not going walrus hunting,” she said. “They are too far.”

Devi Lockwood is the Ideas editor at Rest of World and the author of 1,001 Voices on Climate Change.


Do You Know the Story Behind Naming Storms? (Word Genius)

wordgenius.com


Friday, October 29, 2020

Can you imagine turning on the Weather Channel to get an update on Storm 34B-SQ59? While major storms aren’t sentient beings, it’s become standard to give them human names to make it easier to communicate about them, especially during critical news updates. From Hurricane Elsa to Tropical Storm Cristobal, there’s an intriguing legacy behind naming storms.

The History of Naming Storms

A few hundred years ago, storms were named after the Catholic saint’s day that lined up with the storm. For example, Hurricane Santa Ana landed in Puerto Rico on July 26, 1825. But if storms hit on the same day in different years, names doubled up: Hurricane San Felipe I struck Puerto Rico on September 13, 1876, and San Felipe II hit on the same date in 1928.

In the late 19th century, Australian meteorologist Clement Wragge began using women’s names for tropical storms. The practice was adopted by the U.S. Navy and Air Force during World War II when latitude and longitude identifications proved to be too cumbersome.

Outside of the military, early 20th-century storms were named and tracked by year and order, with names such as “1940 Hurricane Two” and “1932 Tropical Storm Six.” This created confusion when multiple storms were active at the same time, especially during news broadcasts. To reduce it, United States weather services began using female names for storms in 1953 and added male names to the list in 1978. This began the modern system of storm naming.

Who Is in Charge of Storm Names?

Although the National Hurricane Center, part of the National Oceanic and Atmospheric Administration (NOAA), is the premier source for news about storms, it does not name them. That job belongs to the World Meteorological Organization. The WMO is a specialized agency of the United Nations, headquartered in Switzerland, that focuses on weather, climate, and water resources. Each year, the WMO creates a list of potential names for the upcoming storm season.

Where Do the Names Come From?

There is a bit of an art to naming modern-day storms. The WMO compiles six lists of names for each of the three basins under its jurisdiction: the Atlantic, the Eastern North Pacific, and the Central North Pacific. Countries outside this jurisdiction have their own naming conventions. Within the WMO’s basins, which include those affecting the United States, the lists are cycled through every six years. That means the list of names for the 2021 season will be used again in 2027.

Each list contains 21 names, each beginning with a different letter of the alphabet (Q, U, X, Y, and Z are skipped because few names begin with them). For the Atlantic basin, names are typically chosen from English, French, and Spanish, because the countries affected primarily speak one of those three languages. While the names are supposedly random, there are some pop culture-related coincidences, such as 2021’s Hurricane Elsa.
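The six-year rotation is easy to model. Below is a minimal Python sketch; the name lists are placeholders (three names each instead of the real 21) and the base year is our own anchoring assumption, not the WMO’s actual lists.

```python
# Sketch of the six-year name rotation. Lists are placeholders, not the
# WMO's actual lists, and BASE_YEAR is an assumption for illustration.
NAME_LISTS = [
    ["Ana", "Bill", "Claudette"],       # used in 2021, 2027, 2033, ...
    ["Alex", "Bonnie", "Colin"],        # 2022, 2028, ...
    ["Arlene", "Bret", "Cindy"],        # 2023, 2029, ...
    ["Alberto", "Beryl", "Chris"],      # 2024, 2030, ...
    ["Andrea", "Barry", "Chantal"],     # 2025, 2031, ...
    ["Arthur", "Bertha", "Cristobal"],  # 2026, 2032, ...
]
BASE_YEAR = 2021  # season assigned to index 0 in this sketch

def storm_name(year: int, storm_number: int) -> str:
    """Name of the nth named storm of a season (1-indexed)."""
    season_list = NAME_LISTS[(year - BASE_YEAR) % len(NAME_LISTS)]
    return season_list[storm_number - 1]

print(storm_name(2021, 1))  # "Ana"
print(storm_name(2027, 1))  # "Ana" again: the list cycles back
```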

When Is a Storm Named?

A tropical storm can be named once it meets two criteria: a circular rotation and sustained wind speeds of more than 39 MPH. Once a storm reaches 74 MPH, it becomes a hurricane but keeps the name it was first given as a tropical storm, as when Tropical Storm Larry became Hurricane Larry in September 2021.
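Those two criteria reduce to a pair of wind-speed thresholds. A minimal sketch using the numbers stated above; the function and its name are our own, not any weather service’s API.

```python
# The article's two naming criteria as a tiny classifier: >39 mph with
# circular rotation earns a name; at 74 mph the storm becomes a
# hurricane but keeps the name it already had.
def storm_category(has_circular_rotation: bool, wind_mph: float) -> str:
    if not has_circular_rotation or wind_mph <= 39:
        return "unnamed disturbance"
    if wind_mph < 74:
        return "tropical storm"   # gets a name at this point
    return "hurricane"            # keeps the tropical-storm name

print(storm_category(True, 50))   # tropical storm (named, e.g. Larry)
print(storm_category(True, 80))   # hurricane (still Larry)
```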

Hurricane names can also be retired, which is usually done when a hurricane is especially destructive. As of the 2020 season, there are 93 names on the retired Atlantic hurricane list, including 2005’s Katrina, 2012’s Sandy, and 2016’s Matthew. When a name is retired, it is replaced with a new one.

New Rules in 2021

Before the 2021 season, if the full list of storm names was used before the end of the season, any additional storms that reached the necessary criteria for naming would use the Greek alphabet — Alpha, Beta, Gamma, etc. There were 30 named storms in 2020, only the second time the full list of names had been used.

As of 2021, the WMO instead uses a supplementary list of names, similar in form to the original list (starting with Adria and ending with Will). The WMO felt the Greek names were too distracting, and from a technical perspective, they could not sensibly be replaced if they were retired (as Eta and Iota were in 2020).


Words Have Lost Their Common Meaning (The Atlantic)

theatlantic.com

John McWhorter, contributing writer at The Atlantic and professor at Columbia University

March 31, 2021


The word racism, among others, has become maddeningly confusing in current usage.

An illustration of quotation marks and the United States split in two. (Adam Maida / The Atlantic)

Has American society ever been in less basic agreement on what so many important words actually mean? Terms we use daily mean such different things to different people that communication is often blunted considerably, and sometimes even thwarted entirely. The gap between how the initiated express their ideological beliefs and how everyone else does seems larger than ever.

The word racism has become almost maddeningly confusing in current usage. It tempts a linguist such as me to contravene the dictum that trying to influence the course of language change is futile.

Racism began as a reference to personal prejudice, but in the 1960s was extended via metaphor to society, the idea being that a society riven with disparities according to race was itself a racist one. This convention, implying that something as abstract as a society can be racist, has always felt tricky, best communicated in sociology classes or careful discussions.

To be sure, the idea that disparities between white and Black people are due to injustices against Black people—either racist sentiment or large-scale results of racist neglect—seems as plain as day to some, especially in academia. However, after 50 years, this usage of racism has yet to stop occasioning controversy; witness the outcry when Merriam-Webster recently altered its definition of the word to acknowledge the “systemic” aspect. This controversy endures for two reasons.

First, the idea that all racial disparities are due to injustice may imply that mere cultural differences do not exist. The rarity of the Black oboist may be due simply to Black Americans not having much interest in the oboe—hardly a character flaw or evidence of some inadequacy—as opposed to subtly racist attitudes among music teachers or even the thinness of musical education in public schools. Second, the concept of systemic racism elides or downplays that disparities can also persist because of racism in the past, no longer in operation and thus difficult to “address.”

Two real-world examples of strained usage come to mind. Opponents of the modern filibuster have taken to calling it “racist” because it has been used for racist ends. This implies a kind of contamination, a rather unsophisticated perspective given that this “racist” practice has been readily supported by noted non-racists such as Barack Obama (before he changed his mind on the matter). Similar is the idea that standardized tests are “racist” because Black kids often don’t do as well on them as white kids. If the tests’ content is biased toward knowledge that white kids are more likely to have, that complaint may be justified. Otherwise, factors beyond the tests themselves, such as literacy in the home, whether children are tested throughout childhood, how plugged in their parents are to test-prep opportunities, and subtle attitudes toward school and the printed page, likely explain why some groups might be less prepared to excel at them.

Dictionaries are correct to incorporate the societal usage of racism, because it is now common coin. The lexicographer describes rather than prescribes. However, its enshrinement in dictionaries leaves its unwieldiness intact, just as a pretty map can include a road full of potholes that suddenly becomes one-way at a dangerous curve. Nearly every designation of someone or something as “racist” in modern America raises legitimate questions, and leaves so many legions of people confused or irritated that no one can responsibly dismiss all of this confusion and irritation as mere, well, racism.

To speak English is to know the difference between pairs of words that might as well be the same one: entrance and entry. Awesome and awful are similar. However, one might easily feel less confident about the difference between equality and equity, in the way that today’s crusaders use the word in diversity, equity, and inclusion.

In this usage, equity is not a mere alternate word for equality, but harbors an assumption: that where the races are not represented roughly according to their presence in the population, the reason must be a manifestation of (societal) racism. A teachers’ conference in Washington State last year included a presentation underlining: “If you conclude that outcomes differences by demographic subgroup are a result of anything other than a broken system, that is, by definition, bigotry.” A DEI facilitator specifies that “equity is not an outcome”—in the way equality is—but “a process that begins by acknowledging [people’s] unequal starting place and makes a commitment to correct and address the imbalance.”

Equality is a state, an outcome—but equity, a word that sounds just like it and has a closely related meaning, is a commitment and effort, designed to create equality. That is a nuance of a kind usually encountered in graduate seminars about the precise definitions of concepts such as freedom. It will throw or even turn off those disinclined to attend that closely: Fondness for exegesis will forever be thinly distributed among humans.

Many will thus feel that the society around them has enough “equalness”—i.e., what equity sounds like—such that what they may see as attempts to force more of it via set-aside policies will seem draconian rather than just. The subtle difference between equality and equity will always require flagging, which will only ever be so effective.

The nature of how words change, compounded by the effects of our social-media bubbles, means that many vocal people on the left now use social justice as a stand-in for justice—in the same way we say advance planning instead of planning or 12 midnight instead of midnight—as if the social part were a mere redundant, rhetorical decoration upon the keystone notion of justice. An advocacy group for wellness and nutrition titled one of its messages “In the name of social justice, food security and human dignity,” but within the text refers simply to “justice” and “injustice,” without the social prefix, as if social justice is simply justice incarnate. The World Social Justice Day project includes more tersely named efforts such as “Task Force on Justice” and “Justice for All.” Baked into this is a tacit conflation of social justice with justice conceived more broadly.

However, this usage of the term social justice is typically based on a very particular set of commitments especially influential in this moment: that all white people must view society as founded upon racist discrimination, such that all white people are complicit in white supremacy, requiring the forcing through of equity in suspension of usual standards of qualification or sometimes even logic (math is racist). A view of justice this peculiar, specific, and even revolutionary is an implausible substitute for millennia of discussion about the nature of the good, much less its apotheosis.

What to do? I suggest—albeit with little hope—that the terms social justice and equity be used, or at least heard, as the proposals that they are. Otherwise, Americans are in for decades of non-conversations based on greatly different visions of what justice and equ(al)ity are.

I suspect that the way the term racism is used is too entrenched to yield to anyone’s preferences. However, if I could wave a magic wand, Americans would go back to using racism to refer to personal sentiment, while we would phase out so hopelessly confusing a term as societal racism.

I would replace it with societal disparities, with a slot open afterward for according to race, or according to immigration status, or what have you. Inevitably, the sole term societal disparities would conventionalize as referring to race-related disparities. However, even this would avoid the endless distractions caused by using the same term—racism—for both prejudice and faceless, albeit pernicious, inequities.

My proposals qualify, indeed, as modest. I suspect that certain people will continue to use social justice as if they have figured out a concept that proved elusive from Plato through Kant through Rawls. Equity will continue to be refracted through that impression. Legions will still either struggle to process racism both harbored by persons and instantiated by a society, or just quietly accept the conflation to avoid making waves.

What all of this will mean is a debate about race in which our problem-solving is hindered by the fact that we too often lack a common language for discussing the topic.

John McWhorter is a contributing writer at The Atlantic. He teaches linguistics at Columbia University, hosts the podcast Lexicon Valley, and is the author of the upcoming Nine Nasty Words: English in the Gutter Then, Now and Always.

Cavani, the footballer, accused of racism in England for using a colloquial Uruguayan expression, in Spanish, on social media: systemic racism in language use in Uruguay

Uruguayan Academy of Letters defends Cavani in alleged racism case and laments English federation’s “lack of knowledge” (O Globo)

O Globo, with Reuters – January 2, 2021


The player was punished for using the term “negrito” on his social network, in thanking a friend who congratulated him after the victory over Southampton

January 2, 2021 – 10:33 / Updated January 2, 2021 – 11:12

Cavani, of Manchester United, was punished with a fine and a three-match suspension. Photo: MARTIN RICKETT / Pool via REUTERS

The Academy of Letters of Uruguay on Friday called the three-match ban that the Football Association (FA), English football’s governing body, imposed on Manchester United striker Edinson Cavani “ignorant” and a “grave injustice.” Cavani was punished for using the term “negrito” to refer to a follower in a social media post.

The 33-year-old Uruguayan used the word “negrito” in an Instagram post after the club’s victory over Southampton on November 29, before taking the post down and apologizing. He said it was an expression of affection toward a friend.

Cavani’s post that sparked the controversy. Photo: Reproduction

On Thursday, the FA said the comment was “improper and brought the game into disrepute” and fined Cavani 100,000 pounds.

The academy, an association dedicated to protecting and promoting the Spanish used in Uruguay, said it “energetically rejected the sanction.”

“The English Football Association has committed a grave injustice against the Uruguayan sportsman … and has shown its ignorance and error in regulating the use of language, in particular Spanish, without paying attention to all its complexities and contexts,” the academy said through its president, Wilfredo Penco. “In the context in which it was written, the only value that can be given to negrito (especially in the diminutive) is an affectionate one.”

According to the academy, words referring to skin color, weight, and other physical characteristics are frequently used among friends and relatives in Latin America, especially in the diminutive. The academy adds that the people these expressions are aimed at often do not even have the characteristics in question.

“The use Cavani made of it to address his friend ‘pablofer2222’ (the account name) has this kind of affectionate tenor: given the context in which it was written, the person to whom it was addressed, and the variety of Spanish used, the only value ‘negrito’ can have is an affectionate one. To insult someone in Spanish, English, or any other language, one must have the capacity to offend the other person, and in that case ‘pablofer2222’ himself would have expressed his discomfort,” the academy concludes.

Cavani: “My heart is at peace”

Cavani commented on the episode on social media and acknowledged his “discomfort” with the situation. He insisted that it was never his intention to offend his friend and that the expression he used was one of affection.

“I don’t want to dwell on this uncomfortable moment. I want to say that I accept the disciplinary sanction, knowing that I am a foreigner to the customs of the English language, but I do not share the same point of view. I apologize if I offended anyone with an expression of affection toward a friend; that was not my intention. Those who know me know that what I always seek is simple joy and friendship,” the player wrote.

“I am grateful for the countless messages of support and affection. My heart is at peace because I know I have always expressed myself with affection, in keeping with my culture and way of life. A sincere embrace.”


oglobo.globo.com

Cavani: Uruguayan federation and national team players defend the striker and call for his racism penalty to be reviewed (O Globo)

The player was suspended for three matches and fined by the Football Association for writing “Negrito” on his social media

January 4, 2021 – 12:46 / Updated January 4, 2021 – 13:45

Cavani was suspended for three matches by the English federation, accused of racism. Photo: MARTIN RICKETT / Pool via REUTERS

The punishment imposed on Edinson Cavani by the Football Association (FA, the body that governs English football) for posting the term “Negrito” (a Spanish diminutive of “negro”) on his social media remains at the center of an intense debate in Uruguay. After the Uruguayan Academy of Letters expressed its solidarity and called the penalty an act of cultural ignorance, national team players and the Uruguayan Football Association (AUF) itself have spoken out in the striker’s defense.

On Monday, the Footballers’ Association of Uruguay published a letter repudiating the FA’s decision. The document calls the punishment arbitrary and says the governing body took a distorted, dogmatic, and ethnocentric view of the matter.

“Far from mounting a defense against racism, what the FA committed was a discriminatory act against the culture and way of life of Uruguayans,” charges the body that represents the country’s professional players.

The document was shared on social media by members of the national team, among them striker Luis Suárez, of Atlético Madrid, and captain Diego Godín, a defender for Cagliari in Italy.

Soon afterward, the Uruguayan federation itself joined the network of support for the striker, an idol of the Celeste. In a statement released on its social media channels, the AUF asks the FA to withdraw Cavani’s penalty and reiterates the Uruguayan Academy of Letters’ argument that the term “negrito” carries no racist connotation.

“In our Spanish, which differs greatly from the Castilian spoken in other regions of the world, the nicknames negro/a and negrito/a are used constantly as expressions of friendship, affection, closeness, and trust, and in no way refer disparagingly or discriminatorily to the race or skin color of the person alluded to,” the body argues.

Cavani has already served the first of his three matches of suspension. He was left out of Manchester United’s squad for last Saturday’s Premier League match against Aston Villa. On top of the ban, the player was ordered to pay a fine of 100,000 pounds (about R$700,000). The punishment came after he wrote “Thank you, negrito” in reply to a compliment from an Instagram follower.

“A comment posted on the Manchester United player’s Instagram page was insulting and/or abusive and/or improper and/or brought the game into disrepute,” the FA said in announcing the penalty.

Although the episode caused great indignation in Uruguay, a majority-white country, Cavani himself did not take the case any further. In his statement, the striker said he was bothered by the situation and did not agree with the punishment, but emphasized that he accepted it.

How a Famous Harvard Professor Became a Target Over His Tweets (New York Times)

nytimes.com

By Michael Powell, July 15, 2020

The outcry over free speech and race takes aim at Steven Pinker, the best-selling author and well-known scholar.

Professor Steven Pinker, in his office in Cambridge, Mass., in 2018. He has been accused of racial insensitivity by people he describes as “speech police.”
Credit: Kayana Szymczak for The New York Times

Steven Pinker occupies a role that is rare in American life: the celebrity intellectual. The Harvard professor pops up on outlets from PBS to the Joe Rogan podcast, translating dense subjects into accessible ideas with enthusiasm. Bill Gates called his most recent book “my new favorite book of all time.”

So when more than 550 academics recently signed a letter seeking to remove him from the list of “distinguished fellows” of the Linguistic Society of America, it drew attention to their provocative charge: that Professor Pinker minimizes racial injustices and drowns out the voices of those who suffer sexist and racist indignities.

But the letter was striking for another reason: It took aim not at Professor Pinker’s scholarly work but at six of his tweets dating back to 2014, and at a two-word phrase he used in a 2011 book about a centuries-long decline in violence.

“Dr. Pinker has a history of speaking over genuine grievances and downplaying injustices, frequently by misrepresenting facts, and at the exact moments when Black and Brown people are mobilizing against systemic racism and for crucial changes,” their letter stated.

The linguists demanded that the society revoke Professor Pinker’s status as a “distinguished fellow” and strike his name from its list of media experts. The society’s executive committee declined to do so last week, stating: “It is not the mission of the society to control the opinions of its members, nor their expression.”

But a charge of racial insensitivity carries power in the current climate, and the letter sounded another shot in the fraught cultural battles now erupting in academia and publishing.

Also this month, 153 intellectuals and writers — many of them political liberals — signed a letter in Harper’s Magazine that criticized the current intellectual climate as “constricted” and “intolerant.” That led to a fiery response from opposing liberal and leftist writers, who accused the Harper’s letter writers of elitism and hypocrisy.

In an era of polarizing ideologies, Professor Pinker, a linguist and social psychologist, is tough to pin down. He is a big supporter of Democrats, and donated heavily to former President Barack Obama, but he has denounced what he sees as the closed-mindedness of heavily liberal American universities. He likes to publicly entertain ideas outside the academic mainstream, including the question of innate differences between the sexes and among different ethnic and racial groups. And he has suggested that the political left’s insistence that certain subjects are off limits contributed to the rise of the alt-right.

Reached at his home on Cape Cod, Professor Pinker, 65, noted that as a tenured faculty member and established author, he could weather the campaign against him. But he said it could chill junior faculty who hold views counter to prevailing intellectual currents.

“I have a mind-set that the world is a complex place we are trying to understand,” he said. “There is an inherent value to free speech, because no one knows the solution to problems a priori.”

He described his critics as “speech police” who “have trolled through my writings to find offensive lines and adjectives.”

The letter against him focuses mainly on his activity on Twitter, where he has some 600,000 followers. It points to his 2015 tweet of an article from The Upshot, the data and analysis-focused team at The New York Times, which suggested that the high number of police shootings of Black people may not have been caused by racial bias of individual police officers, but rather by the larger structural and economic realities that result in the police having disproportionately high numbers of encounters with Black residents.

“Data: Police don’t shoot blacks disproportionately,” Professor Pinker tweeted with a link to the article. “Problem: Not race, but too many police shootings.”

The linguists’ letter noted that the article made plain that police killings are a racial problem, and accused Professor Pinker of making “dishonest claims in order to obfuscate the role of systemic racism in police violence.”

But the article also suggested that, because every encounter with the police carries danger of escalation, any racial group interacting with the police frequently risked becoming victims of police violence, due to poorly trained officers, armed suspects or overreaction. That appeared to be the point of Professor Pinker’s tweet.

The linguists’ letter also accused the professor of engaging in racial dog whistles when he used the words “urban crime” and “urban violence” in other tweets.

But in those tweets, Professor Pinker had linked to the work of scholars who are widely described as experts on urban crime and urban violence and its decline.

“‘Urban’ appears to be a usual terminological choice in work in sociology, political science, law and criminology,” wrote Jason Merchant, vice provost and a linguistics professor at the University of Chicago, who defended Professor Pinker.

Another issue, Professor Pinker’s critics say, is contained in his 2011 book, “The Better Angels of Our Nature: Why Violence Has Declined.” In a wide-ranging description of crime and urban decay and its effect on the culture of the 1970s and 1980s, he wrote that “Bernhard Goetz, a mild-mannered engineer, became a folk hero for shooting four young muggers in a New York subway car.”

The linguists’ letter took strong issue with the words “mild-mannered,” noting that a neighbor later said that Goetz had spoken in racist terms of Latinos and Black people. He was not “mild-mannered” but rather intent on confrontation, they said.

The origin of the letter remains a mystery. Of 10 signers contacted by The Times, only one hinted that she knew the identity of the authors. Many of the linguists proved shy about talking, and since the letter first surfaced on Twitter on July 3, several prominent linguists have said their names had been included without their knowledge.

Several department chairs in linguistics and philosophy signed the letter, including Professor Barry Smith of the University at Buffalo and Professor Lisa Davidson of New York University. Professor Smith did not return calls or an email, and Professor Davidson declined to comment when The Times reached out.

The linguists’ letter touched only lightly on questions that have proved storm-tossed for Professor Pinker in the past. In the debate over whether nature or nurture shapes human behavior, he has leaned toward nature, arguing that characteristics like psychological traits and intelligence are to some degree heritable.

He has also suggested that underrepresentation in the sciences could be rooted in part in biological differences between men and women. (He defended Lawrence Summers, the former Harvard president who in 2005 speculated that innate differences between the sexes might in part explain why fewer women succeed in science and math careers. Mr. Summers’s remark infuriated some female scientists and was among several controversies that led to his resignation the following year.)

And Professor Pinker has made high-profile blunders, such as when he provided his expertise on language for the 2007 defense of the financier Jeffrey Epstein on sex trafficking charges. He has said he did so free of charge and at the request of a friend, the Harvard law professor Alan Dershowitz, and regrets it.

The clash may also reflect the fact that Professor Pinker’s rosy outlook — he argues that the world is becoming a better place, by almost any measure, from poverty to literacy — sounds discordant during this painful moment of national reckoning with the still-ugly scars of racism and inequality.

The linguists’ society, like many academic and nonprofit organizations, recently released a wide-ranging statement calling for greater diversity in the field. It also urged linguists to confront how their research “might reproduce or work against racism.”

John McWhorter, a Columbia University professor of English and linguistics, cast the Pinker controversy within a moment when, he said, progressives look suspiciously at anyone who does not embrace the politics of racial and cultural identity.

“Steve is too big for this kerfuffle to affect him,” Professor McWhorter said. “But it’s depressing that an erudite and reasonable scholar is seen by a lot of intelligent people as an undercover monster.”

Because this is a fight involving linguists, it features some expected elements: intense arguments about imprecise wording and sly intellectual put-downs. Professor Pinker may have inflamed matters when he suggested in response to the letter that its signers lacked stature. “I recognize only one name among the signatories,” he tweeted. Such an argument, Byron T. Ahn, a linguistics professor at Princeton, wrote in a tweet of his own, amounted to “a kind of indirect ad hominem attack.”

The linguists insisted they were not attempting to censor Professor Pinker. Rather, they were intent on showing that he had been deceitful and used racial dog whistles, and thus was a disreputable representative for linguistics.

“Any resulting action from this letter may make it clear to Black scholars that the L.S.A. is sensitive to the impact that tweets of this sort have on maintaining structures that we should be attempting to dismantle,” wrote Professor David Adger of Queen Mary University of London on his website.

That line of argument left Professor McWhorter, a signer of the letter in Harper’s, exasperated.

“We’re in this moment that’s like a collective mic drop, and civility and common sense go out the window,” he said. “It’s enough to cry racism or sexism, and that’s that.”

Language is learned in brain circuits that predate humans (Georgetown University)

PUBLIC RELEASE: 

GEORGETOWN UNIVERSITY MEDICAL CENTER

WASHINGTON — It has often been claimed that humans learn language using brain components that are specifically dedicated to this purpose. Now, new evidence strongly suggests that language is in fact learned in brain systems that are also used for many other purposes and even pre-existed humans, say researchers in PNAS (Early Edition online Jan. 29).

The research combines results from multiple studies involving a total of 665 participants. It shows that children learn their native language and adults learn foreign languages in evolutionarily ancient brain circuits that also are used for tasks as diverse as remembering a shopping list and learning to drive.

“Our conclusion that language is learned in such ancient general-purpose systems contrasts with the long-standing theory that language depends on innately-specified language modules found only in humans,” says the study’s senior investigator, Michael T. Ullman, PhD, professor of neuroscience at Georgetown University School of Medicine.

“These brain systems are also found in animals – for example, rats use them when they learn to navigate a maze,” says co-author Phillip Hamrick, PhD, of Kent State University. “Whatever changes these systems might have undergone to support language, the fact that they play an important role in this critical human ability is quite remarkable.”

The study has important implications not only for understanding the biology and evolution of language and how it is learned, but also for how language learning can be improved, both for people learning a foreign language and for those with language disorders such as autism, dyslexia, or aphasia (language problems caused by brain damage such as stroke).

The research statistically synthesized findings from 16 studies that examined language learning in two well-studied brain systems: declarative and procedural memory.

The results showed that how good we are at remembering the words of a language correlates with how good we are at learning in declarative memory, which we use to memorize shopping lists or to remember the bus driver’s face or what we ate for dinner last night.

Grammar abilities, which allow us to combine words into sentences according to the rules of a language, showed a different pattern. The grammar abilities of children acquiring their native language correlated most strongly with learning in procedural memory, which we use to learn tasks such as driving, riding a bicycle, or playing a musical instrument. In adults learning a foreign language, however, grammar correlated with declarative memory at earlier stages of language learning, but with procedural memory at later stages.

The correlations were large, and were found consistently across languages (e.g., English, French, Finnish, and Japanese) and tasks (e.g., reading, listening, and speaking tasks), suggesting that the links between language and the brain systems are robust and reliable.
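The phrase “statistically synthesized” refers to meta-analysis: correlations from the individual studies are converted to a common scale and pooled, with larger samples weighted more heavily. As a rough illustration only — the numbers below are invented, not data from the 16 studies — here is a minimal fixed-effect pooling sketch using the standard Fisher r-to-z transform:

```python
import math

# (correlation r, sample size n) per study -- invented illustrative values,
# not data from the PNAS synthesis.
studies = [(0.55, 40), (0.62, 25), (0.48, 60)]

def pooled_correlation(studies):
    """Fixed-effect meta-analytic pooling via Fisher's r-to-z transform."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z; its variance is 1/(n - 3)
        w = n - 3                # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean to r

print(round(pooled_correlation(studies), 3))
```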

The findings have broad research, educational, and clinical implications, says co-author Jarrad Lum, PhD, of Deakin University in Australia.

“Researchers still know very little about the genetic and biological bases of language learning, and the new findings may lead to advances in these areas,” says Ullman. “We know much more about the genetics and biology of the brain systems than about these same aspects of language learning. Since our results suggest that language learning depends on the brain systems, the genetics, biology, and learning mechanisms of these systems may very well also hold for language.”

For example, though researchers know little about which genes underlie language, numerous genes playing particular roles in the two brain systems have been identified. The findings from this new study suggest that these genes may also play similar roles in language. Along the same lines, the evolution of these brain systems, and how they came to underlie language, should shed light on the evolution of language.

Additionally, the findings may lead to approaches that could improve foreign language learning and language problems in disorders, Ullman says.

For example, various pharmacological agents (e.g., the drug memantine) and behavioral strategies (e.g., spacing out the presentation of information) have been shown to enhance learning or retention of information in the brain systems, he says. These approaches may thus also be used to facilitate language learning, including in disorders such as aphasia, dyslexia, and autism.

“We hope and believe that this study will lead to exciting advances in our understanding of language, and in how both second language learning and language problems can be improved,” Ullman concludes.

What happens to language as populations grow? It simplifies, say researchers (Cornell)

PUBLIC RELEASE: 

CORNELL UNIVERSITY

ITHACA, N.Y. – Languages present an intriguing paradox. Languages with lots of speakers, such as English and Mandarin, have large vocabularies with relatively simple grammar, while languages with fewer speakers have fewer words but more complex grammars.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.

Their study was published in the Proceedings of the Royal Society B: Biological Sciences.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.

The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.

But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

Conversely, in a large community, like a big city, one person will talk to only a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
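The published simulations are more elaborate, but the core mechanism — easy items need one exposure, hard items need several, and small communities deliver more repeat exposures per listener — can be caricatured in a few lines. The sketch below is my own construction under those assumptions, not the paper’s code:

```python
import random

def adoption_fraction(pop_size, exposures_needed,
                      talks_per_speaker=100, generations=10):
    """Toy model: each current user of an innovation produces a fixed
    number of utterances per generation, each heard by one random
    listener; a listener adopts after `exposures_needed` hearings."""
    users = {0}  # a single innovator
    for _ in range(generations):
        heard = [0] * pop_size
        for _ in range(len(users) * talks_per_speaker):
            heard[random.randrange(pop_size)] += 1
        users.update(i for i, h in enumerate(heard) if h >= exposures_needed)
    return len(users) / pop_size

random.seed(1)
for pop in (20, 500):  # village vs. city
    # word-like item (1 exposure) vs. rule-like item (5 exposures)
    print(pop, adoption_fraction(pop, 1), adoption_fraction(pop, 5))
```

Under these (arbitrary) settings the one-exposure item saturates both communities, while the five-exposure item takes over the village of 20 but never escapes its lone innovator in the city of 500 — the paper’s asymmetry in miniature.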

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”

His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of University of Warwick, England.

A mysterious 14-year cycle has been controlling our words for centuries (Science Alert)

Some of your favourite science words are making a comeback.

DAVID NIELD
2 DEC 2016

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
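The Ngram Viewer lets anyone export a word’s yearly frequency series, and the oscillation hunt can be sketched in a few lines: remove the slow trend, then look for the dominant spectral peak in what remains. The snippet below runs on synthetic data built to contain a 14-year cycle; it illustrates the general approach, not the authors’ actual scripts:

```python
import numpy as np

def dominant_period_years(yearly_freqs):
    """Dominant oscillation period of a yearly frequency series,
    after subtracting a 21-year moving-average trend."""
    x = np.asarray(yearly_freqs, dtype=float)
    trend = np.convolve(x, np.ones(21) / 21, mode="same")
    power = np.abs(np.fft.rfft(x - trend)) ** 2
    cycles_per_year = np.fft.rfftfreq(len(x), d=1.0)
    peak = 1 + np.argmax(power[1:])      # skip the zero-frequency bin
    return 1.0 / cycles_per_year[peak]

# Synthetic series, 1700-1999: slow growth plus a 14-year oscillation
years = np.arange(1700, 2000)
freq = 1e-5 * (1 + 0.3 * np.sin(2 * np.pi * years / 14)) + 1e-8 * (years - 1700)
print(round(dominant_period_years(freq), 1))  # ~14
```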

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.

You can try it out for yourself, and search for any word’s popularity over time.

But if there are certain nouns you’re fond of, make the most of them, because they might not be in common use for much longer.

The findings have been published in Palgrave Communications.

Most adults know more than 42,000 words (Science Daily)

Date:
August 16, 2016
Source:
Frontiers
Summary:
Armed with a new list of words and using the power of social media, a new study has found that by the age of 20, a native English-speaking American knows 42,000 dictionary words.

Dictionary. How many words do you know? Credit: © mizar_21984 / Fotolia

How many words do we know? It turns out that even language experts and researchers have a tough time estimating this.

Armed with a new list of words and using the power of social media, a new study published in Frontiers in Psychology has found that by the age of 20, a native English-speaking American knows 42,000 dictionary words.

“Our research got a huge push when a television station in the Netherlands asked us to organize a nation-wide study on vocabulary knowledge,” states Professor Marc Brysbaert of Ghent University in Belgium and leader of this study. “The test we developed was featured on TV and, in the first weekend, over 300,000 Dutch speakers had done it — it really went viral.”

Realising how interested people are in finding out their vocabulary size, the team then made similar tests in English and Spanish. The English test has now been taken by almost one million people. It takes up to four minutes to complete and has been shared widely on Facebook and Twitter, giving the team access to an unprecedented amount of data.

“At the Centre of Reading Research we are investigating what determines the ease with which words are recognized,” explained Professor Brysbaert. The test includes a list of 62,000 words that he and his team have compiled.

He added: “As we made the list ourselves and have not used a commercially available dictionary list with copyright restrictions, it can be made available to everyone, and all researchers can access it.”

The test is simple. You are asked if the word on the screen is, or is not, an existing word in English. In each test, there are 70 words, and 30 letter sequences that look like words but are not actually existing words.

The test will also ask you for some personal information such as your age, gender, education level and native language. This has enabled the team to discover that the average twenty-year-old native English-speaking American knows 42,000 dictionary words. As we get older, we learn one new word every two days, which means that by the age of 60, we know an additional 6,000 words.
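The scoring behind that estimate is worth spelling out. A yes/no test is vulnerable to guessing, so the usual correction subtracts the false-alarm rate (yes-responses to nonwords) from the hit rate (yes-responses to real words) before scaling up to the full 62,000-word list. A minimal sketch of that logic — the paper’s exact procedure may differ in its details:

```python
def vocabulary_estimate(yes_to_words, yes_to_nonwords,
                        n_words=70, n_nonwords=30, list_size=62000):
    """Guessing-corrected vocabulary estimate from one test session."""
    hit_rate = yes_to_words / n_words                 # real words accepted
    false_alarm_rate = yes_to_nonwords / n_nonwords   # nonwords accepted
    corrected = max(0.0, hit_rate - false_alarm_rate)
    return corrected * list_size

# Someone who accepts 60 of 70 words but also 3 of 30 nonwords:
print(int(vocabulary_estimate(60, 3)))  # ~46,900 words
```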

“As a researcher, I am most interested in what this data can tell us about word prevalence, i.e. how well each word is known in a language,” added Professor Brysbaert.

“In Dutch, we have seen that this explains a lot about word processing times. People respond much faster to words known by all people than to words known by 95% of the population, even if the words are used with the same frequency. We are convinced that word prevalence will become an important variable in word recognition research.”

With data from about 200,000 people who speak English as a second language, the team can also start to look at how well these people know certain words, which could have implications for language education.

This is the largest study of its kind ever attempted. Professor Brysbaert has plans to improve the accuracy of the test and extend the list to include over 75,000 words.

“This work is part of the big data movement in research, where big datasets are collected to be mined,” he concluded.

“It also gives us a snapshot of English word knowledge at the beginning of the 21st century. I can imagine future language researchers will be interested in this database to see how English has evolved over 100 years, 1000 years and maybe even longer.”


Journal Reference:

  1. Marc Brysbaert, Michaël Stevens, Paweł Mandera, Emmanuel Keuleers. How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant’s Age. Frontiers in Psychology, 2016; 7. DOI: 10.3389/fpsyg.2016.01116

How philosophy came to disdain the wisdom of oral cultures (AEON)

01 June 2016

Justin E H Smith is a professor of history and philosophy of science at the Université Paris Diderot – Paris 7. He writes frequently for The New York Times and Harper’s Magazine. His latest book is The Philosopher: A History in Six Types (2016).

Published in association with Princeton University Press, an Aeon Partner

Edited by Marina Benjamin


A poet, somewhere in Siberia, or the Balkans, or West Africa, some time in the past 60,000 years, recites thousands of memorised lines in the course of an evening. The lines are packed with fixed epithets and clichés. The bard is not concerned with originality, but with intonation and delivery: he or she is perfectly attuned to the circumstances of the day, and to the mood and expectations of his or her listeners.

If this were happening 6,000-plus years ago, the poet’s words would in no way have been anchored in visible signs, in text. For the vast majority of the time that human beings have been on Earth, words have had no worldly reality other than the sound made when they are spoken.

As the theorist Walter J Ong pointed out in Orality and Literacy: Technologizing the Word (1982), it is difficult, perhaps even impossible, now to imagine how differently language would have been experienced in a culture of ‘primary orality’. There would be nowhere to ‘look up a word’, no authoritative source telling us the shape the word ‘actually’ takes. There would be no way to affirm the word’s existence at all except by speaking it – and this necessary condition of survival is important for understanding the relatively repetitive nature of epic poetry. Say it over and over again, or it will slip away. In the absence of fixed, textual anchors for words, there would be a sharp sense that language is charged with power, almost magic: the idea that words, when spoken, can bring about new states of affairs in the world. They do not so much describe, as invoke.

As a consequence of the development of writing, first in the ancient Near East and soon after in Greece, old habits of thought began to die out, and certain other, previously latent, mental faculties began to express themselves. Words were now anchored and, though spellings could change from one generation to another, or one region to another, there were now physical traces that endured, which could be transmitted, consulted and pointed to in settling questions about the use or authority of spoken language.

Writing rapidly turned customs into laws, agreements into contracts, genealogical lore into history. In each case, what had once been fundamentally temporal and singular was transformed into something eternal (as in, ‘outside of time’) and general. Even the simple act of making everyday lists of common objects – an act impossible in a primary oral culture – was already a triumph of abstraction and systematisation. From here it was just one small step to what we now call ‘philosophy’.

Homer’s epic poetry, which originates in the same oral epic traditions as those of the Balkans or of West Africa, was written down, frozen, fixed, and from this it became ‘literature’. There are no arguments in the Iliad: much of what is said arises from metrical exigencies, the need to fill in a line with the right number of syllables, or from epithets whose function is largely mnemonic (and thus unnecessary when transferred into writing). Yet Homer would become an authority for early philosophers nonetheless: revealing truths about humanity not by argument or debate, but by declamation, now frozen into text.

Plato would express extreme concern about the role, if any, that poets should play in society. But he was not talking about poets as we think of them: he had in mind reciters, bards who incite emotions with living performances, invocations and channellings of absent persons and beings.

It is not orality that philosophy rejects, necessarily: Socrates himself rejected writing, identifying instead with a form of oral culture. Plato would also ensure the philosophical canonisation of his own mentor by writing down (how faithfully, we don’t know) what Socrates would have preferred to merely say, and so would have preferred to have lost to the wind. Arguably, it is in virtue of Plato’s recording that we might say, today, that Socrates was a philosopher.

Plato and Aristotle, both, were willing to learn from Homer, once he had been written down. And Socrates, though Plato still felt he had to write him down, was already engaged in a sort of activity very different from poetic recitation. This was dialectic: the structured, working-through of a question towards an end that has not been predetermined – even if this practice emerged indirectly from forms of reasoning only actualised with the advent of writing.

The freezing in text of dialectical reasoning, with a heavy admixture (however impure or problematic) of poetry, aphorism and myth, became the model for what, in the European tradition, was thought of as ‘philosophy’ for the next few millennia.

Why are these historical reflections important today? Because what is at stake is nothing less than our understanding of the scope and nature of philosophical enquiry.

The Italian philosopher of history Giambattista Vico wrote in his Scienza Nuova (1725): ‘the order of ideas must follow the order of institutions’. This order was, namely: ‘First the woods, then cultivated fields and huts, next little houses and villages, thence cities, finally academies and philosophers.’ It is implicit for Vico that the philosophers in these academies are not illiterate. The order of ideas is the order of the emergence of the technology of writing.

Within academic philosophy today, there is significant concern arising from how to make philosophy more ‘inclusive’, but no interest at all in questioning Vico’s order, in going back and recuperating what forms of thought might have been left behind in the woods and fields.

The groups ordinarily targeted by philosophy’s ‘inclusivity drive’ already dwell in the cities and share in literacy, even if discriminatory measures often block their full cultivation of it. No arguments are being made for the inclusion of people belonging to cultures that value other forms of knowledge: there are no efforts to recruit philosophers from among Inuit hunters or Hmong peasants.

The practical obstacles to such recruitment from a true cross-section of humanity are obvious. Were it to happen, the simple process of moving from traditional ways of life into academic institutions would at the same time dilute and transform the perspectives that are deserving of more attention. Irrespective of such unhappy outcomes, there is already substantial scholarship on these forms of thought accumulated in philosophy’s neighbouring disciplines – notably history, anthropology, and world literatures – to which philosophers already have access. It’s a literature that could serve as a corrective to the foundational bias, present since the emergence of philosophy as a distinct activity.

As it happens, there are few members of primary oral cultures left in the world. And yet from a historical perspective the great bulk of human experience resides with them. There are, moreover, members of literate cultures, and subcultures, whose primary experience of language is oral, based in storytelling, not argumentation, and that is living and charged, not fixed and frozen. Plato saw these people as representing a lower, and more dangerous, use of language than the one worthy of philosophers.

Philosophers still tend to disdain, or at least to conceive as categorically different from their own speciality, the use of language deployed by bards and poets, whether from Siberia or the South Bronx. Again, this disdain leaves out the bulk of human experience. Until it is eradicated, the present talk of the ideal of inclusion will remain mere lip-service.

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
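The “400 orders of magnitude” is simply the ratio of the two estimates:

\[ \frac{10^{500}\ \text{candidate universes}}{10^{80}\ \text{subatomic particles}} = 10^{420}, \]

that is, 420 orders of magnitude — comfortably more than 400.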

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.

This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and birth their young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.

In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
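The two numbers attached to a colour are tied together by the elementary wave relation between frequency, wavelength and the speed of light; for red light, for instance:

\[ f = \frac{c}{\lambda} = \frac{3.0\times 10^{8}\ \text{m/s}}{700\times 10^{-9}\ \text{m}} \approx 4.3\times 10^{14}\ \text{Hz}. \]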

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.
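For reference, the set Maxwell arrived at is standardly written today in differential form (the textbook statement, not notation from the essay); the radio-wave prediction falls out because the equations propagate disturbances at speed \( c = 1/\sqrt{\mu_0\varepsilon_0} \):

\[ \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}. \]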

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of the early 1800s, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light.

Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.
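How big are the two competing clock effects? The textbook GPS orbit gives a feel for the sizes involved (my illustration with standard constants; the satellites in the proposed experiment would have different orbits):

```python
import math

G, M, c = 6.674e-11, 5.972e24, 2.998e8  # SI: gravitational constant, Earth mass, light speed
R_ground, r_orbit = 6.371e6, 2.657e7    # metres: Earth's surface, GPS orbital radius

v = math.sqrt(G * M / r_orbit)          # circular orbital speed (~3.9 km/s)
sr_shift = -v**2 / (2 * c**2)           # special relativity: moving clock runs slow
gr_shift = (G * M / c**2) * (1 / R_ground - 1 / r_orbit)  # weaker gravity: runs fast

us_per_day = 86400e6
print(f"SR {sr_shift * us_per_day:+.1f}, GR {gr_shift * us_per_day:+.1f}, "
      f"net {(sr_shift + gr_shift) * us_per_day:+.1f} microseconds/day")
```

With GPS numbers the satellite clock loses roughly 7 microseconds a day to motion and gains roughly 45 to the weaker gravitational field, for a net gain of about 38; the passage’s point is that for the entangled photons no single such bookkeeping satisfies all three observers at once.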

You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves into distraction.

3 June 2013

Chimps joining new troop learn its ‘words’: study (Reuters)

BY SHARON BEGLEY

NEW YORK, Thu Feb 5, 2015 1:03pm EST

(Reuters) – Just as Bostonians moving to Tokyo ditch “grapefruit” and adopt “pamplemousse,” so chimps joining a new troop change their calls to match those of their new troop, scientists reported on Thursday in the journal Current Biology.

The discovery represents the first evidence that animals besides humans can replace the vocal sounds their native group uses for specific objects – in the chimps’ case, apples – with those of their new community.

One expert on chimp vocalizations, Bill Hopkins of Yerkes National Primate Research Center in Atlanta, who was not involved in the study, questioned some of its methodology, such as how the scientists elicited and recorded the chimps’ calls, but called it “interesting work.”

Chimps have specific grunts, barks, hoots and other vocalizations for particular foods, for predators and for requests such as “look at me,” which members of their troop understand.

Earlier studies had shown that these primates, humans’ closest living relatives, can learn totally new calls in research settings through intensive training. And a 2012 study led by Yerkes’ Hopkins showed that young chimps are able to pick up sounds meaning “human, pay attention to me,” from their mothers.

But no previous research had shown that chimps can replace a call they had used for years with one used by another troop. Instead, primatologists had thought that sounds referring to objects in the environment were learned at a young age and essentially permanent, with any variations reflecting nuances such as how excited the animal is about, say, a banana.

In the new research, scientists studied adult chimpanzees that in 2010 had been moved from a safari park in the Netherlands to Scotland’s Edinburgh Zoo, to live with nine other adults in a huge new enclosure.

It took three years, and the formation of strong social bonds among the animals, but the grunt that the seven Dutch chimps used for “apple” (a favorite food) changed from a high-pitched eow-eow-eow to the lower-pitched udh-udh-udh used by the six Scots, said co-author Simon Townsend of the University of Zurich. The change was apparent even to non-chimp-speakers (scientists).

“We showed that, through social learning, the chimps could change their vocalizations,” Townsend said in an interview. That suggests human language isn’t unique in using socially-learned sounds to signify objects.

Unanswered is what motivated the Dutch chimps to sound more like the Scots: to be better understood, or to fit in by adopting the reigning patois?

(Reporting by Sharon Begley; Editing by Nick Zieminski)

Experts criticize problems in the orthographic accord (Agência Brasil)

The issue is under debate in the Senate Education Committee

Professor Pasquale Cipro Neto argued on Wednesday (22) for a revision of the Portuguese Language Orthographic Agreement. “The text of the accord is so full of problems that the [Brazilian] Academy [of Letters] had to publish an explanatory note [on points of the accord]. Why was that necessary? Because there are problems,” the professor stressed during the second day of debates on the subject in the Senate Education Committee.

According to Pasquale, Brazil got ahead of the other signatory countries in implementing the accord, which prevented simultaneous adoption of the new rules. In his view, the country’s handling of the process was rushed and disorganized. “We cannot go forward with a text that lacks polish and concrete solutions,” he said.

The many rules governing the use of the hyphen, which the professor considers one of the norm’s great weaknesses, drew some of the harshest criticism. For Pasquale Neto, in the text of the accord “the hyphen was mistreated, poorly resolved.” In his view, the issue needs to be settled. He finds it inexplicable, for instance, that the word “pé-de-meia” is written with hyphens while “pé de moleque” is not.

For professor Stella Maris Bortoni de Figueiredo Ricardo, a member of the Brazilian Linguistics Association (Abralin), any proposed change must be agreed with the signatory countries. “Abralin recommends consolidating the 1990 Orthographic Agreement, without any unilateral alteration. Any change one may wish to make to the accord should be made within the framework of the CPLP [Community of Portuguese Language Countries] and the IILP [International Institute of the Portuguese Language],” she argued.

To discuss suggestions for improving the accord, the Senate Education Committee created a technical working group in 2013, formed by professors Ernani Pimentel and Pasquale Cipro Neto, which is due to present a synthesis in March 2015. At the committee’s urging, definitive implementation was postponed from January 2013 to January 2016 by decree of President Dilma Rousseff.

In yesterday’s session (21), the president of the Center for Linguistic Studies of the Portuguese Language, Ernani Pimentel, stirred controversy by calling for greater grammatical simplification. He leads a movement to adopt a phonetic criterion for spelling, that is, writing words according to how they are spoken. By that criterion the word “chuva” (rain), for example, would be written with an x (“xuva”), with no regard for its origin. For the professor, simplification would spare new generations “outdated rules that demand rote memorization.”

The suggestion was rejected by grammarian Evanildo Bechara, who believes that phonetic simplification, “ideal on its face,” would create more problems than it solves, since it would do away with homophones, words that sound the same but have different spellings and meanings. The words “seção” (section), “sessão” (session) and “cessão” (cession), he noted, would be reduced to a single spelling, “sesão”, which would hinder comprehension. “We would apparently have solved an orthographic problem, but we would create a bigger problem for the function of language, which is communication between people,” he remarked.

The grammarian believes the accord has merits and represents progress for the use of the language and for unifying rules among the Portuguese-speaking countries. He stressed that the signatory countries may, after the new rules are implemented, approve modifications and adjustments if necessary.

For the committee’s chairman, Senator Cyro Miranda (PSDB-GO), the debates are not intended to alter the accord, since that role, he noted, belongs to the Executive, in consultation with the other signatory countries. “Our obligation is to call in the people involved to give their opinions. But it is the Ministry of Education and the Ministry of Foreign Affairs that take the lead. We are pointing out the difficulties and, if possible, we will contribute,” he said.

(Karine Melo / Agência Brasil)

http://agenciabrasil.ebc.com.br/educacao/noticia/2014-10/especialistas-criticam-problemas-no-acordo-ortografico

Saving Native Languages and Culture in Mexico With Computer Games (Indian Country)


9/21/14

Indigenous children in Mexico can now learn their mother tongues with specialized computer games, helping to prevent the further loss of those languages across the country.

“Three years ago, before we employed these materials, we were on the verge of seeing our children lose our Native languages,” asserted Matilde Hernandez, a teacher in Zitacuaro, Michoacan.

“Now they are speaking and singing in Mazahua as if that had never happened,” Hernandez said, referring to computer software that provides games and lessons in most of the linguistic families of the country including Mazahua, Chinanteco, Nahuatl of Puebla, Tzeltal, Mixteco, Zapateco, Chatino and others.

The new software was created by scientists and educators in two research institutions in Mexico: the Victor Franco Language and Culture Lab (VFLCL) of the Center for Investigations and Higher Studies in Social Anthropology (CIHSSA); and the Computer Center of the National Institute of Astrophysics, Optics and Electronics (NIAOE).

According to reports released this summer, the software was developed as a tool to help counteract the educational lag in indigenous communities and to employ these educational technologies so that the children may learn various subjects in an entertaining manner while reinforcing their Native language and culture.

“This software – divided into three methodologies for three different groups of applications – was made by dedicated researchers who have experience with Indigenous Peoples,” said Dr. Frida Villavicencio, Coordinator of the VFLCL’s Language Lab.

“We must have an impact on the children,” she continued, “offering them better methodologies for learning their mother tongues, as well as for learning Spanish and for supporting their basic education in a fun way.”

Villavicencio pointed out that the games and programs were not translated from the Spanish but were developed in the Native languages with the help of Native speakers. She added that studies from Mexico’s National Institute of Indigenous Languages (NIIL) show that the main reason why indigenous languages disappear, or are in danger of doing so, is because in each generation fewer and fewer of the children speak those languages.

“We need bilingual children; only in that way can we preserve their languages,” she added.

Read more at http://indiancountrytodaymedianetwork.com/2014/09/21/saving-native-languages-and-culture-mexico-computer-games-156961

How learning to talk is in the genes (Science Daily)

Date: September 16, 2014

Source: University of Bristol

Summary: Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Scientists discovered a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.


Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Credit: © witthaya / Fotolia

Researchers have found evidence that genetic factors may contribute to the development of language during infancy.

Scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol worked with colleagues around the world to discover a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Children begin producing words at about 10 to 15 months of age, and our vocabulary expands as we grow — from around 50 words at 15 to 18 months to 200 words at 18 to 30 months, 14,000 words at six years old and over 50,000 words by the time we leave secondary school.

The researchers found the genetic link during the ages of 15 to 18 months when toddlers typically communicate with single words only before their linguistic skills advance to two-word combinations and more complex grammatical structures.

The results, published in Nature Communications today [16 Sept], shed further light on a specific genetic region on chromosome 3, which has been previously implicated in dyslexia and speech-related disorders.

The ROBO2 gene contains the instructions for making the ROBO2 protein. This protein directs chemicals in brain cells and other neuronal cell formations that may help infants to develop language but also to produce sounds.

The ROBO2 protein also closely interacts with other ROBO proteins that have previously been linked to problems with reading and the storage of speech sounds.

Dr Beate St Pourcain, who jointly led the research with Professor Davey Smith at the MRC Integrative Epidemiology Unit, said: “This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans.”

Dr Claire Haworth, one of the lead authors, based at the University of Warwick, commented: “In this study we found that results using DNA confirm those we get from twin studies about the importance of genetic influences for language development. This is good news as it means that current DNA-based investigations can be used to detect most of the genetic factors that contribute to these early language skills.”

The study was carried out by an international team of scientists from the EArly Genetics and Lifecourse Epidemiology Consortium (EAGLE) and involved data from over 10,000 children.

Journal Reference:
  1. Beate St Pourcain, Rolieke A.M. Cents, Andrew J.O. Whitehouse, Claire M.A. Haworth, Oliver S.P. Davis, Paul F. O’Reilly, Susan Roulstone, Yvonne Wren, Qi W. Ang, Fleur P. Velders, David M. Evans, John P. Kemp, Nicole M. Warrington, Laura Miller, Nicholas J. Timpson, Susan M. Ring, Frank C. Verhulst, Albert Hofman, Fernando Rivadeneira, Emma L. Meaburn, Thomas S. Price, Philip S. Dale, Demetris Pillas, Anneli Yliherva, Alina Rodriguez, Jean Golding, Vincent W.V. Jaddoe, Marjo-Riitta Jarvelin, Robert Plomin, Craig E. Pennell, Henning Tiemeier, George Davey Smith. Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 2014; 5: 4831. DOI: 10.1038/ncomms5831

How the IPCC is sharpening its language on climate change (The Carbon Brief)

01 Sep 2014, 17:40

Simon Evans


The Intergovernmental Panel on Climate Change (IPCC) is sharpening the language of its latest draft synthesis report, seen by Carbon Brief.

Not only is the wording around how the climate is changing more decisive, the evidence the report references is also stronger than in the previous version, published in 2007.

The synthesis report, due to be published on 2 November, will wrap up the IPCC’s fifth assessment (AR5) of climate change. It will summarise and draw together the information in IPCC reports on the science of climate change, its impacts and the ways it can be addressed.

We’ve set a draft of the new synthesis report alongside the version published in 2007 to see how the two differ. Here are the key areas of change.

Irreversible impacts are being felt already

The AR5 draft synthesis begins with a decisive statement that human influence on the climate is “clear”, that recent emissions are the highest in history and that “widespread and consequential impacts” are already being felt.

This opening line shows how much has changed in the way the authors present their findings. In contrast, the 2007 report opened with a discussion of scientific progress and an extended paragraph on definitions.

There are also a couple of clear thematic changes in the 2014 draft. The first, repeated frequently throughout, is the idea that climate change impacts are already being felt.

For instance it says that the height of coastal floods has already increased and that climate-change-related risks from weather extremes such as heatwaves and heavy rain are “already moderate”.

These observations are crystallised in a long section on Article 2 of the UN’s climate change convention, which has been signed by every country of the world. Article 2 says that the objective of the convention is to avoid dangerous climate change.

The AR5 draft implies the world may already have failed in this task:

“Depending on value judgements and specific circumstances, currently observed impacts might already be considered dangerous for some communities.”

The second theme is a stronger emphasis on irreversible impacts compared to the 2007 version. The 2014 draft says:

“Continued emission of greenhouse gases will cause further warming and long-lasting changes in all components of the climate system, increasing the likelihood of severe, pervasive and irreversible impacts for people and ecosystems.”

It says that a large fraction of warming will be irreversible for hundreds to thousands of years and that the Greenland ice sheet will be lost when warming reaches between one and four degrees above pre-industrial temperatures. Current warming since pre-industrial times is about 0.8 degrees Celsius.

In effect the report has switched tense from future conditional (“could experience”) to present continuous (“are experiencing”).  For instance it says there are signs that some corals and Arctic ecosystems “are already experiencing irreversible regime shifts” because of warming.

Stronger evidence than before

As well as these thematic changes in the use of language, the AR5 synthesis comes to stronger conclusions in many other areas.

This is largely because the scientific evidence has solidified in the intervening seven years, the IPCC says.

We’ve drawn together a collection of side-by-side statements so you can see for yourself how the conclusions have changed. Some of the shifts in language are subtle – but they are significant all the same.

[Table: side-by-side statements from the 2007 report and the 2014 draft. Source: IPCC AR4 Synthesis Report, draft AR5 Synthesis Report]

Climate alarmism or climate realism?

The authors of the latest synthesis report seem to have made an effort to boost the impact of their words. They’ve used clearer and more direct language along with what appears to be a stronger emphasis on the negative consequences of inaction.

The language around relying on adaptation to climate change has also shifted. It now more clearly emphasises the need for mitigation to cut emissions, if the worst impacts of warming are to be avoided.

Some are bound to read this as an unwelcome excursion into advocacy. But others will insist it is simply a case of better presenting the evidence that was already there, along with advances in scientific knowledge.

Government representatives have the chance to go over the draft AR5 synthesis report with a fine-tooth comb when they meet from 27-31 October.

Will certain countries try to tone down the wording, as they have been accused of doing in the past? Or will the new, more incisive language make the final cut?

To find out, tune in on 2 November when the final synthesis report will be published.

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices to where they came from, the researchers were able to map preferences across continents (arxiv.org/abs/1407.7094).
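
To make the counting step concrete, here is a minimal Python sketch of how such geotagged variant tallies could be computed. It assumes tweets arrive as simple records with text and coordinates; the variant sets are just the examples named above and the one-degree grid is an illustrative simplification, not the authors’ actual method.

    # Hypothetical data layout: each tweet is a dict with 'text', 'lat', 'lon'.
    # The variant sets below are the article's examples, not the full lists.
    from collections import Counter, defaultdict

    VARIANTS = {
        "socks": {"calcetas", "medias", "soquetes"},
        "car": {"coche", "auto", "movi"},
    }

    def count_variants(tweets, grid=1.0):
        """Tally variant usage per lat/lon cell of `grid` degrees."""
        counts = defaultdict(Counter)
        for tw in tweets:
            cell = (round(tw["lat"] / grid), round(tw["lon"] / grid))
            for word in tw["text"].lower().split():
                for concept, forms in VARIANTS.items():
                    if word in forms:
                        counts[cell][(concept, word)] += 1
        return counts

    # Two toy tweets: one near Quito, one near San Diego.
    tweets = [
        {"text": "perdí mis medias otra vez", "lat": -0.2, "lon": -78.5},
        {"text": "mi auto no arranca hoy", "lat": 32.7, "lon": -117.2},
    ]
    for cell, tally in count_variants(tweets).items():
        print(cell, dict(tally))

Aggregating such cell-level tallies is what would let a city’s word preferences be compared with the surrounding countryside’s at continental scale.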

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”

We speak as we feel, we feel as we speak (Science Daily)

Date: June 26, 2014

Source: University of Cologne – Universität zu Köln

Summary: Ground-breaking experiments have been conducted to uncover the links between language and emotions. Researchers were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa. The authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds.

Researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. Credit: Image courtesy of University of Cologne – Universität zu Köln 

A team of researchers headed by the Erfurt-based psychologist Prof. Ralf Rummer and the Cologne-based phoneticist Prof. Martine Grice has carried out some ground-breaking experiments to uncover the links between language and emotions. They were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa.

The research project looked at the question of whether and to what extent the meaning of words is linked to their sound. The specific focus of the project was on two special cases: the sound of the long ‘i’ vowel and that of the long, closed ‘o’ vowel. Rummer and Grice were particularly interested in finding out whether these vowels tend to occur in words that are positively or negatively charged in terms of emotional impact. For this purpose, they carried out two fundamental experiments, the results of which have now been published in Emotion, the journal of the American Psychological Association.

In the first experiment, the researchers exposed test subjects to film clips designed to put them in a positive or a negative mood and then asked them to make up ten artificial words themselves and to speak these out loud. They found that the artificial words contained significantly more ‘i’s than ‘o’s when the test subjects were in a positive mood. When in a negative mood, however, the test subjects formulated more ‘words’ with ‘o’s.
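
As a toy illustration of the experiment-1 tally, the Python sketch below counts the target vowels in invented words per mood condition; the pseudo-words and the bare counting are hypothetical stand-ins for the study’s actual materials and statistics.

    # Made-up pseudo-words per mood condition; in the study the words
    # were produced by the test subjects themselves.
    def vowel_counts(words, vowels=("i", "o")):
        """Count occurrences of the target vowels across a word list."""
        joined = "".join(words).lower()
        return {v: joined.count(v) for v in vowels}

    positive_mood = ["bini", "lifi", "pimi"]
    negative_mood = ["bono", "loro", "pomo"]

    print("positive:", vowel_counts(positive_mood))  # expect more 'i's
    print("negative:", vowel_counts(negative_mood))  # expect more 'o's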

The second experiment was used to determine whether the different emotional quality of the two vowels can be traced back to the movements of the facial muscles associated with their articulation.

Rummer and Grice were inspired by an experimental configuration developed in the 1980s by a team headed by psychologist Fritz Strack. These researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. In the first case, the test subjects were required to place the pen between their teeth and in the second case between their lips. While their zygomaticus major muscle was contracted, the test subjects found the cartoons significantly more amusing.

Instead of this ‘pen-in-mouth test’, the team headed by Rummer and Grice now conducted an experiment in which they required their test subjects to articulate an ‘i’ sound (contraction of the zygomaticus major muscle) or an ‘o’ sound (contraction of the orbicularis oris muscle) every second while viewing cartoons. The test subjects producing the ‘i’ sounds found the same cartoons significantly more amusing than those producing the ‘o’ sounds instead.

In view of this outcome, the authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds. And thanks to the results of their two experiments, Rummer and Grice now have an explanation for a much-discussed phenomenon. The tendency for ‘i’ sounds to occur in positively charged words (such as ‘like’) and for ‘o’ sounds to occur in negatively charged words (such as ‘alone’) in many languages appears to be linked to the corresponding use of facial muscles in the articulation of vowels on the one hand and the expression of emotion on the other.

Journal Reference:

  1. Ralf Rummer, Judith Schweppe, René Schlegelmilch, Martine Grice. Mood is linked to vowel type: The role of articulatory movements. Emotion, 2014; 14 (2): 246. DOI: 10.1037/a0035752

Rapid Language Evolution in 19th-century Brazil: Data Mining, Literary Analysis and Evolutionary Biology – A Study of Six Centuries of Portuguese-language Texts (Stanford University)

Reporter: Aviva Lev-Ari, PhD, RN

Stanford collaboration offers new perspectives on evolution of Brazilian language

Using a novel combination of data mining, literary analysis and evolutionary biology to study six centuries of Portuguese-language texts, Stanford scholars discover the literary roots of rapid language evolution in 19th-century Brazil.

Photo: L.A. Cicero. Stanford biology Professor Marcus Feldman, left, and Cuauhtémoc García-García, a graduate student in Iberian and Latin American Cultures, combined forces to investigate the evolution of Portuguese as spoken in Brazil.

Literature and biology may not seem to overlap in their endeavors, but a Stanford project exploring the evolution of written language in Brazil is bringing the two disciplines together.

Over the last 18 months, Iberian and Latin American Cultures graduate student Cuauhtémoc García-García and biology Professor Marcus Feldman have been working together to trace the evolution of the Brazilian Portuguese language through literature.

By combining Feldman’s expertise in mathematical analysis of cultural evolution with García-García’s knowledge of Latin American culture and computer programming, they have produced quantifiable evidence of rapid historical changes in written Brazilian Portuguese in the 19th and 20th centuries.

Specifically, Feldman and García-García are studying the changing use of words in tens of thousands of texts, with a focus on the personal pronouns that Brazilians used to address one another.

Their digital analysis of linguistic development in literary texts reflects Brazil’s complex colonial history.

The change in the use of personal pronouns, a daily part of social and cultural interaction, formed part of an evolving linguistic identity that was specific to Brazil, and not its Portuguese colonizers.

“We believe that this fast transition in the written language was due primarily to the approximately 300-year prohibition of both the introduction of the printing press and the foundation of universities in Brazil under Portuguese rule,” García-García said.

What Feldman and García-García found was that spoken language did in fact evolve during those 300 years, but little written evidence of that process exists because colonial restrictions on printing and literacy prevented language development in the written form.

A national sentiment of “write as we speak” arose in Brazil after Portuguese rule ended. García-García said their data shows an abrupt introduction in written texts of the spoken pronouns that were developed during the 300-year colonization period.

Drawing on Feldman’s experience with theoretical and statistical evolutionary models, García-García developed computer programs that count certain words to see how often they appear and how their use has changed over hundreds of years.

In Brazilian literary works produced in the post-colonial period, Feldman said, they have “found examples of written linguistic evolution over short time periods, contrary to the longer periods that are typical for changes in language.”

The findings will figure prominently in García-García’s dissertation, which addresses the transmission of written language across time and space.

The project’s source materials include about 70,000 digitized works in Portuguese from the 13th to the 21st century, ranging from literature and newspapers to technical manuals and pamphlets.

García-García, a member of The Digital Humanities Focal Group at Stanford, said their research “shows how written language changed, and through these changes in pronoun use, we now have a better understanding of how Brazilian writing evolved following the introduction of the printing press.”

Feldman, a population geneticist and one of the founders of the quantitative theory of cultural evolution, said he sees their project as a natural approach to linguistic evolution.

“I believe that evolutionary science and the humanities have a lot to offer each other in both theoretical and empirical explorations,” Feldman said.

Language by the numbers

García-García became interested in language evolution while studying Brazilian Portuguese under the instruction of Stanford lecturer Lyris Wiedemann. He approached Feldman, proposing an evolutionary study of Brazilian Portuguese, and Feldman agreed to help him analyze the data. García-García then enlisted Stanford lecturer Agripino Silveira, who provided linguistic expertise.

García-García worked with Stanford Library curators Glen Worthey, Adan Griego and Everardo Rodriguez for more than a year to develop the technical infrastructure and copyright clearance he needed to access Stanford’s entire digitized corpus of Portuguese language texts. After incorporating even more source material from the HathiTrust digital archive, García-García began the time-consuming task of “cleaning” the corpus, so data could be effectively mined from it.

“Sometimes there were duplicates, issues with the digitization, and works with multiple editions that created ‘noise’ in the corpus,” he said.

Following months of preparation, Feldman and García-García were able to begin data mining. Specifically, they counted the incidences of two pronouns, tu and você, which both mean the singular “you,” and how their incidence in literature changed over time.

“After running various searches, I could correlate results and see how and when certain words were used to build up a comprehensive image of this evolution,” he said.
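
A minimal sketch of that counting step might look like the following Python, assuming the corpus is available as (year, text) pairs; the tokenization and decade binning are simplifications of the project’s actual pipeline.

    import re
    from collections import defaultdict

    def voce_share_by_decade(corpus):
        """Fraction of second-person pronouns that are 'você', per decade.
        `corpus` is an iterable of (year, text) pairs (hypothetical layout)."""
        tu = defaultdict(int)
        voce = defaultdict(int)
        for year, text in corpus:
            decade = (year // 10) * 10
            tokens = re.findall(r"\w+", text.lower())
            tu[decade] += tokens.count("tu")
            voce[decade] += tokens.count("você")
        return {d: voce[d] / (tu[d] + voce[d])
                for d in sorted(tu.keys() | voce.keys())
                if tu[d] + voce[d]}

    corpus = [(1845, "tu sabes que tu és meu amigo"),
              (1905, "você sabe que você é meu amigo")]
    print(voce_share_by_decade(corpus))  # {1840: 0.0, 1900: 1.0}

Run over tens of thousands of digitized works, a ratio like this is what produces the trend lines the researchers describe below.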

Tu was – and still is – used in Portugal as the typical way to say ‘you.’ But, in Brazil, você is the more normal way to say it, particularly in major cities like Rio de Janeiro and São Paulo where the majority of the population lives,” García-García explained.

However, that was not always the case. When Brazil was a Portuguese colony, and up until the arrival of the printing press in 1808, tu was the canonical form in written language.

As part of the run-up to independence in 1822, universities and printing presses were established in Brazil for the first time in 1808, having been prohibited by the Portuguese colonizers in what García-García calls “cultural repression.”

By the late 19th century, você emerged as the way to address people, shedding part of the colonial legacy, and tu quickly became less prominent in written Brazilian Portuguese.

“Our findings quantifiably show how pronoun use developed. We have found that around 1840, você was used about 10-15 percent of the time by authors to say ‘you.’ By the turn of the century, this had increased to about 70 percent,” García-García said.

“Our data suggest that você was rarely used in the late 17th and 18th centuries, but really appears and takes hold in the middle of the 19th century, a few decades after 1808. Thus, the late arrival of the printing press marks a critical point for understanding the evolution of written Portuguese in Brazil, ” he said.

From Romanticism to realism

Their research revealed an intriguing literary coincidence – the period of transition from tu to você correlated with the broad change in the dominant literary genre in Brazilian literature from European Romanticism to Latin American realism.

Interestingly, the researchers noticed that the rapid change was most evident several decades after Brazil’s independence in the 1820s because it took that long for Brazilian writers to develop their own voice and style.

For centuries Brazilian writers were forced to write in the style of the Portuguese, but as García-García said, “with their new freedom they wanted to write stories that reflected their national identity.”

“Machado de Assis, arguably Brazil’s greatest author, is a fine example. His early novels are archetypally Romanticist, and then his later novels are deeply Realist, and the use of the pronouns shift from one to the other,” García-García said.

Nonetheless, in Machado’s work there is sometimes a purposeful switch back to the tu form if, for example, the author wanted to evoke a certain sentiment or change the narrative voice.

“The data-mining project cannot ascertain subtle uses of words and how, in some works, the pronouns are ‘interchangeable,’” he added.

Computational expertise was no substitute for literary expertise, and García-García used the two disciplines in tandem to get a clearer picture in his data.

“I had to stop using the computer and go back to a close reading of a large sample of books, and the literary genre change reflects this period of post-colonial social and historical change,” he said.

Feldman and García-García hope to use their methodology to explore different languages.

“Next we hope to study the digitized Spanish language corpus, which currently comprises close to a quarter of a million works from the last 900 years,” García-García said.

Tom Winterbottom is a doctoral candidate in Iberian and Latin American Cultures at Stanford. For more news about the humanities at Stanford, visit the Human Experience.

http://news.stanford.edu/news/2014/june/evolution-language-brazil-060414.html

Talking Neanderthals challenge the origins of speech (Science Daily)

Date:

March 2, 2014

Source: University of New England

Summary: We humans like to think of ourselves as unique for many reasons, not least of which being our ability to communicate with words. But ground-breaking research shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.

A model of an adult Neanderthal male head and shoulders on display in the Hall of Human Origins in the Smithsonian Museum of Natural History in Washington, D.C. Reconstruction based on the Shanidar 1 fossil (c. 80-60 kya). Credit: By reconstruction: John Gurche; photograph: Tim Evanson [CC-BY-SA-2.0], via Wikimedia Commons

We humans like to think of ourselves as unique for many reasons, not least of which being our ability to communicate with words. But ground-breaking research by an expert from the University of New England shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.

Pinpointing the origin and evolution of speech and human language is one of the longest running and most hotly debated topics in the scientific world. It has long been believed that other beings, including the Neanderthals with whom our ancestors shared Earth for thousands of years, simply lacked the necessary cognitive capacity and vocal hardware for speech.

Associate Professor Stephen Wroe, a zoologist and palaeontologist from UNE, working with an international team of scientists and using 3D x-ray imaging technology, made the revolutionary discovery challenging this notion, based on a 60,000-year-old Neanderthal hyoid bone discovered in Israel in 1989.

“To many, the Neanderthal hyoid discovered was surprising because its shape was very different to that of our closest living relatives, the chimpanzee and the bonobo. However, it was virtually indistinguishable from that of our own species. This led to some people arguing that this Neanderthal could speak,” A/Professor Wroe said.

“The obvious counterargument to this assertion was that the fact that hyoids of Neanderthals were the same shape as modern humans doesn’t necessarily mean that they were used in the same way. With the technology of the time, it was hard to verify the argument one way or the other.”

However, advances in 3D imaging and computer modelling allowed A/Professor Wroe’s team to revisit the question.

“By analysing the mechanical behaviour of the fossilised bone with micro x-ray imaging, we were able to build models of the hyoid that included the intricate internal structure of the bone. We then compared them to models of modern humans. Our comparisons showed that in terms of mechanical behaviour, the Neanderthal hyoid was basically indistinguishable from our own, strongly suggesting that this key part of the vocal tract was used in the same way.

“From this research, we can conclude that it’s likely that the origins of speech and language are far, far older than once thought.”

Journal Reference:

  1. Ruggero D’Anastasio, Stephen Wroe, Claudio Tuniz, Lucia Mancini, Deneb T. Cesana, Diego Dreossi, Mayoorendra Ravichandiran, Marie Attard, William C. H. Parr, Anne Agur, Luigi Capasso. Micro-Biomechanics of the Kebara 2 Hyoid and Its Implications for Speech in Neanderthals. PLoS ONE, 2013; 8 (12): e82261. DOI: 10.1371/journal.pone.0082261

Language and Tool-Making Skills Evolved at the Same Time (Science Daily)

Sep. 3, 2013 — Research by the University of Liverpool has found that the same brain activity is used for language production and making complex tools, supporting the theory that they evolved at the same time.

Three hand axes produced by participants in the experiment. Front, back and side views are shown. (Credit: Image courtesy of University of Liverpool)

Researchers from the University tested the brain activity of 10 expert stone tool makers (flint knappers) as they undertook a stone tool-making task and a standard language test.

Brain blood flow activity measured

They measured the brain blood flow activity of the participants as they performed both tasks using functional Transcranial Doppler Ultrasound (fTCD), commonly used in clinical settings to test patients’ language functions after brain damage or before surgery.

The researchers found that brain patterns for both tasks correlated, suggesting that they both use the same area of the brain. Language and stone tool-making are considered to be unique features of humankind that evolved over millions of years.

Darwin was the first to suggest that tool-use and language may have co-evolved, because they both depend on complex planning and the coordination of actions, but until now there has been little evidence to support this.

Dr Georg Meyer, from the University Department of Experimental Psychology, said: “This is the first study of the brain to compare complex stone tool-making directly with language.

Tool use and language co-evolved

“Our study found correlated blood-flow patterns in the first 10 seconds of undertaking both tasks. This suggests that both tasks depend on common brain areas and is consistent with theories that tool-use and language co-evolved and share common processing networks in the brain.”
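
For readers who want the gist of that comparison in code, here is a minimal Python sketch, with synthetic data standing in for the fTCD recordings: two noisy lateralization curves for the first 10 seconds of each task, compared with a Pearson correlation.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 100)        # first 10 seconds of each task
    pattern = np.sin(t)                # shared underlying response shape
    knapping = pattern + 0.1 * rng.standard_normal(t.size)  # synthetic trace
    language = pattern + 0.1 * rng.standard_normal(t.size)  # synthetic trace

    # Pearson correlation between the two tasks' blood-flow curves
    r = np.corrcoef(knapping, language)[0, 1]
    print(f"correlation between task patterns: r = {r:.2f}")

A high correlation between the two curves is what the researchers read as evidence that the tasks draw on common brain areas.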

Dr Natalie Uomini from the University’s Department of Archaeology, Classics & Egyptology, said: “Nobody has been able to measure brain activity in real time while making a stone tool. This is a first for both archaeology and psychology.”

The research was supported by the Leverhulme Trust, the Economic and Social Research Council and the British Academy. It is published in PLOS ONE.

Journal Reference:

  1. Natalie Thaïs Uomini, Georg Friedrich Meyer. Shared Brain Lateralization Patterns in Language and Acheulean Stone Tool Production: A Functional Transcranial Doppler Ultrasound Study. PLoS ONE, 2013; 8 (8): e72693. DOI: 10.1371/journal.pone.0072693