Tag archive: Linguagem

Language is learned in brain circuits that predate humans (Georgetown University)

GEORGETOWN UNIVERSITY MEDICAL CENTER

WASHINGTON — It has often been claimed that humans learn language using brain components that are specifically dedicated to this purpose. Now, new evidence strongly suggests that language is in fact learned in brain systems that are also used for many other purposes and even pre-existed humans, say researchers in PNAS (Early Edition online Jan. 29).

The research combines results from multiple studies involving a total of 665 participants. It shows that children learn their native language and adults learn foreign languages in evolutionarily ancient brain circuits that also are used for tasks as diverse as remembering a shopping list and learning to drive.

“Our conclusion that language is learned in such ancient general-purpose systems contrasts with the long-standing theory that language depends on innately-specified language modules found only in humans,” says the study’s senior investigator, Michael T. Ullman, PhD, professor of neuroscience at Georgetown University School of Medicine.

“These brain systems are also found in animals – for example, rats use them when they learn to navigate a maze,” says co-author Phillip Hamrick, PhD, of Kent State University. “Whatever changes these systems might have undergone to support language, the fact that they play an important role in this critical human ability is quite remarkable.”

The study has important implications not only for understanding the biology and evolution of language and how it is learned, but also for how language learning can be improved, both for people learning a foreign language and for those with language disorders such as autism, dyslexia, or aphasia (language problems caused by brain damage such as stroke).

The research statistically synthesized findings from 16 studies that examined language learning in two well-studied brain systems: declarative and procedural memory.

The results showed that how good we are at remembering the words of a language correlates with how good we are at learning in declarative memory, which we use to memorize shopping lists or to remember the bus driver’s face or what we ate for dinner last night.

Grammar abilities, which allow us to combine words into sentences according to the rules of a language, showed a different pattern. The grammar abilities of children acquiring their native language correlated most strongly with learning in procedural memory, which we use to learn tasks such as driving, riding a bicycle, or playing a musical instrument. In adults learning a foreign language, however, grammar correlated with declarative memory at earlier stages of language learning, but with procedural memory at later stages.

The correlations were large, and were found consistently across languages (e.g., English, French, Finnish, and Japanese) and tasks (e.g., reading, listening, and speaking tasks), suggesting that the links between language and the brain systems are robust and reliable.
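
The release doesn't give the pooling procedure, but correlations from multiple studies are conventionally combined with Fisher's z-transform and inverse-variance weights. Here is a minimal sketch of that standard calculation; the study values are made up for illustration and are not the paper's data:

```python
import math

def pool_correlations(studies):
    """Pool per-study correlations via Fisher's z-transform,
    weighting each study by n - 3 (the inverse variance of z)."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher r-to-z transform
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Hypothetical (correlation, sample size) pairs -- illustrative only:
studies = [(0.45, 40), (0.52, 65), (0.38, 30)]
print(f"pooled r = {pool_correlations(studies):.2f}")  # -> 0.47
```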

The findings have broad research, educational, and clinical implications, says co-author Jarrad Lum, PhD, of Deakin University in Australia.

“Researchers still know very little about the genetic and biological bases of language learning, and the new findings may lead to advances in these areas,” says Ullman. “We know much more about the genetics and biology of the brain systems than about these same aspects of language learning. Since our results suggest that language learning depends on the brain systems, the genetics, biology, and learning mechanisms of these systems may very well also hold for language.”

For example, though researchers know little about which genes underlie language, numerous genes playing particular roles in the two brain systems have been identified. The findings from this new study suggest that these genes may also play similar roles in language. Along the same lines, the evolution of these brain systems, and how they came to underlie language, should shed light on the evolution of language.

Additionally, the findings may lead to approaches that could improve foreign language learning and language problems in disorders, Ullman says.

For example, various pharmacological agents (e.g., the drug memantine) and behavioral strategies (e.g., spacing out the presentation of information) have been shown to enhance learning or retention of information in the brain systems, he says. These approaches may thus also be used to facilitate language learning, including in disorders such as aphasia, dyslexia, and autism.

“We hope and believe that this study will lead to exciting advances in our understanding of language, and in how both second language learning and language problems can be improved,” Ullman concludes.

What happens to language as populations grow? It simplifies, say researchers (Cornell)

CORNELL UNIVERSITY

ITHACA, N.Y. – Languages have an intriguing paradox. Languages with lots of speakers, such as English and Mandarin, have large vocabularies with relatively simple grammar. Yet the opposite is also true: Languages with fewer speakers have fewer words but complex grammars.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.

Their study was published in the Proceedings of the Royal Society B: Biological Sciences.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.

The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.

But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

Conversely, in a large community, like a big city, one person will talk only to a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
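
The published model is more elaborate, but the core mechanism can be seen in a toy agent-based simulation: an innovation that needs only one exposure to be adopted (word-like) spreads regardless of community size, while one that needs repeated exposures (grammar-like) takes hold only where speakers repeatedly meet the same partners. Everything below (population sizes, exposure thresholds, round counts) is an illustrative assumption, not the authors' parameterization:

```python
import random

def spread(pop_size, exposures_needed, rounds=2000, seed=1):
    """Toy model: agent 0 starts with an innovation; a hearer adopts it
    only after `exposures_needed` encounters with current users."""
    random.seed(seed)
    exposures = [0] * pop_size
    users = {0}
    for _ in range(rounds):
        speaker, hearer = random.sample(range(pop_size), 2)
        if speaker in users and hearer not in users:
            exposures[hearer] += 1
            if exposures[hearer] >= exposures_needed:
                users.add(hearer)
    return len(users) / pop_size

for size in (20, 500):
    easy = spread(size, exposures_needed=1)  # word-like item
    hard = spread(size, exposures_needed=5)  # complex-grammar-like item
    print(f"community of {size:3d}: easy {easy:.0%}, hard {hard:.0%}")
```

With these parameters the hard item saturates the village-sized community but barely leaves its originator in the city-sized one, which is the paper's asymmetry in miniature.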

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”

His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of University of Warwick, England.

A mysterious 14-year cycle has been controlling our words for centuries (Science Alert)

Some of your favourite science words are making a comeback.

DAVID NIELD
2 DEC 2016

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
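
The team's analysis is more sophisticated, but the underlying idea is simple: take a word's yearly frequency series, remove the slow trend, and look for the dominant oscillation. A sketch of that idea (the series below is synthetic, and `dominant_period` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def dominant_period(series, max_lag=20):
    """Detrend a yearly frequency series (cubic fit) and return the lag,
    in years, with the strongest autocorrelation among lags 2..max_lag-1."""
    x = np.asarray(series, dtype=float)
    t = np.arange(len(x))
    resid = x - np.polyval(np.polyfit(t, x, 3), t)   # strip the slow trend
    resid = (resid - resid.mean()) / resid.std()
    acf = [np.mean(resid[:-lag] * resid[lag:]) for lag in range(2, max_lag)]
    return 2 + int(np.argmax(acf))

# Synthetic stand-in for one noun's Ngram frequency, 1700-2000,
# built from a rising trend plus a 14-year oscillation:
years = np.arange(1700, 2001)
freq = 1e-5 * (1 + 0.001 * (years - 1700)) * (1 + 0.3 * np.sin(2 * np.pi * years / 14))
print(dominant_period(freq))  # -> 14
```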

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.

You can try it out for yourself, and search for any word’s popularity over time.

But if there are certain nouns you’re fond of, make the most of them, because they might not be in common use for much longer.

The findings have been published in Palgrave Communications.

Most adults know more than 42,000 words (Science Daily)

Date: August 16, 2016
Source: Frontiers
Summary: Armed with a new list of words and using the power of social media, a new study has found that by the age of 20, a native English-speaking American knows 42,000 dictionary words.

[Image: a dictionary. How many words do you know? Credit: © mizar_21984 / Fotolia]

How many words do we know? It turns out that even language experts and researchers have a tough time estimating this.

Armed with a new list of words and using the power of social media, a new study published in Frontiers in Psychology has found that by the age of twenty, a native English-speaking American knows 42,000 dictionary words.

“Our research got a huge push when a television station in the Netherlands asked us to organize a nation-wide study on vocabulary knowledge,” states Professor Marc Brysbaert of Ghent University in Belgium and leader of this study. “The test we developed was featured on TV and, in the first weekend, over 300 thousand Dutch speakers had done it — it really went viral.”

Realising how interested people are in finding out their vocabulary size, the team then made similar tests in English and Spanish. The English test has now been taken by almost one million people. It takes up to four minutes to complete and has been shared widely on Facebook and Twitter, giving the team access to an unprecedented amount of data.

“At the Centre of Reading Research we are investigating what determines the ease with which words are recognized,” explained Professor Brysbaert. The test includes a list of 62,000 words that he and his team have compiled.

He added: “As we made the list ourselves and have not used a commercially available dictionary list with copyright restrictions, it can be made available to everyone, and all researchers can access it.”

The test is simple. You are asked if the word on the screen is, or is not, an existing word in English. In each test there are 70 real words and 30 letter sequences that look like words but are not.

The test will also ask you for some personal information, such as your age, gender, education level and native language. This has enabled the team to discover that the average twenty-year-old native English-speaking American knows 42,000 dictionary words. As we get older, we learn one new word every two days, which means that by the age of 60, we know an additional 6,000 words.
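
The release doesn't spell out the estimator, but a standard way to turn such yes/no answers into a vocabulary figure is to correct the hit rate on real words by the false-alarm rate on the nonwords, then scale to the full 62,000-word list. A minimal sketch of that guessing correction (the function and numbers are illustrative, not the paper's exact procedure):

```python
LIST_SIZE = 62_000  # size of the team's compiled word list

def estimate_vocabulary(hits, n_words=70, false_alarms=0, n_nonwords=30):
    """Guessing-corrected estimate: proportion of real words accepted,
    minus the proportion of nonwords wrongly accepted, scaled to the list."""
    known = hits / n_words - false_alarms / n_nonwords
    return round(max(0.0, known) * LIST_SIZE)

# Someone who accepts 50 of the 70 real words and 2 of the 30 nonwords:
print(estimate_vocabulary(hits=50, false_alarms=2))  # -> 40152
```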

“As a researcher, I am most interested in what this data can tell us about word prevalence, i.e. how well each word is known in a language,” added Professor Brysbaert.

“In Dutch, we have seen that this explains a lot about word processing times. People respond much faster to words known by all people than to words known by 95% of the population, even if the words are used with the same frequency. We are convinced that word prevalence will become an important variable in word recognition research.”

With data from about 200 thousand people who speak English as a second language, the team can also start to look at how well these people know certain words, which could have implications for language education.

This is the largest study of its kind ever attempted. Professor Brysbaert has plans to improve the accuracy of the test and extend the list to include over 75,000 words.

“This work is part of the big data movement in research, where big datasets are collected to be mined,” he concluded.

“It also gives us a snapshot of English word knowledge at the beginning of the 21st century. I can imagine future language researchers will be interested in this database to see how English has evolved over 100 years, 1000 years and maybe even longer.”


Journal Reference:

  1. Marc Brysbaert, Michaël Stevens, Paweł Mandera, Emmanuel Keuleers. How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant’s Age. Frontiers in Psychology, 2016; 7. DOI: 10.3389/fpsyg.2016.01116

How philosophy came to disdain the wisdom of oral cultures (AEON)

01 June 2016

Justin E H Smith is a professor of history and philosophy of science at the Université Paris Diderot – Paris 7. He writes frequently for The New York Times and Harper’s Magazine. His latest book is The Philosopher: A History in Six Types (2016).

Published in association with Princeton University Press, an Aeon Partner

Edited by Marina Benjamin

A poet, somewhere in Siberia, or the Balkans, or West Africa, some time in the past 60,000 years, recites thousands of memorised lines in the course of an evening. The lines are packed with fixed epithets and clichés. The bard is not concerned with originality, but with intonation and delivery: he or she is perfectly attuned to the circumstances of the day, and to the mood and expectations of his or her listeners.

If this were happening 6,000-plus years ago, the poet’s words would in no way have been anchored in visible signs, in text. For the vast majority of the time that human beings have been on Earth, words have had no worldly reality other than the sound made when they are spoken.

As the theorist Walter J Ong pointed out in Orality and Literacy: Technologizing the Word (1982), it is difficult, perhaps even impossible, now to imagine how differently language would have been experienced in a culture of ‘primary orality’. There would be nowhere to ‘look up a word’, no authoritative source telling us the shape the word ‘actually’ takes. There would be no way to affirm the word’s existence at all except by speaking it – and this necessary condition of survival is important for understanding the relatively repetitive nature of epic poetry. Say it over and over again, or it will slip away. In the absence of fixed, textual anchors for words, there would be a sharp sense that language is charged with power, almost magic: the idea that words, when spoken, can bring about new states of affairs in the world. They do not so much describe, as invoke.

As a consequence of the development of writing, first in the ancient Near East and soon after in Greece, old habits of thought began to die out, and certain other, previously latent, mental faculties began to express themselves. Words were now anchored and, though spellings could change from one generation to another, or one region to another, there were now physical traces that endured, which could be transmitted, consulted and pointed to in settling questions about the use or authority of spoken language.

Writing rapidly turned customs into laws, agreements into contracts, genealogical lore into history. In each case, what had once been fundamentally temporal and singular was transformed into something eternal (as in, ‘outside of time’) and general. Even the simple act of making everyday lists of common objects – an act impossible in a primary oral culture – was already a triumph of abstraction and systematisation. From here it was just one small step to what we now call ‘philosophy’.

Homer’s epic poetry, which originates in the same oral epic traditions as those of the Balkans or of West Africa, was written down, frozen, fixed, and from this it became ‘literature’. There are no arguments in the Iliad: much of what is said arises from metrical exigencies, the need to fill in a line with the right number of syllables, or from epithets whose function is largely mnemonic (and thus unnecessary when transferred into writing). Yet Homer would become an authority for early philosophers nonetheless: revealing truths about humanity not by argument or debate, but by declamation, now frozen into text.

Plato would express extreme concern about the role, if any, that poets should play in society. But he was not talking about poets as we think of them: he had in mind reciters, bards who incite emotions with living performances, invocations and channellings of absent persons and beings.

It is not orality that philosophy rejects, necessarily: Socrates himself rejected writing, identifying instead with a form of oral culture. Plato would also ensure the philosophical canonisation of his own mentor by writing down (how faithfully, we don’t know) what Socrates would have preferred to merely say, and so would have preferred to have lost to the wind. Arguably, it is in virtue of Plato’s recording that we might say, today, that Socrates was a philosopher.

Plato and Aristotle, both, were willing to learn from Homer, once he had been written down. And Socrates, though Plato still felt he had to write him down, was already engaged in a sort of activity very different from poetic recitation. This was dialectic: the structured, working-through of a question towards an end that has not been predetermined – even if this practice emerged indirectly from forms of reasoning only actualised with the advent of writing.

The freezing in text of dialectical reasoning, with a heavy admixture (however impure or problematic) of poetry, aphorism and myth, became the model for what, in the European tradition, was thought of as ‘philosophy’ for the next few millennia.

Why are these historical reflections important today? Because what is at stake is nothing less than our understanding of the scope and nature of philosophical enquiry.

The Italian philosopher of history Giambattista Vico wrote in his Scienza Nuova (1725): ‘the order of ideas must follow the order of institutions’. This order was, namely: ‘First the woods, then cultivated fields and huts, next little houses and villages, thence cities, finally academies and philosophers.’ It is implicit for Vico that the philosophers in these academies are not illiterate. The order of ideas is the order of the emergence of the technology of writing.

Within academic philosophy today, there is significant concern arising from how to make philosophy more ‘inclusive’, but no interest at all in questioning Vico’s order, in going back and recuperating what forms of thought might have been left behind in the woods and fields.

The groups ordinarily targeted by philosophy’s ‘inclusivity drive’ already dwell in the cities and share in literacy, even if discriminatory measures often block their full cultivation of it. No arguments are being made for the inclusion of people belonging to cultures that value other forms of knowledge: there are no efforts to recruit philosophers from among Inuit hunters or Hmong peasants.

The practical obstacles to such recruitment from a true cross-section of humanity are obvious. Were it to happen, the simple process of moving from traditional ways of life into academic institutions would at the same time dilute and transform the perspectives that are deserving of more attention. Irrespective of such unhappy outcomes, there is already substantial scholarship on these forms of thought accumulated in philosophy’s neighbouring disciplines – notably history, anthropology, and world literatures – to which philosophers already have access. It’s a literature that could serve as a corrective to the foundational bias, present since the emergence of philosophy as a distinct activity.

As it happens, there are few members of primary oral cultures left in the world. And yet from a historical perspective the great bulk of human experience resides with them. There are, moreover, members of literate cultures, and subcultures, whose primary experience of language is oral, based in storytelling, not argumentation, and that is living and charged, not fixed and frozen. Plato saw these people as representing a lower, and more dangerous, use of language than the one worthy of philosophers.

Philosophers still tend to disdain, or at least to conceive as categorically different from their own speciality, the use of language deployed by bards and poets, whether from Siberia or the South Bronx. Again, this disdain leaves out the bulk of human experience. Until it is eradicated, the present talk of the ideal of inclusion will remain mere lip-service.

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
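
The comparison is straightforward arithmetic: dividing the estimated number of string-theory universes by the estimated number of particles gives

$$ \frac{10^{500}}{10^{80}} = 10^{500-80} = 10^{420}, $$

a gap of 420 orders of magnitude.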

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.

This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and give birth to live young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.

In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
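
The correlation is a fixed relation: frequency is the speed of light divided by wavelength, so for red light

$$ \nu = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \text{m/s}}{700 \times 10^{-9}\ \text{m}} \approx 4.3 \times 10^{14}\ \text{Hz}. $$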

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.
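
In their modern vacuum form, the equations Maxwell arrived at can be written

$$ \nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}, $$

and combining them yields a wave equation whose propagation speed, \( c = 1/\sqrt{\mu_0 \varepsilon_0} \), matched the measured speed of light. That numerical coincidence was the clue that light, and the radio waves Maxwell predicted, are electromagnetic waves.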

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of 1805, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light.

Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.
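
To leading order, the two competing effects combine in the standard weak-field clock-rate formula

$$ \frac{d\tau}{dt} \approx 1 + \frac{\Phi}{c^{2}} - \frac{v^{2}}{2c^{2}}, $$

where \( \Phi \) is the (negative) gravitational potential at the clock and \( v \) its orbital speed. Which term wins depends on altitude: in low orbits the velocity term dominates and the clock runs slow relative to the ground, while at GPS altitude the gravitational term dominates and onboard clocks run fast by roughly 38 microseconds per day.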

You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves to distraction.

3 June 2013

Chimps joining new troop learn its ‘words’: study (Reuters)

BY SHARON BEGLEY

NEW YORK, Thu Feb 5, 2015 1:03pm EST

(Reuters) – Just as Bostonians moving to Paris ditch “grapefruit” and adopt “pamplemousse,” so chimps joining a new troop change their calls to match those of their new troop, scientists reported on Thursday in the journal Current Biology.

The discovery represents the first evidence that animals besides humans can replace the vocal sounds their native group uses for specific objects – in the chimps’ case, apples – with those of their new community.

One expert on chimp vocalizations, Bill Hopkins of Yerkes National Primate Research Center in Atlanta, who was not involved in the study, questioned some of its methodology, such as how the scientists elicited and recorded the chimps’ calls, but called it “interesting work.”

Chimps have specific grunts, barks, hoots and other vocalizations for particular foods, for predators and for requests such as “look at me,” which members of their troop understand.

Earlier studies had shown that these primates, humans’ closest living relatives, can learn totally new calls in research settings through intensive training. And a 2012 study led by Yerkes’ Hopkins showed that young chimps are able to pick up sounds meaning “human, pay attention to me,” from their mothers.

But no previous research had shown that chimps can replace a call they had used for years with one used by another troop. Instead, primatologists had thought that sounds referring to objects in the environment were learned at a young age and essentially permanent, with any variations reflecting nuances such as how excited the animal is about, say, a banana.

In the new research, scientists studied adult chimpanzees that in 2010 had been moved from a safari park in the Netherlands to Scotland’s Edinburgh Zoo, to live with nine other adults in a huge new enclosure.

It took three years, and the formation of strong social bonds among the animals, but the grunt that the seven Dutch chimps used for “apple” (a favorite food) changed from a high-pitched eow-eow-eow to the lower-pitched udh-udh-udh used by the six Scots, said co-author Simon Townsend of the University of Zurich. The change was apparent even to non-chimp-speakers (scientists).

“We showed that, through social learning, the chimps could change their vocalizations,” Townsend said in an interview. That suggests human language isn’t unique in using socially-learned sounds to signify objects.

Unanswered is what motivated the Dutch chimps to sound more like the Scots: to be better understood, or to fit in by adopting the reigning patois?

(Reporting by Sharon Begley; Editing by Nick Zieminski)

Especialistas criticam problemas no acordo ortográfico (Agência Brasil)

Assunto está em debate na Comissão de Educação do Senado

O professor Pasquale Cipro Neto defendeu nesta quarta-feira (22) revisão no Acordo Ortográfico da Língua Portuguesa. “O texto do acordo é tão cheio de problema que foi preciso a Academia [Brasileira de Letras] publicar nota explicativa [sobre pontos do acordo]. Por que foi preciso isso? Porque há problemas”, ressaltou o professor, ao participar do segundo dia de debates sobre o assunto na Comissão de Educação do Senado.

According to Pasquale, Brazil rushed ahead of the other signatory countries in implementing the agreement, preventing simultaneous adoption of the new rules. In his view, the process was hasty and poorly organized on the country’s part. “We cannot go forward with a text that lacks polish and concrete solutions,” he said.

The many rules governing use of the hyphen, which the professor considers one of the norm’s great weaknesses, were among the most heavily criticized points. For Pasquale Neto, in the text of the agreement “the hyphen was mistreated, poorly resolved.” In his view, the issue needs to be settled. According to him, it is inexplicable that the word “pé-de-meia” (nest egg) is written with hyphens while “pé de moleque” (a peanut brittle) is not.

For Professor Stella Maris Bortoni de Figueiredo Ricardo, a member of the Brazilian Linguistics Association (Abralin), any proposed change must be agreed with the signatory countries. “Abralin recommends that the 1990 Orthographic Agreement be consolidated, with no unilateral alteration. Any change one may wish to make to the agreement should be made within the framework of the CPLP [Community of Portuguese Language Countries] and the IILP [International Institute of the Portuguese Language],” she argued.

To debate suggestions for improving the agreement, the Senate Education Committee created a technical working group in 2013, formed by professors Ernani Pimentel and Pasquale Cipro Neto, who are due to present a summary in March 2015. At the committee’s urging, final implementation was postponed from January 2013 to January 2016 by decree of President Dilma Rousseff.

In yesterday’s session (21), the president of the Center for Linguistic Studies of the Portuguese Language, Ernani Pimentel, stirred the discussion by calling for greater grammatical simplification. He leads a movement to adopt a phonetic criterion in spelling, that is, writing words according to how they are spoken. By that criterion the word “chuva” (rain), for example, would be written with an x (“xuva”), with no concern for its origin. For the professor, simplification would spare new generations from being subjected to “outdated rules that demand rote memorization.”

The suggestion was rejected by the grammarian Evanildo Bechara, who believes that phonetic simplification, “apparently ideal,” would create more problems than it solves, since it would wipe out homophones – words that sound the same but differ in spelling and meaning. According to him, the words seção (section), sessão (session) and cessão (cession) would be reduced to a single spelling – sesão – hurting comprehension of the message. “We would apparently have solved a spelling problem, but we would create a bigger problem for the function of language, which is communication between people,” he noted.

The grammarian believes the agreement has real merits and represents progress for the use of the language and for unifying rules among Portuguese-speaking countries. He stressed that the signatory countries may, once the new rules are implemented, approve modifications and adjustments if necessary.

For the committee’s president, Senator Cyro Miranda (PSDB-GO), the purpose of the debates is not to alter the agreement, since, in his view, that role belongs to the Executive, in coordination with the other signatory countries. “Our obligation is to call in the people involved to give their opinions. But it is the Ministry of Education and the Ministry of Foreign Affairs that take the lead. We are pointing out the difficulties and, where possible, we will contribute,” he said.

(Karine Melo / Agência Brasil)

http://agenciabrasil.ebc.com.br/educacao/noticia/2014-10/especialistas-criticam-problemas-no-acordo-ortografico

Saving Native Languages and Culture in Mexico With Computer Games (Indian Country)

Thinkstock

9/21/14

Indigenous children in Mexico can now learn their mother tongues with specialized computer games, helping to prevent the further loss of those languages across the country.

“Three years ago, before we employed these materials, we were on the verge of seeing our children lose our Native languages,” asserted Matilde Hernandez, a teacher in Zitacuaro, Michoacan.

“Now they are speaking and singing in Mazahua as if that had never happened,” Hernandez said, referring to computer software that provides games and lessons in most of the linguistic families of the country, including Mazahua, Chinanteco, Nahuatl of Puebla, Tzeltal, Mixteco, Zapoteco, Chatino and others.

The new software was created by scientists and educators in two research institutions in Mexico: the Victor Franco Language and Culture Lab (VFLCL) of the Center for Investigations and Higher Studies in Social Anthropology (CIHSSA); and the Computer Center of the National Institute of Astrophysics, Optics and Electronics (NIAOE).

According to reports released this summer, the software was developed as a tool to help counteract the educational lag in indigenous communities and to employ these educational technologies so that the children may learn various subjects in an entertaining manner while reinforcing their Native language and culture.

“This software – divided into three methodologies for three different groups of applications – was made by dedicated researchers who have experience with Indigenous Peoples,” said Dr. Frida Villavicencio, Coordinator of the VFLCL’s Language Lab.

“We must have an impact on the children,” she continued, “offering them better methodologies for learning their mother tongues, as well as for learning Spanish and for supporting their basic education in a fun way.”

Villavicencio pointed out that the games and programs were not translated from the Spanish but were developed in the Native languages with the help of Native speakers. She added that studies from Mexico’s National Institute of Indigenous Languages (NIIL) show that the main reason why indigenous languages disappear, or are in danger of doing so, is because in each generation fewer and fewer of the children speak those languages.

“We need bilingual children; only in that way can we preserve their languages,” she added.

Read more at http://indiancountrytodaymedianetwork.com/2014/09/21/saving-native-languages-and-culture-mexico-computer-games-156961

How learning to talk is in the genes (Science Daily)

Date: September 16, 2014

Source: University of Bristol

Summary: Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Scientists discovered a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.


Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Credit: © witthaya / Fotolia

Researchers have found evidence that genetic factors may contribute to the development of language during infancy.

Scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol worked with colleagues around the world to discover a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Children produce their first words at about 10 to 15 months of age, and our vocabulary expands as we grow — from around 50 words at 15 to 18 months, to 200 words at 18 to 30 months, to 14,000 words at six years old, and to over 50,000 words by the time we leave secondary school.

The researchers found the genetic link during the ages of 15 to 18 months when toddlers typically communicate with single words only before their linguistic skills advance to two-word combinations and more complex grammatical structures.

The results, published in Nature Communications today [16 Sept], shed further light on a specific genetic region on chromosome 3, which has been previously implicated in dyslexia and speech-related disorders.

The ROBO2 gene contains the instructions for making the ROBO2 protein. This protein directs chemicals in brain cells and other neuronal cell formations that may help infants to develop language but also to produce sounds.

The ROBO2 protein also closely interacts with other ROBO proteins that have previously been linked to problems with reading and the storage of speech sounds.

Dr Beate St Pourcain, who jointly led the research with Professor Davey Smith at the MRC Integrative Epidemiology Unit, said: “This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans.”

Dr Claire Haworth, one of the lead authors, based at the University of Warwick, commented: “In this study we found that results using DNA confirm those we get from twin studies about the importance of genetic influences for language development. This is good news as it means that current DNA-based investigations can be used to detect most of the genetic factors that contribute to these early language skills.”

The study was carried out by an international team of scientists from the EArly Genetics and Lifecourse Epidemiology Consortium (EAGLE) and involved data from over 10,000 children.

Journal Reference:
  1. Beate St Pourcain, Rolieke A.M. Cents, Andrew J.O. Whitehouse, Claire M.A. Haworth, Oliver S.P. Davis, Paul F. O’Reilly, Susan Roulstone, Yvonne Wren, Qi W. Ang, Fleur P. Velders, David M. Evans, John P. Kemp, Nicole M. Warrington, Laura Miller, Nicholas J. Timpson, Susan M. Ring, Frank C. Verhulst, Albert Hofman, Fernando Rivadeneira, Emma L. Meaburn, Thomas S. Price, Philip S. Dale, Demetris Pillas, Anneli Yliherva, Alina Rodriguez, Jean Golding, Vincent W.V. Jaddoe, Marjo-Riitta Jarvelin, Robert Plomin, Craig E. Pennell, Henning Tiemeier, George Davey Smith. Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 2014; 5: 4831. DOI: 10.1038/ncomms5831

How the IPCC is sharpening its language on climate change (The Carbon Brief)

01 Sep 2014, 17:40

Simon Evans

Barometer | Shutterstock

The Intergovernmental Panel on Climate Change (IPCC) is sharpening the language of its latest draft synthesis report, seen by Carbon Brief.

Not only is the wording around how the climate is changing more decisive, the evidence the report references is stronger too, when compared to the previous version published in 2007.

The synthesis report, due to be published on 2 November, will wrap up the IPCC’s fifth assessment (AR5) of climate change. It will summarise and draw together the information in IPCC reports on the science of climate change, its impacts and the ways it can be addressed.

We’ve compared a draft of the synthesis report with that published in 2007 to find out how they compare. Here are the key areas of change.

Irreversible impacts are being felt already

The AR5 draft synthesis begins with a decisive statement that human influence on the climate is “clear”, that recent emissions are the highest in history and that “widespread and consequential impacts” are already being felt.

This opening line shows how much has changed in the way the authors present their findings. In contrast, the 2007 report opened with a discussion of scientific progress and an extended paragraph on definitions.

There are also a couple of clear thematic changes in the 2014 draft. The first, repeated frequently throughout, is the idea that climate change impacts are already being felt.

For instance it says that the height of coastal floods has already increased and that climate-change-related risks from weather extremes such as heatwaves and heavy rain are “already moderate”.

These observations are crystallised in a long section on Article 2 of the UN’s climate change convention, which has been signed by every country of the world. Article 2 says that the objective of the convention is to avoid dangerous climate change.

The AR5 draft implies the world may already have failed in this task:

“Depending on value judgements and specific circumstances, currently observed impacts might already be considered dangerous for some communities.”

The second theme is a stronger emphasis on irreversible impacts compared to the 2007 version. The 2014 draft says:

“Continued emission of greenhouse gases will cause further warming and long-lasting changes in all components of the climate system, increasing the likelihood of severe, pervasive and irreversible impacts for people and ecosystems.”

It says that a large fraction of warming will be irreversible for hundreds to thousands of years and that the Greenland ice sheet will be lost when warming reaches between one and four degrees above pre-industrial temperatures. Current warming since pre-industrial times is about 0.8 degrees Celsius.

In effect the report has switched tense from future conditional (“could experience”) to present continuous (“are experiencing”). For instance, it says there are signs that some corals and Arctic ecosystems “are already experiencing irreversible regime shifts” because of warming.

Stronger evidence than before

As well as these thematic changes in the use of language, the AR5 synthesis comes to stronger conclusions in many other areas.

This is largely because the scientific evidence has solidified in the intervening seven years, the IPCC says.

We’ve drawn together a collection of side-by-side statements so you can see for yourself how the conclusions have changed. Some of the shifts in language are subtle – but they are significant all the same.

[Table: side-by-side statements from the AR4 and draft AR5 synthesis reports]

Source: IPCC AR4 Synthesis Report, draft AR5 Synthesis Report

Climate alarmism or climate realism?

The authors of the latest synthesis report seem to have made an effort to boost the impact of their words. They’ve used clearer and more direct language along with what appears to be a stronger emphasis on the negative consequences of inaction.

The language around relying on adaptation to climate change has also shifted. It now more clearly emphasises the need for mitigation to cut emissions, if the worst impacts of warming are to be avoided.

Some are bound to read this as an unwelcome excursion into advocacy. But others will insist it is simply a case of better presenting the evidence that was already there, along with advances in scientific knowledge.

Government representatives have the chance to go over the draft AR5 synthesis report with a fine-tooth comb when they meet on 27-31 October.

Will certain countries try to tone down the wording, as they have been accused of doing in the past? Or will the new, more incisive language make the final cut?

To find out, tune in on 2 November when the final synthesis report will be published.

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices to where they came from, the researchers were able to map preferences across continents (arxiv.org/abs/1407.7094).
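
The mapping step described here is, at bottom, a tally of lexical variants grouped by location. The sketch below (in Python) is a minimal illustration of that idea, not the authors’ actual pipeline: the sample tweets, region labels, and variant sets (restricted to the variants the article names) are invented for the example.

    from collections import Counter, defaultdict

    # Hypothetical variant sets, limited to the words named in the article;
    # the study's actual lexicon is far larger.
    VARIANTS = {
        "socks": {"calcetas", "medias", "soquetes"},
        "car": {"coche", "auto", "movi"},
    }

    def count_variants(tweets):
        """Tally how often each variant of each concept appears per region."""
        counts = defaultdict(Counter)  # (concept, region) -> variant counts
        for text, region in tweets:
            words = set(text.lower().split())
            for concept, variants in VARIANTS.items():
                for variant in variants & words:
                    counts[(concept, region)][variant] += 1
        return counts

    tweets = [  # invented examples
        ("me compré unas medias nuevas", "Quito"),
        ("dejé las llaves en el coche", "San Diego"),
        ("el auto no arranca hoy", "rural Jalisco"),
    ]
    for key, tally in count_variants(tweets).items():
        print(key, dict(tally))

Aggregated over millions of geotagged tweets, tallies like these are what allow the preferred variant for each concept to be drawn on a map and compared between cities and the countryside.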

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”

We speak as we feel, we feel as we speak (Science Daily)

Date: June 26, 2014

Source: University of Cologne – Universität zu Köln

Summary: Ground-breaking experiments have been conducted to uncover the links between language and emotions. Researchers were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa. The authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds.

Researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. Credit: Image courtesy of University of Cologne – Universität zu Köln 

A team of researchers headed by the Erfurt-based psychologist Prof. Ralf Rummer and the Cologne-based phoneticist Prof. Martine Grice has carried out some ground-breaking experiments to uncover the links between language and emotions. They were able to demonstrate that the articulation of vowels systematically influences our feelings and vice versa.

The research project looked at the question of whether and to what extent the meaning of words is linked to their sound. The specific focus of the project was on two special cases: the sound of the long ‘i’ vowel and that of the long, closed ‘o’ vowel. Rummer and Grice were particularly interested in finding out whether these vowels tend to occur in words that are positively or negatively charged in terms of emotional impact. For this purpose, they carried out two fundamental experiments, the results of which have now been published in Emotion, the journal of the American Psychological Association.

In the first experiment, the researchers exposed test subjects to film clips designed to put them in a positive or a negative mood and then asked them to make up ten artificial words themselves and to speak these out loud. They found that the artificial words contained significantly more ‘i’s than ‘o’s when the test subjects were in a positive mood. When in a negative mood, however, the test subjects formulated more ‘words’ with ‘o’s.

The second experiment was used to determine whether the different emotional quality of the two vowels can be traced back to the movements of the facial muscles associated with their articulation. Rummer and Grice were inspired by an experimental configuration developed in the 1980s by a team headed by psychologist Fritz Strack. These researchers instructed their test subjects to view cartoons while holding a pen in their mouth in such a way that either the zygomaticus major muscle (which is used when laughing and smiling) or its antagonist, the orbicularis oris muscle, was contracted. In the first case, the test subjects were required to place the pen between their teeth and in the second case between their lips. While their zygomaticus major muscle was contracted, the test subjects found the cartoons significantly more amusing. Instead of this ‘pen-in-mouth test’, the team headed by Rummer and Grice now conducted an experiment in which they required their test subjects to articulate an ‘i’ sound (contraction of the zygomaticus major muscle) or an ‘o’ sound (contraction of the orbicularis oris muscle) every second while viewing cartoons. The test subjects producing the ‘i’ sounds found the same cartoons significantly more amusing than those producing the ‘o’ sounds instead.

In view of this outcome, the authors concluded that it would seem that language users learn that the articulation of ‘i’ sounds is associated with positive feelings and thus make use of corresponding words to describe positive circumstances. The opposite applies to the use of ‘o’ sounds. And thanks to the results of their two experiments, Rummer and Grice now have an explanation for a much-discussed phenomenon. The tendency for ‘i’ sounds to occur in positively charged words (such as ‘like’) and for ‘o’ sounds to occur in negatively charged words (such as ‘alone’) in many languages appears to be linked to the corresponding use of facial muscles in the articulation of vowels on the one hand and the expression of emotion on the other.

Journal Reference:

  1. Ralf Rummer, Judith Schweppe, René Schlegelmilch, Martine Grice. Mood is linked to vowel type: The role of articulatory movements. Emotion, 2014; 14 (2): 246. DOI: 10.1037/a0035752

Rapid Language Evolution in 19th-century Brazil: Data Mining, Literary Analysis and Evolutionary Biology – A Study of Six Centuries of Portuguese-language Texts (Stanford University)

Reporter: Aviva Lev-Ari, PhD, RN

Stanford collaboration offers new perspectives on evolution of Brazilian language

Using a novel combination of data mining, literary analysis and evolutionary biology to study six centuries of Portuguese-language texts, Stanford scholars discover the literary roots of rapid language evolution in 19th-century Brazil.

Photo: L.A. Cicero. Stanford biology Professor Marcus Feldman, left, and Cuauhtémoc García-García, a graduate student in Iberian and Latin American Cultures, combined forces to investigate the evolution of Portuguese as spoken in Brazil.

Literature and biology may not seem to overlap in their endeavors, but a Stanford project exploring the evolution of written language in Brazil is bringing the two disciplines together.

Over the last 18 months, Iberian and Latin American Cultures graduate student Cuauhtémoc García-García and biology Professor Marcus Feldman have been working together to trace the evolution of the Brazilian Portuguese language through literature.

By combining Feldman’s expertise in mathematical analysis of cultural evolution with García-García’s knowledge of Latin American culture and computer programming, they have produced quantifiable evidence of rapid historical changes in written Brazilian Portuguese in the 19th and 20th centuries.

Specifically, Feldman and García-García are studying the changing use of words in tens of thousands of texts, with a focus on the personal pronouns that Brazilians used to address one another.

Their digital analysis of linguistic development in literary texts reflects Brazil’s complex colonial history.

The change in the use of personal pronouns, a daily part of social and cultural interaction, formed part of an evolving linguistic identity that was specific to Brazil, and not its Portuguese colonizers.

“We believe that this fast transition in the written language was due primarily to the approximately 300-year prohibition of both the introduction of the printing press and the foundation of universities in Brazil under Portuguese rule,” García-García said.

What Feldman and García-García found was that spoken language did in fact evolve during those 300 years, but little written evidence of that process exists because colonial restrictions on printing and literacy prevented language development in the written form.

A national sentiment of “write as we speak” arose in Brazil after Portuguese rule ended. García-García said their data shows an abrupt introduction in written texts of the spoken pronouns that were developed during the 300-year colonization period.

Drawing on Feldman’s experience with theoretical and statistical evolutionary models, García-García developed computer programs that count certain words to see how often they appear and how their use has changed over hundreds of years.

In Brazilian literary works produced in the post-colonial period, Feldman said, they have “found examples of written linguistic evolution over short time periods, contrary to the longer periods that are typical for changes in language.”

The findings will figure prominently in García-García’s dissertation, which addresses the transmission of written language across time and space.

The project’s source materials include about 70,000 digitized works in Portuguese from the 13th to the 21st century, ranging from literature and newspapers to technical manuals and pamphlets.

García-García, a member of The Digital Humanities Focal Group at Stanford, said their research “shows how written language changed, and through these changes in pronoun use, we now have a better understanding of how Brazilian writing evolved following the introduction of the printing press.”

Feldman, a population geneticist and one of the founders of the quantitative theory of cultural evolution, said he sees their project as a natural approach to linguistic evolution.

“I believe that evolutionary science and the humanities have a lot to offer each other in both theoretical and empirical explorations,” Feldman said.

Language by the numbers

García-García became interested in language evolution while studying Brazilian Portuguese under the instruction of Stanford lecturer Lyris Wiedemann. He approached Feldman, proposing an evolutionary study of Brazilian Portuguese, and Feldman agreed to help him analyze the data. García-García then enlisted Stanford lecturer Agripino Silveira, who provided linguistic expertise.

García-García worked with Stanford Library curators Glen Worthey, Adan Griego and Everardo Rodriguez for more than a year to develop the technical infrastructure and copyright clearance he needed to access Stanford’s entire digitized corpus of Portuguese language texts. After incorporating even more source material from the HathiTrust digital archive, García-García began the time-consuming task of “cleaning” the corpus, so data could be effectively mined from it.

“Sometimes there were duplicates, issues with the digitization, and works with multiple editions that created ‘noise’ in the corpus,” he said.

Following months of preparation, Feldman and García-García were able to begin data mining. Specifically, they counted the incidences of two pronouns, tu and você, which both mean the singular “you,” and how their incidence in literature changed over time.

“After running various searches, I could correlate results and see how and when certain words were used to build up a comprehensive image of this evolution,” he said.

Tu was – and still is – used in Portugal as the typical way to say ‘you.’ But, in Brazil, você is the more normal way to say it, particularly in major cities like Rio de Janeiro and São Paulo where the majority of the population lives,” García-García explained.

However, that was not always the case. When Brazil was a Portuguese colony, and up until the arrival of the printing press in 1808, tu was the canonical form in written language.

As part of the run-up to independence in 1822, universities and printing presses were established in Brazil for the first time in 1808, having been prohibited by the Portuguese colonizers in what García-García calls “cultural repression.”

By the late 19th century, você emerged as the way to address people, shedding part of the colonial legacy, and tu quickly became less prominent in written Brazilian Portuguese.

“Our findings quantifiably show how pronoun use developed. We have found that around 1840, você was used about 10-15 percent of the time by authors to say ‘you.’ By the turn of the century, this had increased to about 70 percent,” García-García said.

“Our data suggest that você was rarely used in the late 17th and 18th centuries, but really appears and takes hold in the middle of the 19th century, a few decades after 1808. Thus, the late arrival of the printing press marks a critical point for understanding the evolution of written Portuguese in Brazil,” he said.
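
As a rough illustration of this kind of corpus measurement, the sketch below computes the share of você among second-person pronouns per decade. It is a minimal stand-in that assumes the corpus comes as (year, text) pairs; the regex tokenization and the two miniature sample texts are invented, and the authors’ actual counting method is not described in this detail.

    import re
    from collections import defaultdict

    # Word-boundary patterns for the two pronouns; 'você' may appear
    # unaccented in older or noisy digitized text.
    TU = re.compile(r"\btu\b", re.IGNORECASE)
    VOCE = re.compile(r"\bvoc[eê]\b", re.IGNORECASE)

    def voce_share_by_decade(corpus):
        """Return, per decade, the fraction of 'you' pronouns that are você."""
        hits = defaultdict(lambda: [0, 0])  # decade -> [tu count, você count]
        for year, text in corpus:
            decade = (year // 10) * 10
            hits[decade][0] += len(TU.findall(text))
            hits[decade][1] += len(VOCE.findall(text))
        return {d: v / (t + v) for d, (t, v) in sorted(hits.items()) if t + v}

    corpus = [  # invented miniature corpus
        (1843, "Tu não sabes o que dizes, e tu partirás amanhã."),
        (1899, "Você vem? Você sabe que isso mudou por aqui."),
    ]
    print(voce_share_by_decade(corpus))  # e.g. {1840: 0.0, 1890: 1.0}

Run over tens of thousands of dated works, a curve of this share over time is exactly the kind of evidence behind the 10-15 percent (circa 1840) to 70 percent (circa 1900) figures quoted above.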

From Romanticism to realism

Their research revealed an intriguing literary coincidence – the period of transition from tu to você correlated with the broad change in the dominant literary genre in Brazilian literature from European Romanticism to Latin American realism.

Interestingly, the researchers noticed that the rapid change was most evident several decades after Brazil’s independence in the 1820s because it took that long for Brazilian writers to develop their own voice and style.

For centuries Brazilian writers were forced to write in the style of the Portuguese, but as García-García said, “with their new freedom they wanted to write stories that reflected their national identity.”

“Machado de Assis, arguably Brazil’s greatest author, is a fine example. His early novels are archetypally Romanticist, and then his later novels are deeply Realist, and the use of the pronouns shift from one to the other,” García-García said.

Nonetheless, in Machado’s work there is sometimes a purposeful switch back to the tu form if, for example, the author wanted to evoke a certain sentiment or change the narrative voice.

“The data-mining project cannot ascertain subtle uses of words and how, in some works, the pronouns are ‘interchangeable,’” he added.

Computational expertise was no substitute for literary expertise, and García-García used the two disciplines in tandem to get a clearer picture in his data.

“I had to stop using the computer and go back to a close reading of a large sample of books, and the literary genre change reflects this period of post-colonial social and historical change,” he said.

Feldman and García-García hope to use their methodology to explore different languages.

“Next we hope to study the digitized Spanish language corpus, which currently comprises close to a quarter of a million works from the last 900 years,” García-García said.

Tom Winterbottom is a doctoral candidate in Iberian and Latin American Cultures at Stanford. For more news about the humanities at Stanford, visit the Human Experience.

http://news.stanford.edu/news/2014/june/evolution-language-brazil-060414.html

Talking Neanderthals challenge the origins of speech (Science Daily)

Date: March 2, 2014

Source: University of New England

Summary: We humans like to think of ourselves as unique for many reasons, not least of which being our ability to communicate with words. But ground-breaking research shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.

A model of an adult Neanderthal male head and shoulders on display in the Hall of Human Origins in the Smithsonian Museum of Natural History in Washington, D.C. Reconstruction based on the Shanidar 1 fossil (c. 80-60 kya). Credit: By reconstruction: John Gurche; photograph: Tim Evanson [CC-BY-SA-2.0], via Wikimedia Commons

We humans like to think of ourselves as unique for many reasons, not least of which being our ability to communicate with words. But ground-breaking research by an expert from the University of New England shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.

Pinpointing the origin and evolution of speech and human language is one of the longest running and most hotly debated topics in the scientific world. It has long been believed that other beings, including the Neanderthals with whom our ancestors shared Earth for thousands of years, simply lacked the necessary cognitive capacity and vocal hardware for speech.

Associate Professor Stephen Wroe, a zoologist and palaeontologist from UNE, along with an international team of scientists using 3D x-ray imaging technology, made the revolutionary discovery challenging this notion, based on a 60,000-year-old Neanderthal hyoid bone discovered in Israel in 1989.

“To many, the Neanderthal hyoid discovered was surprising because its shape was very different to that of our closest living relatives, the chimpanzee and the bonobo. However, it was virtually indistinguishable from that of our own species. This led to some people arguing that this Neanderthal could speak,” A/Professor Wroe said.

“The obvious counterargument to this assertion was that the fact that hyoids of Neanderthals were the same shape as modern humans doesn’t necessarily mean that they were used in the same way. With the technology of the time, it was hard to verify the argument one way or the other.”

However advances in 3D imaging and computer modelling allowed A/Professor Wroe’s team to revisit the question.

“By analysing the mechanical behaviour of the fossilised bone with micro x-ray imaging, we were able to build models of the hyoid that included the intricate internal structure of the bone. We then compared them to models of modern humans. Our comparisons showed that in terms of mechanical behaviour, the Neanderthal hyoid was basically indistinguishable from our own, strongly suggesting that this key part of the vocal tract was used in the same way.

“From this research, we can conclude that it’s likely that the origins of speech and language are far, far older than once thought.”

Journal Reference:

  1. Ruggero D’Anastasio, Stephen Wroe, Claudio Tuniz, Lucia Mancini, Deneb T. Cesana, Diego Dreossi, Mayoorendra Ravichandiran, Marie Attard, William C. H. Parr, Anne Agur, Luigi Capasso. Micro-Biomechanics of the Kebara 2 Hyoid and Its Implications for Speech in Neanderthals. PLoS ONE, 2013; 8 (12): e82261. DOI: 10.1371/journal.pone.0082261

Language and Tool-Making Skills Evolved at the Same Time (Science Daily)

Sep. 3, 2013 — Research by the University of Liverpool has found that the same brain activity is used for language production and making complex tools, supporting the theory that they evolved at the same time.

Three hand axes produced by participants in the experiment. Front, back and side views are shown. (Credit: Image courtesy of University of Liverpool)

Researchers from the University tested the brain activity of 10 expert stone tool makers (flint knappers) as they undertook a stone tool-making task and a standard language test.

Brain blood flow activity measured

They measured the brain blood flow activity of the participants as they performed both tasks using functional Transcranial Doppler Ultrasound (fTCD), commonly used in clinical settings to test patients’ language functions after brain damage or before surgery.

The researchers found that brain patterns for both tasks correlated, suggesting that they both use the same area of the brain. Language and stone tool-making are considered to be unique features of humankind that evolved over millions of years.

Darwin was the first to suggest that tool-use and language may have co-evolved, because they both depend on complex planning and the coordination of actions, but until now there has been little evidence to support this.

Dr Georg Meyer, from the University Department of Experimental Psychology, said: “This is the first study of the brain to compare complex stone tool-making directly with language.

Tool use and language co-evolved

“Our study found correlated blood-flow patterns in the first 10 seconds of undertaking both tasks. This suggests that both tasks depend on common brain areas and is consistent with theories that tool-use and language co-evolved and share common processing networks in the brain.”

Dr Natalie Uomini from the University’s Department of Archaeology, Classics & Egyptology, said: “Nobody has been able to measure brain activity in real time while making a stone tool. This is a first for both archaeology and psychology.”

The research was supported by the Leverhulme Trust, the Economic and Social Research Council and the British Academy. It is published in PLOS ONE.

Journal Reference:

  1. Natalie Thaïs Uomini, Georg Friedrich Meyer. Shared Brain Lateralization Patterns in Language and Acheulean Stone Tool Production: A Functional Transcranial Doppler Ultrasound Study. PLoS ONE, 2013; 8 (8): e72693. DOI: 10.1371/journal.pone.0072693

Before Babel? Ancient Mother Tongue Reconstructed (Live Science)

Tia Ghose, LiveScience Staff Writer

06 May 2013, 03:00 PM ET

An old oil painting of the Tower of Babel. The idea of a universal human language goes back at least to the Bible, in which humanity spoke a common tongue but was punished with mutual unintelligibility after trying to build the Tower of Babel all the way to heaven. Now scientists have reconstructed words from such a language. Credit: Pieter Brueghel the Elder (1526/1530–1569)

The ancestors of people from across Europe and Asia may have spoken a common language about 15,000 years ago, new research suggests.

Now, researchers have reconstructed words, such as “mother,” “to pull” and “man,” which would have been spoken by ancient hunter-gatherers, possibly in an area such as the Caucasus. The word list, detailed today (May 6) in the journal Proceedings of the National Academy of Sciences, could help researchers retrace the history of ancient migrations and contacts between prehistoric cultures.

“We can trace echoes of language back 15,000 years to a time that corresponds to about the end of the last ice age,” said study co-author Mark Pagel, an evolutionary biologist at the University of Reading in the United Kingdom.

Tower of Babel

The idea of a universal human language goes back at least to the Bible, in which humanity spoke a common tongue but was punished with mutual unintelligibility after trying to build the Tower of Babel all the way to heaven. [Image Gallery: Ancient Middle-Eastern Texts]

But not all linguists believe in a single common origin of language, and trying to reconstruct that language seemed impossible. Most researchers thought they could only trace a language’s roots back 3,000 to 4,000 years. (Even so, researchers recently said they had traced the roots of a common mother tongue to many Eurasian languages back 8,000 to 9,500 years to Anatolia, a southwestern Asian peninsula that is now part of Turkey.)

Pagel, however, wondered whether language evolution proceeds much like biological evolution. If so, the most critical words, such as the frequently used words that define our social relationships, would change much more slowly.

To find out if he could uncover those ancient words, Pagel and his colleagues in a previous study tracked how quickly words changed in modern languages. They identified the most stable words. They also mapped out how different modern languages were related.

They then reconstructed ancient words based on the frequency at which certain sounds tend to change in different languages — for instance, p’s and f’s often change over time in many languages, as in the change from “pater” in Latin to the more recent term “father” in English.
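
The pater/father pair reflects a regular sound correspondence (Grimm’s law shifts p to f and t to th between Latin and Germanic cognates). The snippet below is a toy demonstration of applying such correspondence rules mechanically; it is illustrative only, and bears no resemblance to the probabilistic model the researchers actually used.

    # Toy Latin-to-Germanic correspondence rules (Grimm's-law-like shifts).
    RULES = [("p", "f"), ("t", "th")]

    def apply_rules(word):
        """Apply each sound-correspondence rule across the whole word."""
        for old, new in RULES:
            word = word.replace(old, new)
        return word

    # Outputs are idealized cognate skeletons: pater -> father,
    # piscis -> fiscis (cf. 'fish'), tres -> thres (cf. 'three').
    for latin in ["pater", "piscis", "tres"]:
        print(latin, "->", apply_rules(latin))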

The researchers could predict what 23 words, including “I,” “ye,” “mother,” “male,” “fire,” “hand” and “to hear” might sound like in an ancestral language dating to 15,000 years ago.

In other words, if modern-day humans could somehow encounter their Stone Age ancestors, they could say one or two very simple statements and make themselves understood, Pagel said.

Limitations of tracing language

Unfortunately, this language technique may have reached its limits in terms of how far back in history it can go.

“It’s going to be very difficult to go much beyond that, even these slowly evolving words are starting to run out of steam,” Pagel told LiveScience.

The study raises the possibility that researchers could combine linguistic data with archaeology and anthropology “to tell the story of human prehistory,” for instance by recreating ancient migrations and contacts between people, said William Croft, a comparative linguist at the University of New Mexico, who was not involved in the study.

“That has been held back because most linguists say you can only go so far back in time,” Croft said. “So this is an intriguing suggestion that you can go further back in time.”

Cracking the Semantic Code: Half a Word’s Meaning Is 3-D Summary of Associated Rewards (Science Daily)

Feb. 13, 2013 — We make choices about pretty much everything, all the time — “Should I go for a walk or grab a coffee?”; “Shall I look at who just came in or continue to watch TV?” — and to do so we need something common as a basis to make the choice.

Half of a word’s meaning is simply a three dimensional summary of the rewards associated with it, according to an analysis of millions of blog entries. (Credit: © vlorzor / Fotolia)

Dr John Fennell and Dr Roland Baddeley of Bristol’s School of Experimental Psychology followed a hunch that the common quantity, often referred to simply as reward, was a representation of what could be gained, together with how risky and uncertain it is. They proposed that these dimensions would be a unique feature of all objects and be part of what those things mean to us.

Over 50 years ago, psychologist Charles Osgood developed an influential method, known as the ‘semantic differential’, that attempts to measure the connotative, emotional meaning of a word or concept. Osgood found that about 50 per cent of the variation in a large number of ratings that people made about words and concepts could be captured using just three summary dimensions: ‘evaluation’ (how nice or good the object is), ‘potency’ (how strong or powerful an object is) and ‘activity’ (whether the object is active, unpredictable or chaotic). So, half of a concept’s meaning is simply a measure of how nice, strong, and active it is. The main problem is that, until now, no one knew why.

Dr Baddeley explained: “Over time, we keep a running tally of all the good and bad things associated with a particular object. Later, when faced with a decision, we can simply choose the option that in the past has been associated with more good things than bad. This dimension of choice sounds very much like the ‘evaluation’ dimension of the semantic differential.”

To test this, the researchers needed to estimate the number of good or bad things happening. At first sight, estimating this across a wide range of contexts and concepts seems impossible; someone would need to be observed throughout his or her lifetime and, for each of a large range of contexts and concepts, the number of times good and bad things happened recorded. Fortunately, a more practical solution is provided by the recent phenomenon of internet blogs, which describe aspects of people’s lives and are also searchable. Sure enough, after analysing millions of blog entries, the researchers found that the evaluation dimension was a very good predictor of whether a particular word was found in blogs describing good situations or bad.
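
In effect, the researchers’ test amounts to checking whether a word’s rated ‘evaluation’ predicts the balance of good and bad blog contexts it occurs in. The toy function below shows one plausible way to score that balance; the smoothing choice, the labels, and the three sample posts are all invented for illustration and are not the authors’ actual procedure.

    from collections import Counter

    def evaluation_score(word, labeled_posts):
        """Ratio > 1: the word skews toward posts describing good situations."""
        good, bad = Counter(), Counter()
        for text, label in labeled_posts:
            target = good if label == "good" else bad
            target.update(text.lower().split())
        # Add-one smoothing avoids division by zero for unseen words.
        return (good[word] + 1) / (bad[word] + 1)

    posts = [  # invented blog snippets with situation labels
        ("what a lovely sunny picnic with friends", "good"),
        ("stuck alone in traffic all day", "bad"),
        ("i like this sunny little cafe", "good"),
    ]
    print(evaluation_score("sunny", posts))  # 3.0 -> skews 'good'
    print(evaluation_score("alone", posts))  # 0.5 -> skews 'bad'

A strong correlation between context scores of this kind and Osgood’s evaluation ratings is the pattern the study reports.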

Interestingly, they also found that how frequently a word was used was also a good predictor of how much we like it. This is a well-known effect — the ‘mere exposure effect’ — and a mainstay of the multi-billion dollar advertising industry. When comparing two options we just choose the option we like the most — and we like it because in the past it has been associated with more good things.

Analysing the data showed that ‘potency’ was a very good predictor of the probability of bad situations being associated with a given object: it measured one kind of risk.

Dr Fennell said: “This kind of way of quantifying risk is called ‘value at risk’ in financial circles, and the perils of ignoring it have been plain to see. Russian Roulette may be, on average, associated with positive rewards, but the risks associated with it are not for everyone!”

It is not the only kind of risk, though. In many situations, ‘activity’ — that is, unpredictability, or more importantly uncontrollability — is a highly relevant measure of risk: a knife in the hands of a highly trained sushi chef is probably safe, a knife in the hands of a drunk, erratic stranger is definitely not.

Dr Fennell continued: “Again, this different kind of risk is relevant in financial dealings and is often called volatility. It seems that the mistake that was made in the credit crunch was not ignoring this kind of risk, but to assume that you could perfectly guess it based on how unpredictable it had been in the past.”

Thus, the researchers propose that half of meaning is simply a summary of how rewarding, and importantly, how much of two kinds of risk is associated with an object. Being sensitive not only to rewards, but also to risks, is so important to our survival, that it appears that its representation has become wrapped up in the very nature of the language we use to represent the world.

Journal Reference:

  1. John G. Fennell, Roland J. Baddeley. Reward Is Assessed in Three Dimensions That Correspond to the Semantic Differential. PLoS ONE, 2013; 8 (2): e55588. DOI: 10.1371/journal.pone.0055588

Juridiquês [Legalese] (Sopro 83)


Juridiquês
 Alexandre Nodari


If it had been possible to build the Tower of Babel without climbing to the top of it, it would have been permitted
(Kafka)

1. A bill is making its way through the National Congress, authored by Maria do Rosário, that seeks to add a fourth clause to Article 458 of the Code of Civil Procedure, which concerns the “essential requirements of a judgment,” making it mandatory to include “a restatement of the operative part of the judgment in colloquial language, without the use of terms exclusive to technical-legal language, together with whatever considerations the judicial authority deems necessary, so that the judicial ruling can be fully understood by any person of the people.” The proposal clearly aims to broaden access to Justice and has a democratizing intent. Yet even if the bill seems reasonable on its own, set against the torrent of laws and bills that seek to regulate every aspect of human life, from cigarettes to language (a few years ago, the communist-ruralist Aldo Rebelo tried to ban foreign borrowings from Portuguese), we cannot help taking at least a skeptical stance toward it. The bill in itself may be good, but placed in the context of the normative inflation that seeks to purify every aspect of human life, we cannot help having reservations. The desire for cleanliness, for hygiene, for clarity, runs through society as a whole – and that desire serves the ambitions of power, or at least is channeled by it. Dominique Laporte, in his History of Shit, recalls that it was in the same year of 1539 that France: 1) first required that laws, administrative acts, judicial proceedings and notarial documents be written in the vernacular, eliminating the ambiguities and uncertainties of Latin and making “clarity” possible; 2) and, soon afterwards, forbade citizens from throwing their excrement – their feces and urine – into the street. Cleaning up language and cleaning up the city: the centralization of power that would culminate in what we commonly call absolutism has its roots in this will to purity and cleanliness, in this crystalline ideal. Beyond this “desire for clarity,” however, it is worth noting a kind of slip contained in the bill’s “Justification”; perhaps it is not in fact a slip but something intentional, which matters little. The final paragraph of the justification speaks of a “translation into the common vernacular of the technical text of the judicial ruling,” as if judgments were not written in Portuguese. There is an essential truth about Law here: it is a language different from the “common vernacular.” In the famous Apology of Socrates, the old sage, speaking before the tribunal that accused him of impiety, says he is “a stranger to the language” spoken there, and asks to be treated as a foreigner who does not know Greek. Law is not a foreign language the way English or Latin are foreign to Portuguese or Greek: Law is the Portuguese or Greek language under another regime of operation. Before our own country’s Law, we are like foreigners who do not know their own language. But what is the regime of operation of that language which, in the “common vernacular,” goes by the name of “juridiquês” – legalese?

2. In a beautiful text on the figure of the notary, Salvatore Satta, one of the most brilliant jurists of the twentieth century, summed up the “drama” of the clerk or scrivener, those mediators between laymen and jurists, as follows: “To know the will that the one who wills does not know.” It is not that “the one who wills” does not know his own will; “the one who wills” does not know how to translate it juridically. That is, Satta continues, what the notary actually does is “reduce the will of the party into the will of the legal order.” This is the meaning of the Latin maxim Da mihi factum, dabo tibi jus (“Give me the facts and I will give you the law”): to reduce the “volition aimed at a practical goal that the party intends to achieve into a juridical and juridically typified will,” that is, to translate a will, a fact, an act of life into legal types. Law does not properly deal with facts or acts, but with juridical facts or acts, those corresponding to certain foreseen types. To carry an act or fact of life into Law is to typify it. In this sense, the type is perhaps the basic grammatical element of legal language. But what exactly is a type? The person who reflected most acutely on the notion of “type” was not a jurist but a sociologist, Max Weber, who grounded his method in the so-called “ideal types,” in opposition to Durkheim’s empirical-comparative method. For Weber, pure or ideal types could not be found “in reality”; what existed “in fact” was always a composite, more or less hybrid, of types which – hence their circular nature – were constructed from elements dispersed in the very “reality” to which they were applied. The etymology of type already points to its ambiguous character, between the empirical and the abstract: the Greek typos means image, vestige, trace – that is, absence, the index of an immemorial presence. To use an example from Vilém Flusser: “typoi are like the tracks a bird’s feet leave on the sand of the beach. The word thus means that these tracks can be used as models for classifying the bird in question.” The two forms of Law known to the West are the two faces of the type: the Roman-Germanic tradition starts from statutes, from abstraction, from the type, in order to reach the empirical case; the Common Law, conversely, starts from empirical cases and converts them into typical, abstract ones. But, as Satta says, in typification there is a reduction; something is lost – including ordinary language.

3. The type answers a basic need of the functioning of Law, and of the modus operandi of its specific (or typical) language: prescription. “If” type X occurs or is present, “then” the consequence, the sanction, is Y. The problem of every trial lies in knowing whether event A of life corresponds to type X, so that consequence Y may follow. Since norms are grounded in types, which are nothing but language with no necessary relation to the things and facts of life, a discursive construction is needed to connect the event of life to the legal type – if Law were pure subsumption, Giorgio Agamben reminds us, we could dispense with that immense judicial apparatus called the trial, which involves not only the judge, the lawyer and the prosecutor, but countless other mediators between ordinary language and legal language (the notary, the stenographer, etc.). That is why, for this typification to take place, not only must the legally relevant fact be cast into the form of a type, but so must everything that surrounds it, so that singularity is reduced to typification, that is, to the reproduction of that typical case (in the form of case law). We know well how this works: from police reports to judgments, the facts of life are narrated in a language that renders them typical, abstract – and reproducible. Italo Calvino masterfully summed up this “unsettling” process of translation:


The clerk sits at the typewriter. The person being questioned, seated across from him, answers the questions with a slight stammer, but careful to say, with the greatest possible exactness, everything he has to say and not one word more: “Early this morning I was going down to the basement to light the heater when I found all those bottles of wine behind the coal bin. I took one to have with dinner. I didn’t know the liquor store upstairs had been broken into.” Impassive, the clerk rattles out his faithful transcription on the keys: “The undersigned, having proceeded to the basement in the early hours of the morning to initiate the operation of the heating installation, declares that he chanced upon a considerable quantity of vinicultural products, located behind the receptacle intended for the storage of fuel, and that he effected the removal of one of the aforementioned articles with the intention of consuming it during the evening meal, being unaware of the previously occurred burglary of the overlying commercial establishment.”

Calvino called this “semantic terror,” or the “anti-language”: “the flight from every word that has a meaning of its own.” The danger, in his view, was that this “anti-language” would invade everyday life. But in this flight from the word that has a meaning of its own, there is a move toward words that span more than one meaning and can therefore be reproduced in many situations. This reproducibility is, as we have already stressed, essential to a language based on types – it is what distinguishes, according to Flusser, the notion of type from the notion of character, which privileges what is characteristic, that is, what is proper.

4. The type, then, as the basic element of legal grammar, serves to make norms reproducible in the face of the singularity of life’s events; but to do so, it abstracts from (and abstracts) those events. Trials and norms, composed of countless types, thus run alongside life, as if they were a fictional narrative. The great Romanist Yan Thomas argues that “fiction is a procedure that (…) belongs to the pragmatics of law.” The ancient Romans, Thomas continues, had no qualms, when facing an exceptional situation in which they did not want a given rule to apply, about juridically changing the situation rather than altering the rule. One example among many: seeking to validate the wills of certain citizens who had died while in enemy captivity – which, by law, invalidated such wills – the Lex Cornelia of 81 B.C. chose to create a fiction, of which we know two versions: 1) the first, a positive fiction, was to treat the wills as if the citizens had died under the normal status of citizenship; 2) the second, a negative fiction, held the wills valid as if the citizens had not died in the enemy’s power. Why this discursive distancing from “reality,” from life? Why, in its narrative, or in its form, does Law depart from ordinary storytelling, creating another reality, almost a parallel dimension? Here enters the second element of the prescriptive language that characterizes Law: the sanction, the “then Y.” The function of Law, as we know, is to alter reality, life, through language, through the word – that is, to create efficacious words, even if securing the efficacy of a statute or a judgment requires resort to public force. (Indeed, no common vernacular is plain enough to explain to “any person of the people” that the judgment ruling in their favor must still be enforced, in a procedure that will drag on for several more years.) It is from this function of Law, altering reality through language, that the retrospective illusion arises of a pre-juridical stage in which religion, magic and law coincided. In truth, what Law and Magic share is the same modus operandi of language, the performative (“I swear,” “I sentence you,” “I promise”), in which, in Agamben’s words, “the meaning of an utterance (…) coincides with the reality that is itself produced by the act of utterance.” In this sense, Law is, even today, magical. Jurists’ taste for ornamental language, for maxims, for ritual language and for euphemism stems from this connection: reality can be created out of an empty language (or one emptied, removed from reality). We could therefore say that Law is, at one and the same time, the quasi-magical knowledge of this modus operandi and the guarantee that such performative language is converted into act – that contracts are performed, that laws are applied, and so on. Yet for Law to operate magically on reality, it must distance itself from reality; for its language to produce effects on life, it must depart from the language that communicates or expresses, the “common vernacular.”

5. Perhaps, then, “juridiquês,” Brazilian legalese, is not (merely) a judicial practice that goes back to bacharelismo and pseudo-erudition, an old residue that could simply be removed. Rather, it may be a judicial practice constitutive of what we know as Law. Émile Benveniste, dwelling on the fact that the Latin verb iurare (to swear) corresponds to the noun ius, which we are used to translating as “law” or “right,” argues that ius should really mean “the formula of conformity”: “ius, in general, is truly a formula and not an abstract concept.” It is worth noting that Benveniste finds in the ius of Roman law the “magical” character we have been pointing out, the separation from ordinary language together with the production of effects upon reality; and he shows that this character was already present in the document jurists usually regard as one of the cornerstones of Western law, the Law of the Twelve Tables. Benveniste writes: “iura is the collection of judgments of law. (…) These iura (…) are formulas that pronounce a decision of authority; and wherever these terms [ius, iura] are taken in their strict sense, we find (…) the notion of fixed texts, of established formulas, whose possession is the privilege of certain individuals, certain families, certain corporations. The exemplary type of these iura is represented by the most ancient code of Rome, the Law of the Twelve Tables, originally composed of judgments formulating the state of ius and pronouncing: ita ius esto. Here is the empire of the word, manifested in terms of concordant meaning; in Latin, iu-dex. (…) It is not doing, but always pronouncing, that is constitutive of ‘law’: ius dicere and iu-dex lead us back to this constant connection. (…) It is through this act of speech, ius dicere, that the whole terminology of the judicial process developed: iudex, iudicare, iudicium, iuris-dictio, etc.”

The type, typification, is thus one of the modes by which language is converted into formula. The formulaic functioning of language in Law, its complete removal from ordinary language, can best be seen in those crimes that concern language itself. Two examples, one ancient and one very recent, show how this touches the very logic of Law. The first comes from the famous Greek orator Lysias, who lived at the turn of the fifth to the fourth century B.C. In his speech Against Theomnestus, Lysias argues that the law against slander was innocuous, inasmuch as it prohibited calling someone a “murderer” (androphonon) but was incapable of punishing someone who, like Theomnestus, accused another of having “killed” (apektonenai) his father. The other case occurred in March 2010, in the Supremo Tribunal Federal, Brazil’s supreme court. Arguing against racial quotas, former senator Demóstenes Torres said that “black women (slaves) maintained ‘consensual relations’ with white men (their masters).” What consent, we may ask, can there be between subjects locked in a relation of master and slave? Yet, unsurprisingly, none of the eleven justices of “unblemished reputation” and “notable legal learning” saw racism there. Had the argument been phrased differently (with a reference, say, to a “natural concupiscence” of black women, to take an example from the nefarious racist tradition of the Brazilian judiciary), it might have amounted to a juridical occurrence of racism. For something to be inscribed in the sphere of Law, it must be formalized, or better, formularized, turned into formula. This is not merely a matter of inscription in legislation, in a statute passed by the legislature.
Law can exist, and remain grounded in formalism, even where there is no statute in the strict sense, as customary law proves. Formalization is a process larger than statute: it encompasses the entire judicial machine, including judges, judicial decisions, lawyers, jurists, the so-called “doctrine,” and reaches all the way into society. It consists in fixing permitted or prohibited contents in formulas, a procedure that, as we saw with types, makes them reproducible. This is the paradox of what is usually called, mostly pejoratively, “political correctness”: at the same time that it produces undeniable material advances, it remains bound to its own formality. That is, the formulas, what one may (not) do or say, bear upon the world and change the world, yet they never lose their character as formulas. Those who champion Law as a mechanism of social transformation (or even merely as a progressive tool) sooner or later run up against this paradox: Law guarantees only what is embodied in formulas (and it is precisely formulas that, at times, block social transformation). From the moment one advocates legal recognition of rights that Law does not recognize, one is advocating the formalization of those rights. Indeed, the opposition between substantive law and formal law is idle: insofar as the formalization of rights is a historical process, every formal right was once merely a substantive right, and may become one again. No one is convicted for uttering speech with racist content (substance); the crime of racism exists only when that content is enunciated in a certain form, through a certain formula.

6. Every jurist knows Hans Kelsen’s normative “pyramid,” in which norms are ordered hierarchically (the lower strata derive their validity from the higher ones) and at whose apex stands the “basic norm.” The problem, as is well known, is that this basic norm is empty of content, that is, presupposed, imaginary, fictional (to establish the status of the basic norm, Kelsen drew on Vaihinger’s Philosophy of “As If,” for which even scientific discourse ultimately rests on some fiction). It is, in short, a way of giving validity to the system, of referring it back to the One (even if some want to tie it to the principle that pacts must be kept, pacta sunt servanda, and others, far more narrow-minded, to the Constitution). We would thus have a system of norms with content resting on a contentless, fictitious norm. Perhaps, though, it would be more productive to understand Law the other way around: a system of empty norms resting on a single norm with content, namely, that the fiction we know as Law is true. In the present historical moment, we could say that this basic norm crystallizes in two principles: that no one may plead ignorance of the law (closure), and that no judge may refuse to decide a case (openness). In other words, the content of the basic norm would be that Law is a system at once (but not paradoxically) open and closed, which is to say, potentially Total. Closure and dissemination are connected in Law. To be “true,” Law cannot own up to its status as pure language; rather, it must annul that status, endowing all language with a potential “efficacy.” Since norms and processes are nothing but language, with no necessary relation to things, this principle is needed to establish that some relation between words (norms) and things (facts) must obtain. It is from this empty character of norms and processes, from their grounding in language rather than in things, that normative inflation derives, a process inherent to Law. Norms and processes are, at bottom, nothing but formulas invoked in the attempt to establish this or that nexus between words and things; and all of them invoke, as their presupposition, the very name of Law, that is, the basic norm: that the fiction is true. The formulas, the types, the maxims, in short, legalese, are thus the means by which the fiction is maintained, and by which life, ordinary language, is captured within the sphere of Law at the very moment it is kept at a distance from it.

In Kafka’s fictions, the confrontation, and even the entanglement, of fiction and law is a recurring theme. The unfinished novel The Trial stages this confrontation and entanglement well. At the beginning of the novel, when the officers of the law come to detain the protagonist K., he imagines they are merely a theatrical troupe playing a birthday prank at his friends’ request. At the end, when his executioners arrive to fetch him, K. again wants to believe they are only actors putting on a scene, playing a trick on him. And indeed the whole judicial apparatus narrated in the novel seems one great fiction: dark basements, hearings held in tenements, moribund lawyers. At no point does the Law itself appear; K. never manages to enter the Law. At no point does K. learn what he is accused of. The entire novel is built upon the figure of the mediators, clerks, lawyers, officials, who stage a grandiloquent and pathetic trial, a fiction from which K. could, at any moment, walk away.
Law and legal process are just great fictional narratives; but these stagings, unlike theatrical ones, take lives. Legalese both is and is not a mere performance by certain jurists. It is only the mode of narrating a fiction; but that fiction answers to the name of Law, which captures and reduces life, stripping it of its singularity and reproducing it as a type. To the “if” of the legal prescription corresponds a “then.” A “then” that is absent from true fiction, which is always and only an “as if.”

Language and China’s ‘Practical Creativity’ (N.Y. Times)

AUGUST 22, 2012

By DIDI KIRSTEN TATLOW

Every language presents challenges — English pronunciation can be idiosyncratic and Russian grammar is fairly complex, for example — but non-alphabetic writing systems like Chinese pose special challenges.

There is the well-known issue that Chinese characters don’t systematically map to sounds, making both learning and remembering difficult, a point I examine in my latest column. If you don’t know a character, you can’t even say it.

Nor does Chinese group individual characters into bigger “words,” even when a character is part of a compound, or multi-character, word. That makes meanings ambiguous, a rich source of humor for Chinese people.

Consider this example from Wu Wenchao, a former interpreter for the United Nations based in Hong Kong. On his blog he has a picture of mobile phones’ being held under a hand dryer. Huh?

The joke is that the Chinese word for hand dryer is composed of three characters, “hong shou ji.” (I am using pinyin, a system of Romanization used in China, to “write” the characters in the English alphabet.)

Group them as “hongshou ji” and it means “hand dryer.” Group them as “hong shouji” and it means “dry the mobile phone.” (A shouji is a mobile phone.)
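
The ambiguity is, at bottom, a word-segmentation problem. Here is a minimal Python sketch of it; the tiny lexicon, its English glosses, and the brute-force segmenter are hypothetical, purely for illustration:

# A toy lexicon mapping groups of pinyin syllables to rough glosses.
LEXICON = {
    ("hong",): "to dry",
    ("hong", "shou"): "hand-drying",
    ("shou", "ji"): "mobile phone",
    ("ji",): "machine",
}

def segmentations(syllables):
    # Yield every way of grouping the syllables into lexicon "words".
    if not syllables:
        yield []
        return
    for i in range(1, len(syllables) + 1):
        head = tuple(syllables[:i])
        if head in LEXICON:
            for rest in segmentations(syllables[i:]):
                yield [head] + rest

for seg in segmentations(["hong", "shou", "ji"]):
    print(seg, "->", " + ".join(LEXICON[w] for w in seg))
# [('hong',), ('shou', 'ji')] -> to dry + mobile phone  ("dry the mobile phone")
# [('hong', 'shou'), ('ji',)] -> hand-drying + machine  ("hand dryer")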

Good fodder for serious linguists and amateur language lovers alike. But does a character script also exert deeper effects on the mind?

William C. Hannas is one of the most provocative writers on this today. He believes character writing systems inhibit a type of deep creativity — but that its effects are not irreversible.

He is at pains to point out that his analysis is not race-based, that people raised in a character-based writing system have a different type of creativity, and that they may flourish when they enter a culture that supports deep creativity, like Western science laboratories.

Still, “The rote learning needed to master Chinese writing breeds a conformist attitude and a focus on means instead of ends. Process rules substance. You spend more time fidgeting with the script than thinking about content,” Mr. Hannas wrote to me in an e-mail.

But Mr. Hannas’s argument is indeed controversial — that learning Chinese lessens deep creativity by furthering practical, but not abstract, thinking, as he wrote in “The Writing on the Wall: How Asian Orthography Curbs Creativity,” published in 2003 and reviewed by The New York Times.

It’s a touchy topic that some academics reject outright and others acknowledge, but are reluctant to discuss, as Emily Eakin wrote in the review.

How does it work?

“Alphabets used in the West foster early skills in analysis and abstract thinking,” wrote Mr. Hannas, emphasizing the views were personal and not those of his employer, the U.S. government.

They do this by making readers do two things: breaking syllables into sound segments and clustering these segments into bigger, abstract, flexible sound units.

Chinese characters don’t do that. “The symbols map to syllables — natural concrete units. No analysis is needed and not much abstraction is involved,” Mr. Hannas wrote.

But radical, “type 2” creativity — deep creativity — depends on being able to match abstract patterns from one domain to another, essentially mapping the skills that alphabets nurture, he continued. “There is nothing comparable in the Sinitic tradition,” he wrote.

Will this inhibit China’s long-term development? Does it mean China won’t “take over the world,” as some are wondering? Not necessarily, Mr. Hannas said.

“You don’t need to be creative to succeed. Success goes to the early adapter and this is where China excels, for two reasons,” he wrote. First, Chinese are good at improving existing models, a different, more practical type of creativity, he wrote, adding that this practicality was noted by the British historian of Chinese science, Joseph Needham.

Yet there is a further step to this argument, and this is where Mr. Hannas’s ideas become explosive.

Partly as a result of these cultural constraints, China has built an “absolutely mind-boggling infrastructure” to get hold of cutting-edge foreign technology — by any means necessary, including large-scale, apparently government-backed, computer hacking, he wrote.

For more on that, see a hard-hitting Bloomberg report, “Hackers Linked to China’s Army Seen From E.U. to D.C.”

Non-Chinese R.&D. gets “outsourced” from its place of origin, “while China reaps the gain,” Mr. Hannas wrote, adding that many people believed this was “normal business practice.”

“In fact, it’s far from normal. The director of a U.S. intelligence agency has described China’s informal technology acquisition as ‘the greatest transfer of wealth in history,’ which I regard as a polite understatement,” he said.

Mr. Hannas has co-authored a book on this, to appear in the spring. It promises to shake things up. Watch this space.

Irony Seen Through the Eye of MRI (Science Daily)

ScienceDaily (Aug. 3, 2012) — In the cognitive sciences, the capacity to interpret the intentions of others is called “Theory of Mind” (ToM). This faculty is involved in the understanding of language, in particular by bridging the gap between the meaning of the words that make up a statement and the meaning of the statement as a whole.

In recent years, researchers have identified the neural network dedicated to ToM, but no one had yet demonstrated that this set of neurons is specifically activated by the process of understanding of an utterance. This has now been accomplished: a team from L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) has shown that the activation of the ToM neural network increases when an individual is reacting to ironic statements.

Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.

In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this science, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.

Within the past 10 years, researchers in cognitive neuroscience have identified a neural network dedicated to ToM that includes specific areas of the brain: the right and left temporal parietal junctions, the medial prefrontal cortex and the precuneus. To identify this network, the researchers relied primarily on non-verbal tasks based on the observation of others’ behavior[1]. Today, researchers at L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) have established, for the first time, the link between this neural network and the processing of implicit meanings.

To identify this link, the team focused their attention on irony. An ironic statement usually means the opposite of what is said. In order to detect irony in a statement, the mechanisms of ToM must be brought into play. In their experiment, the researchers prepared 20 short narratives in two versions, one literal and one ironic. Each story contained a key sentence that, depending on the version, yielded an ironic or literal meaning. For example, in one of the stories an opera singer exclaims after a premiere, “Tonight we gave a superb performance.” Depending on whether the performance was in fact very bad or very good, the statement is or is not ironic.

The team then carried out functional magnetic resonance imaging (fMRI) analyses on 20 participants who were asked to read 18 of the stories, chosen at random, in either their ironic or literal version. The participants were not aware that the test concerned the perception of irony. The researchers had predicted that the participants’ ToM neural networks would show increased activity in reaction to the ironic sentences, and that was precisely what they observed: as each key sentence was read, the network activity was greater when the statement was ironic. This shows that this network is directly involved in the processes of understanding irony, and, more generally, in the comprehension of language.
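
The logic of that contrast can be caricatured in a few lines of Python. This is not the authors’ actual analysis pipeline, and the activation numbers are invented; it only shows the shape of the test: one mean ToM-network activation value per participant per condition, compared pairwise.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 20

# Invented per-participant mean ToM-network activations (arbitrary units).
literal = rng.normal(loc=1.0, scale=0.3, size=n_participants)
ironic = literal + rng.normal(loc=0.25, scale=0.2, size=n_participants)

# Paired, one-sided test: is activation reliably greater for ironic sentences?
t, p = stats.ttest_rel(ironic, literal, alternative="greater")
print(f"t({n_participants - 1}) = {t:.2f}, p = {p:.4f}")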

Next, the L2C2 researchers hope to expand their research on the ToM network in order to determine, for example, whether test participants would be able to perceive irony if this network were artificially inactivated.

Note:

[1] For example, Grèzes, Frith & Passingham (J. Neuroscience, 2004) showed a series of short (3.5 second) films in which actors came into a room and lifted boxes. Some of the actors were instructed to act as though the boxes were heavier (or lighter) than they actually were. Having thus set up deceptive situations, the experimenters asked the participants to determine if they had or had not been deceived by the actors in the films. The films containing feigned actions elicited increased activity in the rTPJ (right temporal parietal junction) compared with those containing unfeigned actions.

Journal Reference:

Nicola Spotorno, Eric Koun, Jérôme Prado, Jean-Baptiste Van Der Henst, Ira A. Noveck. Neural evidence that utterance-processing entails mentalizing: The case of irony. NeuroImage, 2012; 63 (1): 25. DOI: 10.1016/j.neuroimage.2012.06.046

It’s Even Less in Your Genes (The New York Review of Books)

MAY 26, 2011
Richard C. Lewontin

The Mirage of a Space Between Nature and Nurture
by Evelyn Fox Keller
Duke University Press, 107 pp., $64.95; $18.95 (paper)

In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.

That tendency, as Evelyn Fox Keller’s new book suggests, is one of the most powerful influences on our scientific understanding. As we change our intent, we also identify anew what is object and what is background. When I glance out the window as I write these lines I notice my neighbor’s car, its size, its shape, its color, and I note that it is parked in a snow bank. My interest then changes to the results of the recent storm and it is the snow that becomes my object of attention with the car relegated to the background of shapes embedded in the snow. What is an object as opposed to background is a mental construct and requires the identification of clear boundaries. As one of my children’s favorite songs reminded them:

You gotta have skin.
All you really need is skin.
Skin’s the thing that if you’ve got it outside,
It helps keep your insides in.

Organisms have skin, but their total environments do not. It is by no means clear how to delineate the effective environment of an organism.

One of the complications is that the effective environment is defined by the life activities of the organism itself. “Fish gotta swim and birds gotta fly,” as we are reminded by yet another popular lyric. Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.

The vulgarization of Darwinism that sees the “struggle for existence” as nothing but the competition for some environmental resource in short supply ignores the large body of evidence about the actual complexity of the relationship between organisms and their resources. First, despite the standard models created by ecologists in which survivorship decreases with increasing population density, the survival of individuals in a population is often greatest not when their “competitors” are at their lowest density but at an intermediate one. That is because organisms are involved not only in the consumption of resources, but in their creation as well. For example, in fruit flies, which live on yeast, the worm-like immature stages of the fly tunnel into rotting fruit, creating more surface on which the yeast can grow, so that, up to a point, the more larvae, the greater the amount of food available. Fruit flies are not only consumers but also farmers.

Second, the presence in close proximity of individual organisms that are genetically different can increase the growth rate of a given type, presumably since they exude growth-promoting substances into the soil. If a rice plant of a particular type is planted so that it is surrounded by rice plants of a different type, it will give a higher yield than if surrounded by its own type. This phenomenon, known for more than a half-century, is the basis of a common practice of mixed-variety rice cultivation in China, and mixed-crop planting has become a method used by practitioners of organic agriculture.

Despite the evidence that organisms do not simply use resources present in the environment but, through their life activities, produce such resources and manufacture their environments, the distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials. As a coauthor of a widely used college textbook of genetics,(1) I have had to engage in a constant struggle with my coauthors over the course of thirty years in order to introduce students to the notion that the relative reproductive fitness of organisms with different genetic makeups may be sensitive to their frequency in the population.

But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.

It has been an agony of biology to find boundaries between parts of organisms that are appropriate for an understanding of particular questions. We murder to dissect. The realization of the complex functional interactions and feedbacks that occur between different metabolic pathways has been a slow and difficult process. We do not have simply an “endocrine system” and a “nervous system” and a “circulatory system,” but “neurosecretory” and “neurocirculatory” systems that become the objects of inquiry because of strong forces connecting them. We may indeed stir a flower without troubling a star, but we cannot stir up a hornet’s nest without troubling our hormones. One of the ironies of language is that we use the term “organic” to imply a complex functional feedback and interaction of parts characteristic of living “organisms.” But musical organs, from which the word was adopted, have none of the complex feedback interactions that organisms possess. Indeed the most complex musical organ has multiple keyboards, pedal arrays, and a huge array of stops precisely so that different notes with different timbres can be played simultaneously and independently.

Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities. Fox Keller is one of the most sophisticated and intelligent analysts of the social and psychological forces that operate in intellectual life and, in particular, of the relation of gender in our society both to the creation and acceptance of scientific ideas. The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science:

If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.(2)

Beginning with her consciousness of the role of gender in influencing the construction of scientific ideas, she has, over the last twenty-five years, considered how language, models, and metaphors have had a determinative role in the construction of scientific explanation in biology.

A major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual.

It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
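
The population-level claim is a simple variance decomposition. A minimal sketch in Python, using made-up numbers that match the 76/24 split above:

# Illustrative variance partition for adult height (shares of variation).
var_genetic = 76.0    # share attributed to genetic variation
var_nutrition = 24.0  # share attributed to nutritional variation

# Population-level reading: abolish all nutritional differences and the
# variation among individuals shrinks by the nutritional share.
remaining = var_genetic / (var_genetic + var_nutrition)
print(f"{remaining:.0%} of the variation would remain")  # 76%

# The individual-level reading has no such meaning: a woman deprived of
# all food would not reach 76 percent of her height; she would have none.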

In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes. She gives as a simple example the sound of two different drummers playing at a distance from us. If each drummer plays each drum for us, we should be able to tell the effect of different drummers as opposed to differences between drums. But she admits that is only true if the drummers themselves do not change their ways of playing when they change drums.

Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1).

There are differences between the varieties in their yield from location to location and there are differences between locations from variety to variety. So, both variety and location matter. But there is no average variation between locations when averaged over varieties or between varieties when averaged over locations. Knowing only the variation in yield associated with location and variety separately does not tell us which factor is the more important source of variation; nor do the facts of location and variety exhaust the description of that variation.

There is a third source of variation called the “interaction,” the variation that cannot be accounted for simply by the separate average effects of location and variety. In our example, no difference appears between the averages of the different varieties or the averages of the different locations, suggesting that neither location nor variety matters to yield. Yet the yields differ when particular combinations of variety and location are observed. These effects of particular combinations of factors, not accounted for by the average effects of each factor separately, are thrown into an unanalyzed category called “interaction,” with no concrete physical model made explicit.
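
Since Table 1 itself is not reproduced above, the following sketch uses invented yields of the kind the passage describes: the averages over rows and over columns are identical, so both main effects vanish, and every bit of the variation lands in the interaction term.

import numpy as np

# Invented 2x2 yield table: rows are varieties, columns are locations,
# entries are bushels of corn produced.
yields = np.array([[10.0, 20.0],   # variety A in locations 1 and 2
                   [20.0, 10.0]])  # variety B in locations 1 and 2

grand = yields.mean()
variety_effects = yields.mean(axis=1) - grand    # average effect of variety
location_effects = yields.mean(axis=0) - grand   # average effect of location
interaction = (yields - grand
               - variety_effects[:, None] - location_effects[None, :])

print(variety_effects)   # [0. 0.]  -> no average difference between varieties
print(location_effects)  # [0. 0.]  -> no average difference between locations
print(interaction)       # [[-5.  5.] [ 5. -5.]]  -> pure interaction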

In real life there will be some difference between the varieties when averaged over locations and some variation between locations when averaged over varieties; but there will also be some interaction variation accounting for the failure of the separately identified main effects to add up to the total variation. In an extreme case, as for example our jungle drummers with a common consciousness of what drums should sound like, it may turn out to be all interaction.

The Mirage of a Space Between Nature and Nurture appears in an era when biological—and specifically, genetic—causation is taken as the preferred explanation for all human physical differences. Although the early and mid-twentieth century was a period of immense popularity of genetic explanations for class and race differences in mental ability and temperament, especially among social scientists, such theories have now virtually disappeared from public view, largely as a result of a considerable effort of biologists to explain the errors of those claims.

The genes for IQ have never been found. Ironically, at the same time that genetics has ceased to be a popular explanation for human intellectual and temperamental differences, genetic theories for the causation of virtually every physical disorder have become the mode. “DNA” has replaced “IQ” as the abbreviation of social import. The announcement in February 2001 that two groups of investigators had sequenced the entire human genome was taken as the beginning of a new era in medicine, an era in which all diseases would be treated and cured by the replacement of faulty DNA. William Haseltine, the chairman of the board of the private company Human Genome Sciences, which participated in the genome project, assured us that “death is a series of preventable diseases.” Immortality, it appeared, was around the corner. For nearly ten years announcements of yet more genetic differences between diseased and healthy individuals were a regular occurrence in the pages of The New York Times and in leading general scientific publications like Science and Nature.

Then, on April 15, 2009, there appeared in The New York Times an article by the influential science reporter and fan of DNA research Nicholas Wade, under the headline “Study of Genes and Diseases at an Impasse.” In the same week the journal Science reported that DNA studies of disease causation had a “relatively low impact.” Both of these articles were instigated by several articles in The New England Journal of Medicine, which had come to the conclusion that the search for genes underlying common causes of mortality had so far yielded virtually nothing useful. The failure to find such genes continues and it seems likely that the search for the genes causing most common diseases will go the way of the search for the genes for IQ.

A major problem in understanding what geneticists have found out about the relation between genes and manifest characteristics of organisms is an overly flexible use of language that creates ambiguities of meaning. In particular, their use of the terms “heritable” and “heritability” is so confusing that an attempt at its clarification occupies the last two chapters of The Mirage of a Space Between Nature and Nurture. When a biological characteristic is said to be “heritable,” it means that it is capable of being transmitted from parents to offspring, just as money may be inherited, although neither is inevitable. In contrast, “heritability” is a statistical concept, the proportion of variation of a characteristic in a population that is attributable to genetic variation among individuals. The implication of “heritability” is that some proportion of the next generation will possess it.

The move from “heritable” to “heritability” is a switch from a qualitative property at the level of an individual to a statistical characterization of a population. Of course, to have a nonzero heritability in a population, a trait must be heritable at the individual level. But it is important to note that even a trait that is perfectly heritable at the individual level might have essentially zero heritability at the population level. If I possess a unique genetic variant that enables me with no effort at all to perform a task that many other people have learned to do only after great effort, then that ability is heritable in me and may possibly be passed on to my children, but it may also be of zero heritability in the population.
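
The distinction can be made concrete with a toy calculation; here is a minimal sketch, with made-up numbers, of a trait that is fully transmissible in one individual while its heritability in the population is essentially zero:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# One carrier of a rare variant that fully determines the ability;
# everyone else's ability varies for non-genetic reasons only.
genetic_value = np.zeros(n)
genetic_value[0] = 10.0
environment = rng.normal(0.0, 1.0, size=n)

trait = genetic_value + environment

# Heritability in the statistical sense: genetic variance / total variance.
h2 = genetic_value.var() / trait.var()
print(f"population heritability is roughly {h2:.4f}")  # about 0.0001

# For the carrier, the variant (and the ability it confers) is heritable
# at the individual level, even though h2 for the population is near zero.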

One of the problems of exploring an intellectual discipline from the outside is that the importance of certain basic methodological considerations is not always apparent to the observer, considerations that mold the entire intellectual structure that characterizes the field. So, in her first chapter, “Nature and Nurture as Alternatives,” Fox Keller writes that “my concern is with the tendency to think of nature and nurture as separable and hence as comparable, as forces to which relative strength can be assigned.” That concern is entirely appropriate for an external critic, and especially one who, like Fox Keller, comes from theoretical physics rather than experimental biology. Experimental geneticists, however, find environmental effects a serious distraction from the study of genetic and molecular mechanisms that are at the center of their interest, so they do their best to work with cases in which environmental effects are at a minimum or in which those effects can be manipulated at will. If the machine model of organisms that underlies our entire approach to the study of biology is to work for us, we must restrict our objects of study to those in which we can observe and manipulate all the gears and levers.

For much of the history of experimental genetics the chief organism of study was the fruit fly, Drosophila melanogaster, in which very large numbers of different gene mutations with visible effects on the form and behavior of the flies had been discovered. The catalog of these mutations provided, in addition to genetic information, a description of the way in which mutant flies differed from normal (“wild type”) and assigned each mutation a “Rank” between 1 and 4. Rank 1 mutations were the most reliable for genetic study because every individual with the mutant genetic type could be easily and reliably recognized by the observer, whereas some proportion of individuals carrying mutations of other ranks could be indistinguishable from normal, depending on the environmental conditions in which they developed. Geneticists, if they could, avoided depending on poorer-rank mutations for their experiments. Only about 20 percent of known mutations were of Rank 1.

With the recent shift from the study of classical genes in controlled breeding experiments to the sequencing of DNA as the standard method of genetic study, the situation has gotten much worse. On the one hand, about 99 percent of the DNA in a cell is of completely unknown functional significance and any two unrelated individuals will differ from each other at large numbers of DNA positions. On the other hand, the attempt to assign the causes of particular diseases and metabolic malfunctions in humans to specific mutations has been a failure, with the exception of a few classical cases like sickle-cell anemia. The study of genes for specific diseases has indeed been of limited value. The reason for that limited value is in the very nature of genetics as a way of studying organisms.

Genetics, from its very beginning, has been a “subtractive” science. That is, it is based on the analysis of the difference between natural or “wild-type” organisms and those with some genetic defect that may interfere in some observable way with regular function. But to carry out such comparison it is necessary that the organisms being studied are, to the extent possible, identical in all other respects, and that the comparison is carried out in an environment that does not, itself, generate atypical responses yet allows the possible effect of the genetic perturbation to be observed. We must face the possibility that such a subtractive approach will never be able to reveal the way in which nature and nurture interact in normal circumstances.

An alternative to the standard subtractive method of genetic perturbations would be a synthetic approach in which living systems would be constructed ab initio from their molecular elements. It is now clear that most of the DNA in an organism is not contained in genes in the usual sense. That is, 98–99 percent of the DNA is not a code for a sequence of amino acids that will be assembled into long chains that will fold up to become the proteins that are essential to the formation of organisms; yet that nongenic DNA is transmitted faithfully from generation to generation just like the genic DNA.

It appears that the sequence of this nongenic DNA, which used to be called “junk-DNA,” is concerned with regulating how often, when, and in which cells the DNA of genes is read in order to produce the long strings of amino acids that will be folded into proteins and which of the many alternative possible foldings will occur. As the understanding and possibility of control of the synthesis of the bits and pieces of living cells become more complete, the temptation to create living systems from elementary bits and pieces will become greater and greater. Molecular biologists, already intoxicated with their ability to manipulate life at its molecular roots, are driven by the ambition to create it. The enterprise of “Synthetic Biology” is already in existence.

In May 2010 the consortium originally created by J. Craig Venter to sequence the human genome gave birth to a new organization, Synthetic Genomics, which announced that it had created an organism by implanting a synthetic genome in a bacterial cell whose own original genome had been removed. The cell then proceeded to carry out the functions of a living organism, including reproduction. One may argue that the hardest work, putting together all the rest of the cell from bits and pieces, is still to be done before it can be said that life has been manufactured, but even Victor Frankenstein started with a dead body. We all know what the consequences of that may be.

1. Anthony J.F. Griffiths, Susan R. Wessler, Sean B. Carroll, and Richard C. Lewontin, Introduction to Genetic Analysis, ninth edition (W.H. Freeman, 2008).

2. The Scientist, Vol. 5, No. 1 (January 7, 1991).