Tag archive: Semiótica

Why Are Spy Researchers Building a ‘Metaphor Program’? (The Atlantic)

MAY 25 2011, 4:19 PM ET

ALEXIS MADRIGAL – Alexis Madrigal is a senior editor at The Atlantic. He’s the author of Powering the Dream: The History and Promise of Green Technology.
A small research arm of the U.S. government’s intelligence establishment wants to understand how speakers of Farsi, Russian, English, and Spanish see the world by building software that automatically evaluates their use of metaphors. That’s right, metaphors, like Shakespeare’s famous line, “All the world’s a stage,” or more subtly, “The darkness pressed in on all sides.” Every speaker in every language in the world uses them effortlessly, and the Intelligence Advanced Research Projects Activity wants to know how what we say reflects our worldviews. They call it The Metaphor Program, and it is a unique effort within the government to probe how a people’s language reveals their mindset.

“The Metaphor Program will exploit the fact that metaphors are pervasive in everyday talk and reveal the underlying beliefs and worldviews of members of a culture,” declared an open solicitation for researchers released last week. A spokesperson for IARPA declined to comment at the time.

IARPA wants some computer scientists with experience in processing language in big chunks to come up with methods of pulling out a culture’s relationship with particular concepts. “They really are trying to get at what people think using how they talk,” Benjamin Bergen, a cognitive scientist at the University of California, San Diego, told me. Bergen is one of a dozen or so lead researchers who are expected to vie for a research grant that could be worth tens of millions of dollars over five years, if the teams can show progress towards automatically tagging and processing metaphors across languages.

“IARPA grants are big,” said Jennifer Carter of Applied Research Associates, a 1,600-strong research company that may throw its hat in the Metaphor ring after winning a lead research spot in a separate IARPA solicitation. While the precise value of the awards is not public and the contracts are believed to vary widely, they tend to support several large teams of multidisciplinary researchers, Carter said. The awards, which would initially go to several teams, could range into the five digits annually. “Generally what happens… there will be a ‘downselect’ each year, so maybe only one team will get money for the whole program,” she said.*

All of this is to say: The Metaphor Program may represent a nine-figure investment by the government in understanding how people use language. But that’s because metaphor studies aren’t light or frilly, and IARPA isn’t afraid of taking on unusual-sounding projects if they think they might help intelligence analysts sort through and decode the tremendous amounts of data pouring into their minds.

In a presentation to prospective research “performers,” as they’re known, The Metaphor Program’s manager, Heather McCallum-Bayliss, gave the following example of the power of metaphors in political discussions. Her slide reads:

Metaphors shape how people think about complex topics and can influence beliefs. A study presented participants with a report on crime in a city; they were asked how crime should be addressed in the city. The report contained statistics, including crime and murder rates, as well as one of two metaphors, CRIME AS A WILD BEAST or CRIME AS A VIRUS. The participants were influenced by the embedded metaphor…

McCallum-Bayliss appears to be referring to a 2011 paper published in PLoS ONE, “Metaphors We Think With: The Role of Metaphor in Reasoning,” lead-authored by Stanford’s Paul Thibodeau. In that case, if people were given the crime-as-a-virus framing, they were more likely to suggest social reform and less likely to suggest more law enforcement or harsher punishments for criminals. The differences generated by the metaphor alternatives “were larger than those that exist between Democrats and Republicans, or between men and women,” the study authors noted.

Every writer (and reader) knows that there are clues to how people think and ways to influence each other through our use of words. Metaphor researchers, of whom there are a surprising number and variety, have formalized many of these intuitions into whole branches of cognitive linguistics using studies like the one outlined above (more on that later). But what IARPA’s project calls for is the deployment of spy resources against an entire language. Where you or I might parse a sentence, this project wants to parse, say, all the pages in Farsi on the Internet looking for hidden levers into the consciousness of a people.

“The study of language offers a strategic opportunity for improved counterterrorist intelligence, in that it enables the possibility of understanding of the Other’s perceptions and motivations, be he friend or foe,” the two authors of Computational Methods for Counterterrorism wrote. “As we have seen, linguistic expressions have levels of meaning beyond the literal, which it is critical to address. This is true especially when dealing with texts from a high-context traditionalist culture such as those of Islamic terrorists and insurgents.”

In the first phase of the IARPA program, the researchers would simply try to map from the metaphors a language uses to the general affect associated with a concept like “journey” or “struggle.” These metaphors would then be stored in a metaphor repository. In a later stage, the Metaphor Program scientists will be expected to help answer questions like, “What are the perspectives of Pakistan and India with respect to Kashmir?” by using their metaphorical probes into the cultures. Perhaps, a slide from IARPA suggests, metaphors can tell us something about the way Indians and Pakistanis view the role of Britain or the concept of the “nation” or “government.”

The assumption is that common turns of phrase, dissected and reassembled through cognitive linguistics, could say something about the views of those citizens that they might not be able to say themselves. The language of a culture as reflected in a bunch of text on the Internet might hide secrets about the way people think that are so valuable that spies are willing to pay for them.

MORE THAN WORDS

IARPA is modeled on the famed DARPA — progenitors of the Internet among other wonders — and tasked with doing high-risk, high-reward research for the many agencies, the NSA and CIA among them, that make up the American intelligence-gathering force. IARPA is, as you might expect, a low-profile organization. Little information is available from the organization aside from a couple of interviews that its administrator, Lisa Porter, a former NASA official, gave back in 2008 to Wired and IEEE Spectrum. Neither publication can avoid joking that the agency is like James Bond’s famous research crew, but it turns out that the place is more likely to use “cloak-and-dagger” in a sentence than in actual combat with supervillainy.

A major component of the agency’s work is data mining and analysis. IARPA is split into three program offices with distinct goals: Smart Collection “to dramatically improve the value of collected data from all sources”; Incisive Analysis “to maximize insight from the information we collect, in a timely fashion”; and Safe & Secure Operations “to counter new capabilities implemented by our adversaries that would threaten our ability to operate freely and effectively in a networked world.” The Metaphor Program falls under the office of Incisive Analysis and is headed by the aforementioned McCallum-Bayliss, a former technologist at Lockheed Martin and IBM, who co-filed several patents relating to the processing of names in databases.

Incisive Analysis has put out several calls for other projects. They range widely in scope and domain. The Babel Program seeks to “demonstrate the ability to generate a speech transcription system for any new language within one week to support keyword search performance for effective triage of massive amounts of speech recorded in challenging real-world situations.” ALADDIN aims to create software to automatically monitor massive amounts of video. The FUSE Program is trying to “develop automated methods that aid in the systematic, continuous, and comprehensive assessment of technical emergence” using the scientific and patent literature.

All three projects are technologically exciting, but none has the poetic ring or the smell of humanity of The Metaphor Program. The Metaphor Program wants to understand what human beings mean through the unvoiced emotional inflection of our words. That’s normally the work of an examined life, not a piece of spy software.

There is some precedent for the work. It comes from two directions: cognitive linguistics and natural language processing. On the cognitive linguistic side, George Lakoff and Mark Johnson of the University of California, Berkeley, did the foundational work, notably in their 1980 book, Metaphors We Live By. As summarized recently by Zoltán Kövecses in his book, Metaphor: A Practical Introduction, Lakoff and Johnson showed that metaphors weren’t just the devices of writers but rather “a valuable cognitive tool without which neither poets nor you and I as ordinary people could live.”

In this school of cognitive linguistics, we need to use more embodied, concrete domains in order to describe more abstract ones. Researchers assembled the linguistic expressions we use like “That class gave me food for thought” and “His idea was half-baked” into a construct called a “conceptual category.” These come in the form of awesomely simple sentences like “Ideas Are Food.” And there are whole great lists of them. (My favorites: Darkness Is a Solid; Time Is Something Moving Toward You; Happiness Is Fluid In a Container; Control Is Up.) The conceptual categories show that humans use one domain (“the source”) to describe another (“the target”). So, take Ideas Are Food: thinking is preparing food and understanding is digestion and believing is swallowing and learning is eating and communicating is feeding. Put simply: We import the logic of the source domain into the target domain.

The main point here is that metaphors, in this sense, aren’t soft or literary in any narrow sense. Rather, they are a deep and fundamental way that humans make sense of the world. And unfortunately for spies who want to filter the Internet to look for dangerous people, computers can’t make much sense out of sentences like, “We can make beautiful music together,” which Google translates as something about actually playing music when, of course, it really means, “We can be good together.” (Or as the conceptual category would phrase it: “Interpersonal Harmony Is Musical Harmony.”)

While some of the underlying structures of the metaphors — the conceptual categories — are near universal (e.g. Happy Is Up), there are many variations in their range, elaboration, and emphasis. And, of course, not every category is universal. For example, Kövecses points to a special conceptual category in Japanese centered around the hara, or belly, “Anger Is (In The) Hara.” In Zulu, one finds an important category, “Anger Is (Understood As Being) In the Heart,” which would be rare in English. Alternatively, while many cultures conceive of anger as a hot fluid in a container, it’s in English that we “blow off steam,” a turn of phrase that wouldn’t make sense in Zulu.

These relationships have been painstakingly mapped by human analysts over the last 30 years and they represent a deep culturolinguistic knowledge base. For the cognitive linguistic school, all of these uses of language reveal something about the way the people of a culture understand each other and the world. And that’s really the target of the metaphor program, and what makes it unprecedented. They’re after a deeper understanding of the way people use words because the deep patterns encoded in language may help intelligence analysts understand the people, not just the texts.

For Lakoff, it’s about time that the government started taking metaphor seriously. “There have been 30 years of neglect of current linguistics in all government-sponsored research,” he told me. “And finally there is somebody in the government who has managed to do something after many years of trying.”

UC San Diego’s Bergen agreed. “It’s a totally unique project,” he said. “I’ve never seen anything like it.”

But that doesn’t mean it’s going to be easy to create a system that can automatically deduce Americans’ biases about education from a statement like “The teacher spoon-fed the students.”

Lakoff contends that it will take a long, sustained effort by IARPA (or anyone else) to complete the task. “The quick-and-dirty way” won’t work, he said. “Are they going to do a serious scientific account?”

BUILDING A METAPHOR MACHINE

The metaphor problem is particularly difficult because we don’t even know what the right answers to our queries are, Bergen said.

“If you think about other sorts of automation of language processing, there are right answers,” he said. “In speech recognition, you know what the word should be. So you can do statistical learning. You use humans, tag up a corpus and then run some machine learning algorithms on that. Unfortunately, here, we don’t know what the right answers are.”

For one, we don’t really have a stable way of telling what is and what is not metaphorical language. And metaphorical language is changing all the time. Parsing text for metaphors is tough work for humans and we’re made for it. The kind of intensive linguistic analysis that’s made Lakoff and his students (of whom Bergen was one) famous can take a human two hours for every 500 words on the page.

But it’s that very difficulty that makes people want to deploy computing resources instead of human beings. And they do have some directions that they could take. James Martin of the University of Colorado played a key role in the late 1980s and early 1990s in defining the problem and suggesting a solution. In a 1988 paper, Martin contended that “the interpretation of novel metaphors can be accomplished through the systematic extension, elaboration, and combination of knowledge about already well-understood metaphors.”

What that means is that within a given domain — say, “the family” in Arabic — you can start to process text around that. First you’ll have humans go in and tag up the data, finding the metaphors. Then, you’d use what they learned about the target domain “family” to look for metaphorical words that are often associated with it. Then, you run permutations on those words from the source domain to find other metaphors you might not have before. Eventually you build up a repository of metaphors in Arabic around the domain of family.
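A minimal sketch of that bootstrapping loop might look like the following. Everything in it, the seed metaphors, the toy corpus, and the co-occurrence counting, is invented for illustration; it is not IARPA’s or Martin’s actual pipeline.

```python
# Toy sketch of the repository-bootstrapping loop described above. The seed
# metaphors, the corpus, and the counting are invented for illustration; this
# is not IARPA's or Martin's actual pipeline.
from collections import Counter

TARGET = "family"
seed_sources = {"tree", "building"}  # step 1: human-tagged source domains

def cooccurring_words(corpus, target):
    """Step 2: count words appearing in sentences that mention the target."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().rstrip(".").split()
        if target in tokens:
            counts.update(t for t in tokens if t != target)
    return counts

corpus = [
    "The family tree has many branches.",
    "Their family rests on a strong foundation.",
    "The roots of the family run deep.",
]

# Step 3: rank new candidate source-domain words for human review before
# they enter the repository (a real system would filter stopwords,
# lemmatize, and work at a far larger scale).
candidates = [w for w, _ in cooccurring_words(corpus, TARGET).most_common()
              if w not in seed_sources]
print(candidates)
```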

Of course, that’s not exactly what IARPA’s looking for, but it’s where the research teams will be starting. To get better results, they will have to start to learn a lot more about the relationships between the words in the metaphors. For Lakoff, that means understanding the frames and logics that inform metaphors and structure our thinking as we use them. For Bergen, it means refining the rules by which software can process language. There are three levels of analysis that would then be combined. First, you could know something about the metaphorical bias of an individual word. “Crossroads,” for example, is generally used in metaphorical terms. Second, words in close proximity might generate a bias, too. “Knockout in the same clause as ‘she’ has a much higher probability of being metaphorical if it’s in close proximity to ‘he,’” Bergen offered as an example. Third, for certain topics, certain words become more active for metaphorical usage. The economy’s movement, for example, probably maps to a source domain of motion through space. So, “accelerate” used to describe something about the economy is probably metaphorical. Create a statistical model to combine the outputs of those three processes and you’ve got a brute-force method for identifying metaphors in a text.
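A brute-force combination of those three signals could be as simple as a weighted score. In the sketch below, every lexicon entry and weight is an invented placeholder; a real system would estimate them from human-tagged corpora.

```python
# Toy scorer combining the three signals: per-word bias, nearby-word cues,
# and topic cues. All lexicon entries and weights are invented placeholders;
# a real system would learn them from human-tagged corpora.
WORD_BIAS = {"crossroads": 0.8, "knockout": 0.4, "accelerate": 0.3}  # signal 1
NEARBY_CUES = {("knockout", "she"): 0.4}                             # signal 2
TOPIC_CUES = {("economy", "accelerate"): 0.5}                        # signal 3

def metaphor_score(word, clause_words, topic):
    """Combine the three signals into one metaphoricity score in [0, 1]."""
    score = WORD_BIAS.get(word, 0.1)
    score += max((v for (w, cue), v in NEARBY_CUES.items()
                  if w == word and cue in clause_words), default=0.0)
    score += TOPIC_CUES.get((topic, word), 0.0)
    return min(score, 1.0)

# "The economy accelerated last quarter": word bias plus topic cue -> flagged.
print(metaphor_score("accelerate", {"the", "economy", "last", "quarter"}, "economy"))
```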

In this particular competition, there will be more nuanced approaches based on parsing the more general relationships between words in text: sorting out which are nouns and how they connect to verbs, etc. “If you have that information, then you can find parts of sentences that don’t look like they should be there,” Bergen explained. A classic kind of identifier would be a type mismatch. “If I am the verb ‘smile,’ I like to have a subject that has a face,” he said. If something without a face is smiling, it might be an indication that some kind of figurative language is being employed.
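Using an off-the-shelf dependency parser such as spaCy, a crude version of that type-mismatch check might look like this. The verb table and the animate-noun list are tiny invented stand-ins for a real lexical resource.

```python
# Crude selectional-restriction ("type mismatch") check, sketching the idea
# Bergen describes. Requires spaCy and its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

EXPECTS_ANIMATE_SUBJECT = {"smile", "grin"}           # verbs that want a face
ANIMATE = {"person", "man", "woman", "child", "dog"}  # stand-in ontology

def flag_mismatches(text):
    """Yield (subject, verb) pairs where a faceless subject smiles, etc."""
    for token in nlp(text):
        if token.lemma_ in EXPECTS_ANIMATE_SUBJECT:
            for child in token.children:
                if child.dep_ == "nsubj" and child.lemma_ not in ANIMATE:
                    yield child.text, token.text  # likely figurative language

print(list(flag_mismatches("The stock market smiled on investors.")))
# e.g. [('market', 'smiled')]: something without a face is smiling
```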

From these constituent parts — and whatever other wild stuff people cook up — the teams will try to build a metaphor machine that can convert a language into underlying truths about a culture. Feed text in one end and wait on the other end of the Rube Goldberg software for a series of beliefs about family or America or power.

We might never be able to build such a thing. Indeed, I get the feeling that we can’t, at least not yet. But what if we can?

“Are they going to use it wisely?” Lakoff posed. “Because using it to detect terrorists is not a bad idea, but then the question is: Are they going to use it to spy on us?”

I don’t know, but I know that as an American I think through these metaphors: Problem Is a Target; Society Is a Body; Control Is Up.

* This section of the story was updated to more accurately reflect the intent of Carter’s statement.

Kari Norgaard on climate change denial

Understanding the climate ostrich

BBC News, 15 November 07
By Kari Marie Norgaard
Whitman College, US

Why do people find it hard to accept the increasingly firm messages that climate change is a real and significant threat to livelihoods? Here, a sociologist unravels some of the issues that may lie behind climate scepticism.

“I spent a year doing interviews and ethnographic fieldwork in a rural Norwegian community recently.

In winter, the signs of climate change were everywhere – glaringly apparent in an unfrozen lake, the first ever use of artificial snow at the ski area, and thousands of dollars in lost tourist revenues.

Yet as a political issue, global warming was invisible.

The people I spoke with expressed feelings of deep concern and caring, and a significant degree of ambivalence about the issue of global warming.

This was a paradox. How could the possibility of climate change be both deeply disturbing and almost completely invisible – simultaneously unimaginable and common knowledge?

Self-protection
People told me many reasons why it was difficult to think about this issue. In the words of one man, who held his hands in front of his eyes as he spoke, “people want to protect themselves a bit.”

Community members described fears about the severity of the situation, of not knowing what to do, fears that their way of life was in question, and concern that the government would not adequately handle the problem.

They described feelings of guilt for their own actions, and the difficulty of discussing the issue of climate change with their children.

In some sense, not wanting to know was connected to not knowing how to know. Talking about global warming went against conversation norms.

It wasn’t a topic that people were able to speak about with ease – rather, overall it was an area of confusion and uncertainty. Yet feeling this confusion and uncertainty went against emotional norms of toughness and maintaining control.

Other community members described this sense of knowing and not knowing, of having information but not thinking about it in their everyday lives.

As one young woman told me: “In the everyday I don’t think so much about it, but I know that environmental protection is very important.”

Security risk
The majority of us are now familiar with the basics of climate change.

Worst case scenarios threaten the very basics of our social, political and economic infrastructure.

Yet there has been less response to this environmental problem than any other. Here in the US it seems that only now are we beginning to take it seriously.

How can this be? Why have so few of us engaged in any of the range of possible actions, from reducing our airline travel to pressurising our governments and industries to cut emissions, or even talking about it with our family and friends in more than a passing manner?

Indeed, why would we want to know this information?

Why would we want to believe that scenarios of melting Arctic ice and spreading diseases that appear to spell ecological and social demise are in store for us; or even worse, that we see such effects already?

Information about climate change is deeply disturbing. It threatens our sense of individual identity and our trust in our government’s ability to respond.

At the deepest level, large scale environmental problems such as global warming threaten people’s sense of the continuity of life – what sociologist Anthony Giddens calls ontological security.

Thinking about global warming is also difficult for those of us in the developed world because it raises feelings of guilt. We are now aware of how driving automobiles and flying to exotic warm vacations contributes to the problem, and we feel guilty about it.

Tactful denial
If being aware of climate change is an uncomfortable condition which people are motivated to avoid, what happens next?

After all, ignoring the obvious can take a lot of work.

In the Norwegian community where I worked, collectively holding information about global warming at arm’s length took place by participating in cultural norms of attention, emotion, and conversation, and by using a series of cultural narratives to deflect disturbing information and normalise a particular version of reality in which “everything is fine.”

When what a person feels is different from what they want to feel, or are supposed to feel, they usually engage in what sociologists call emotional management.

We have a whole repertoire of techniques or “tools” for ignoring this and other disturbing problems.

As sociologist Eviatar Zerubavel makes clear in his work on the social organisation of denial and secrecy, the means by which we manage to ignore the disturbing realities in front of us are also collectively shaped.

How we cope, how we respond, or how we fail to respond are social as well.

Social rules of focusing our attention include rules of etiquette that involve tact-related ethical obligations to “look the other way” and ignore things we most likely would have noticed about others around us.

Indeed, in many cases, merely following our cultural norms of acceptable conversation and emotional expression serves to keep our attention safely away from that pesky topic of climate change.

Emotions of fear and helplessness can be managed through the use of selective attention; controlling one’s exposure to information, not thinking too far into the future and focusing on something that could be done.

Selective attention can be used to decide what to think about or not to think about, for example screening out painful information about problems for which one does not have solutions: “I don’t really know what to do, so I just don’t think about that”.

The most effective way of managing unpleasant emotions, such as fear for one’s children, seems to be turning our attention to something else, or focusing attention onto something positive.

Hoodwinking ourselves?
Until recently, the dominant explanation within my field of environmental sociology for why people failed to confront climate change was that they were too poorly informed.

Others posit that Americans are simply too greedy or too individualistic, or suffer from incorrect mental models.

Psychologists have described “faulty” decision-making powers such as “confirmation bias”, and argue that with more appropriate analogies we will be able to manage the information and respond.

Political economists, on the other hand, tell us that we’ve been hoodwinked by increased corporate control of media that limits and moulds available information about global warming.

These are clearly important answers.

Yet the fact that nobody wants information about climate change to be true is a critical piece of the puzzle that also happens to fit perfectly with the agenda of those who have tried to generate climate scepticism.”

Dr Kari Marie Norgaard is a sociologist at Whitman College in Walla Walla, Washington state, US.

See also A Dialog Between Renee Lertzman and Kari Norgaard.

It’s Even Less in Your Genes (The New York Review of Books)

MAY 26, 2011
Richard C. Lewontin

The Mirage of a Space Between Nature and Nurture
by Evelyn Fox Keller
Duke University Press, 107 pp., $64.95; $18.95 (paper)

In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.

That tendency, as Evelyn Fox Keller’s new book suggests, is one of the most powerful influences on our scientific understanding. As we change our intent, we also identify anew what is object and what is background. When I glance out the window as I write these lines I notice my neighbor’s car, its size, its shape, its color, and I note that it is parked in a snow bank. My interest then changes to the results of the recent storm and it is the snow that becomes my object of attention, with the car relegated to the background of shapes embedded in the snow. What is an object as opposed to background is a mental construct and requires the identification of clear boundaries. As one of my children’s favorite songs reminded them:

You gotta have skin.
All you really need is skin.
Skin’s the thing that if you’ve got it outside,
It helps keep your insides in.

Organisms have skin, but their total environments do not. It is by no means clear how to delineate the effective environment of an organism.

One of the complications is that the effective environment is defined by the life activities of the organism itself. “Fish gotta swim and birds gotta fly,” as we are reminded by yet another popular lyric. Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.

The vulgarization of Darwinism that sees the “struggle for existence” as nothing but the competition for some environmental resource in short supply ignores the large body of evidence about the actual complexity of the relationship between organisms and their resources. First, despite the standard models created by ecologists in which survivorship decreases with increasing population density, the survival of individuals in a population is often greatest not when their “competitors” are at their lowest density but at an intermediate one. That is because organisms are involved not only in the consumption of resources, but in their creation as well. For example, in fruit flies, which live on yeast, the worm-like immature stages of the fly tunnel into rotting fruit, creating more surface on which the yeast can grow, so that, up to a point, the more larvae, the greater the amount of food available. Fruit flies are not only consumers but also farmers.

Second, the presence in close proximity of individual organisms that are genetically different can increase the growth rate of a given type, presumably since they exude growth-promoting substances into the soil. If a rice plant of a particular type is planted so that it is surrounded by rice plants of a different type, it will give a higher yield than if surrounded by its own type. This phenomenon, known for more than a half-century, is the basis of a common practice of mixed-variety rice cultivation in China, and mixed-crop planting has become a method used by practitioners of organic agriculture.

Despite the evidence that organisms do not simply use resources present in the environment but, through their life activities, produce such resources and manufacture their environments, the distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials. As a coauthor of a widely used college textbook of genetics,(1) I have had to engage in a constant struggle with my coauthors over the course of thirty years in order to introduce students to the notion that the relative reproductive fitness of organisms with different genetic makeups may be sensitive to their frequency in the population.

But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.

It has been an agony of biology to find boundaries between parts of organisms that are appropriate for an understanding of particular questions. We murder to dissect. The realization of the complex functional interactions and feedbacks that occur between different metabolic pathways has been a slow and difficult process. We do not have simply an “endocrine system” and a “nervous system” and a “circulatory system,” but “neurosecretory” and “neurocirculatory” systems that become the objects of inquiry because of strong forces connecting them. We may indeed stir a flower without troubling a star, but we cannot stir up a hornet’s nest without troubling our hormones. One of the ironies of language is that we use the term “organic” to imply a complex functional feedback and interaction of parts characteristic of living “organisms.” But musical organs, from which the word was adopted, have none of the complex feedback interactions that organisms possess. Indeed the most complex musical organ has multiple keyboards, pedal arrays, and a huge array of stops precisely so that different notes with different timbres can be played simultaneously and independently.

Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities. Fox Keller is one of the most sophisticated and intelligent analysts of the social and psychological forces that operate in intellectual life and, in particular, of the relation of gender in our society both to the creation and acceptance of scientific ideas. The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science:

If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.(2)

Beginning with her consciousness of the role of gender in influencing the construction of scientific ideas, she has, over the last twenty-five years, considered how language, models, and metaphors have had a determinative role in the construction of scientific explanation in biology.

A major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual.

It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
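In the standard quantitative-genetics notation, which the review leaves implicit, the coherent population-level claim is a partition of variance, not a partition of any one person’s height:

```latex
% Population-level partition of phenotypic variance (coherent in principle):
\[
V_P = V_G + V_E, \qquad \frac{V_G}{V_P} = 0.76, \qquad \frac{V_E}{V_P} = 0.24.
\]
% The individual-level reading the review rejects would instead partition one
% woman's height itself, implying she would reach 0.76 of her height with no
% nutrition at all.
```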

In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes. She gives as a simple example the sound of two different drummers playing at a distance from us. If each drummer plays each drum for us, we should be able to tell the effect of different drummers as opposed to differences between drums. But she admits that is only true if the drummers themselves do not change their ways of playing when they change drums.

Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1).

There are differences between the varieties in their yield from location to location, and there are differences between locations from variety to variety. So both variety and location matter. But there is no average variation between locations when averaged over varieties, or between varieties when averaged over locations. Knowing the variation in yield associated with location and variety separately does not tell us which factor is the more important source of variation; nor do the facts of location and variety exhaust the description of that variation.
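One hypothetical set of yields with exactly this pattern (illustrative numbers only, not the review’s actual Table 1):

```latex
% Invented yields (bushels), chosen only to match the pattern in the text.
\[
\begin{array}{l|cc|c}
                 & \text{Location 1} & \text{Location 2} & \text{mean} \\ \hline
\text{Variety A} & 20 & 10 & 15 \\
\text{Variety B} & 10 & 20 & 15 \\ \hline
\text{mean}      & 15 & 15 &    \\
\end{array}
\]
% Each variety's yield changes with location and vice versa, yet the row and
% column means coincide: the separate average effects vanish entirely.
```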

There is a third source of variation, called the “interaction”: the variation that cannot be accounted for simply by the separate average effects of location and variety. In our example no difference appears between the averages of different varieties or the averages of different locations, suggesting that neither location nor variety matters to yield. Yet the yields of corn differ when particular combinations of variety and location are observed. These effects of particular combinations of factors, not accounted for by the average effects of each factor separately, are thrown into an unanalyzed category called “interaction,” with no concrete physical model made explicit.

In real life there will be some difference between the varieties when averaged over locations and some variation between locations when averaged over varieties; but there will also be some interaction variation accounting for the failure of the separately identified main effects to add up to the total variation. In an extreme case, as for example our jungle drummers with a common consciousness of what drums should sound like, it may turn out to be all interaction.

The Mirage of a Space Between Nature and Nurture appears in an era when biological—and specifically, genetic—causation is taken as the preferred explanation for all human physical differences. Although the early and mid-twentieth century was a period of immense popularity of genetic explanations for class and race differences in mental ability and temperament, especially among social scientists, such theories have now virtually disappeared from public view, largely as a result of a considerable effort of biologists to explain the errors of those claims.

The genes for IQ have never been found. Ironically, at the same time that genetics has ceased to be a popular explanation for human intellectual and temperamental differences, genetic theories for the causation of virtually every physical disorder have become the mode. “DNA” has replaced “IQ” as the abbreviation of social import. The announcement in February 2001 that two groups of investigators had sequenced the entire human genome was taken as the beginning of a new era in medicine, an era in which all diseases would be treated and cured by the replacement of faulty DNA. William Haseltine, the chairman of the board of the private company Human Genome Sciences, which participated in the genome project, assured us that “death is a series of preventable diseases.” Immortality, it appeared, was around the corner. For nearly ten years announcements of yet more genetic differences between diseased and healthy individuals were a regular occurrence in the pages of The New York Times and in leading general scientific publications like Science and Nature.

Then, on April 15, 2009, there appeared in The New York Times an article by the influential science reporter and fan of DNA research Nicholas Wade, under the headline “Study of Genes and Diseases at an Impasse.” In the same week the journal Science reported that DNA studies of disease causation had a “relatively low impact.” Both of these articles were instigated by several articles in The New England Journal of Medicine, which had come to the conclusion that the search for genes underlying common causes of mortality had so far yielded virtually nothing useful. The failure to find such genes continues and it seems likely that the search for the genes causing most common diseases will go the way of the search for the genes for IQ.

A major problem in understanding what geneticists have found out about the relation between genes and manifest characteristics of organisms is an overly flexible use of language that creates ambiguities of meaning. In particular, their use of the terms “heritable” and “heritability” is so confusing that an attempt at its clarification occupies the last two chapters of The Mirage of a Space Between Nature and Nurture. When a biological characteristic is said to be “heritable,” it means that it is capable of being transmitted from parents to offspring, just as money may be inherited, although neither is inevitable. In contrast, “heritability” is a statistical concept, the proportion of variation of a characteristic in a population that is attributable to genetic variation among individuals. The implication of “heritability” is that some proportion of the next generation will possess it.
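In the standard symbolic shorthand, which the review states only in words, heritability is

```latex
\[
h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E},
\]
% a property of variation in a population, whereas "heritable" is a
% qualitative property of an individual's trait.
```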

The move from “heritable” to “heritability” is a switch from a qualitative property at the level of an individual to a statistical characterization of a population. Of course, to have a nonzero heritability in a population, a trait must be heritable at the individual level. But it is important to note that even a trait that is perfectly heritable at the individual level might have essentially zero heritability at the population level. If I possess a unique genetic variant that enables me with no effort at all to perform a task that many other people have learned to do only after great effort, then that ability is heritable in me and may possibly be passed on to my children, but it may also be of zero heritability in the population.

One of the problems of exploring an intellectual discipline from the outside is that the importance of certain basic methodological considerations is not always apparent to the observer, considerations that mold the entire intellectual structure that characterizes the field. So, in her first chapter, “Nature and Nurture as Alternatives,” Fox Keller writes that “my concern is with the tendency to think of nature and nurture as separable and hence as comparable, as forces to which relative strength can be assigned.” That concern is entirely appropriate for an external critic, and especially one who, like Fox Keller, comes from theoretical physics rather than experimental biology. Experimental geneticists, however, find environmental effects a serious distraction from the study of genetic and molecular mechanisms that are at the center of their interest, so they do their best to work with cases in which environmental effects are at a minimum or in which those effects can be manipulated at will. If the machine model of organisms that underlies our entire approach to the study of biology is to work for us, we must restrict our objects of study to those in which we can observe and manipulate all the gears and levers.

For much of the history of experimental genetics the chief organism of study was the fruit fly, Drosophila melanogaster, in which very large numbers of different gene mutations with visible effects on the form and behavior of the flies had been discovered. The catalog of these mutations described, in addition to genetic information, a description of the way in which mutant flies differed from normal (“wild type”) and assigned each mutation a “Rank” between 1 and 4. Rank 1 mutations were the most reliable for genetic study because every individual with the mutant genetic type could be easily and reliably recognized by the observer, whereas some proportion of individuals carrying mutations of other ranks could be indistinguishable from normal, depending on the environmental conditions in which they developed. Geneticists, if they could, avoided depending on poorer-rank mutations for their experiments. Only about 20 percent of known mutations were of Rank 1.

With the recent shift from the study of classical genes in controlled breeding experiments to the sequencing of DNA as the standard method of genetic study, the situation has gotten much worse. On the one hand, about 99 percent of the DNA in a cell is of completely unknown functional significance and any two unrelated individuals will differ from each other at large numbers of DNA positions. On the other hand, the attempt to assign the causes of particular diseases and metabolic malfunctions in humans to specific mutations has been a failure, with the exception of a few classical cases like sickle-cell anemia. The study of genes for specific diseases has indeed been of limited value. The reason for that limited value is in the very nature of genetics as a way of studying organisms.

Genetics, from its very beginning, has been a “subtractive” science. That is, it is based on the analysis of the difference between natural or “wild-type” organisms and those with some genetic defect that may interfere in some observable way with regular function. But to carry out such comparison it is necessary that the organisms being studied are, to the extent possible, identical in all other respects, and that the comparison is carried out in an environment that does not, itself, generate atypical responses yet allows the possible effect of the genetic perturbation to be observed. We must face the possibility that such a subtractive approach will never be able to reveal the way in which nature and nurture interact in normal circumstances.

An alternative to the standard subtractive method of genetic perturbations would be a synthetic approach in which living systems would be constructed ab initio from their molecular elements. It is now clear that most of the DNA in an organism is not contained in genes in the usual sense. That is, 98–99 percent of the DNA is not a code for a sequence of amino acids that will be assembled into long chains that will fold up to become the proteins that are essential to the formation of organisms; yet that nongenic DNA is transmitted faithfully from generation to generation just like the genic DNA.

It appears that the sequence of this nongenic DNA, which used to be called “junk-DNA,” is concerned with regulating how often, when, and in which cells the DNA of genes is read in order to produce the long strings of amino acids that will be folded into proteins and which of the many alternative possible foldings will occur. As the understanding and possibility of control of the synthesis of the bits and pieces of living cells become more complete, the temptation to create living systems from elementary bits and pieces will become greater and greater. Molecular biologists, already intoxicated with their ability to manipulate life at its molecular roots, are driven by the ambition to create it. The enterprise of “Synthetic Biology” is already in existence.

In May 2010 the consortium originally created by J. Craig Venter to sequence the human genome gave birth to a new organization, Synthetic Genomics, which announced that it had created an organism by implanting a synthetic genome in a bacterial cell whose own original genome had been removed. The cell then proceeded to carry out the functions of a living organism, including reproduction. One may argue that the hardest work, putting together all the rest of the cell from bits and pieces, is still to be done before it can be said that life has been manufactured, but even Victor Frankenstein started with a dead body. We all know what the consequences of that may be.

1. Anthony J.F. Griffiths, Susan R. Wessler, Sean B. Carroll, and Richard C. Lewontin, Introduction to Genetic Analysis, ninth edition (W.H. Freeman, 2008).

2. The Scientist, Vol. 5, No. 1 (January 7, 1991).

Brazilians are more European than previously imagined (O Globo, JC)

JC e-mail 4203, February 18, 2011.

The conclusions come from research coordinated by geneticist Sérgio Danilo Pena of the Universidade Federal de Minas Gerais and published in the scientific journal PLoS.

Brazilians are far more European than African. Forget all the analyses ever made on the basis of concepts such as race and skin color. The first large study to measure the ancestry of the country’s population through its genetics reveals a European contribution much greater than previously imagined, preponderant across the entire territory, including the North and Northeast regions. The conclusions come from the research coordinated by geneticist Sérgio Danilo Pena of the Universidade Federal de Minas Gerais and published in the scientific journal PLoS.

The study found that, in every region, European ancestry is dominant, with percentages ranging from 60.6% in the Northeast to 77.7% in the South. Even people who call themselves black by the criteria of the Instituto Brasileiro de Geografia e Estatística (IBGE) show, in fact, a high degree of white ancestry. In Bahia, for example, black Brazilians have 53.9% European roots. In the analysis of the specialists involved in the study, the “Europeanization” of Brazil took place from the end of the 19th century onward, with the end of the slave trade and of slavery and the start of a migratory flow of approximately 6 million European workers.

Beyond the historical and anthropological impact the new study’s results may have, Sérgio Pena also stresses their importance from a medical point of view: treatments may be more homogeneous than previously imagined.

Formed from three different ancestral roots, indigenous, European, and African, the Brazilian population has always believed itself to be very heterogeneous. But the study concludes that, regardless of any classification based on skin color, Brazilians are very homogeneous from the point of view of their ancestry.
(Roberta Jansen, O Globo)

[For a more sophisticated discussion of the issue, see: Santos, Ricardo Ventura, Peter H. Fry, Simone Monteiro, Marcos Chor Maio, José Carlos Rodrigues, Luciana Bastos‐Rodrigues, and Sérgio D. J. Pena, “Color, Race, and Genomic Ancestry in Brazil: Dialogues between Anthropology and Genetics,” Current Anthropology, Vol. 50, No. 6, 2009, pp. 787–819.]

Astonishing Photos of One of Earth's Last Uncontacted Tribes (Fox News)

February 01, 2011 | FoxNews.com

 

Tribe members painted with red and black vegetable dye watch a Brazilian government plane overhead. (Gleison Miranda/FUNAI/Survival International)

Stunning new photos taken over a jungle in Brazil reveal new images of one of the last uncontacted tribal groups on the planet.

The photos reveal a thriving, healthy community living in Brazil near the Peruvian border, with baskets full of manioc and papaya fresh from their gardens, said Survival International, a rights organization working to preserve tribal communities and organizations worldwide.

Survival International created a stir in 2008, when it released similar images of the same tribal groups — images that sparked widespread allegations that the pictures were a hoax. Peru’s President Garcia has publicly suggested uncontacted tribes have been ‘invented’ by ‘environmentalists’ opposed to oil exploration in the Amazon, while another spokesperson compared them to the Loch Ness monster, the group explains on its site.

Survival International strongly disputes those allegations, however. A spokeswoman for the group told FoxNews.com that the Brazilian government has an entire division dedicated to helping out uncontacted tribes.

“In fact, there are more than one hundred uncontacted tribes around the world,” the group explains.

Peru has yet to make a statement about the newly released pictures, which were taken by Brazil’s Indian Affairs Department, the group said. Survival International is using them as part of its campaign to protect the tribe’s survival — they are in serious jeopardy, the organization argues, due to an influx of illegal loggers invading the Peru side of the border.

Brazilian authorities believe the influx of loggers is pushing isolated Indians from Peru into Brazil, and the two groups are likely to come into conflict. Marcos Apurina, coordinator of Brazil’s Amazon Indian organization COIAB, said in a statement that releasing the images was necessary to prove the logging was going on — and to protect the native groups.

“It is necessary to reaffirm that these peoples exist, so we support the use of images that prove these facts. These peoples have had their most fundamental rights, particularly their right to life, ignored … it is therefore crucial that we protect them,” he said.

“The illegal loggers will destroy this tribe,” agreed Survival International’s director Stephen Corry. “It’s vital that the Peruvian government stop them before time runs out. The people in these photos are self-evidently healthy and thriving. What they need from us is their territory protected, so that they can make their own choices about their future.”

Read more: http://www.foxnews.com/scitech/2011/02/01/astonishing-photos-reveal-earths-uncontacted-tribes/#ixzz1DKZgWVgW

 

Biosemiotics: Searching for meanings in a meadow (New Scientist)

23 August 2010 by Liz Else

Are signs and meanings just as vital to living things as enzymes and tissues? Liz Else investigates a science in the making

In your own world, enwrapped in myriad others (Image: WestEnd61/Rex Features)

EVERY so often, something shows up on the New Scientist radar that we just can’t identify easily. Is it a bird? Is it a plane? Is it a brand new type of flying machine that we are going to have to study closely?

That was our reaction when we first heard about a small conference held in June at the philosophy department of the Portuguese Catholic University in Braga. There, a group of biologists, neuroscientists, philosophers, information technologists and other scholars from all over the world gathered to discuss some revolutionary ideas for developing the hitherto obscure field of biosemiotics.

It soon became clear that, unlike most revolutionaries, this group’s goal was not to overturn the established order. They don’t attack the current way of doing science (they see its value plainly), but they do believe that for biology to become a more fully explanatory science, it needs a more encompassing framework. This framework needs to be able to explain an under-studied aspect of all living organisms: the capacity to navigate their environments through the processing of signs.

Biology, of course, already concerns itself with information: cell signalling, the genetic code, pheromones and human language, for example. What biosemiotics aims to do is to weave these disparate strands into a single coherent theory of biological meaning.

At first glance, the group seems to have chosen an unfortunate and incomprehensible name for its activity: semiotics is the study of signs and symbols most commonly associated with linguistic philosophers such as Ferdinand de Saussure. “Biosemiotics”, then, might sound like the name of some arcane mix of biological science and linguistic philosophy. Luckily, though, the true message of biosemiotics is clear: we may do better to stop thinking about the biological world solely in terms of its physical and chemical properties, and to see it also as a world made up of biological signs and “meanings”.

One of the nascent field’s leading lights, Donald Favareau of the National University of Singapore, provides a definition on the group’s website. “Biosemiotics is the study of the myriad forms of communications… observable both within and between living systems. It is thus the study of representation, meaning, sense, and the biological significance of sign processes, from intracellular signalling processes to animal display behaviour to human… artefacts such as language and abstract symbolic thought.”

To get a better sense of what this means, it is best to go back to the field’s roots. Biosemiotics traces its earliest influences to the independent efforts of an Estonian-born biologist in the early 20th century and an American philosopher of the 19th century, who wrote much of his work hidden in an attic to avoid his creditors.

Estonian-born Jakob von Uexküll was an animal physiologist whose 1934 book A Stroll Through the Worlds of Animals and Men: A picture book of invisible worlds – and later works – inspired Konrad Lorenz and Niko Tinbergen, who then went on to win a Nobel prize in 1973 for their studies in animal behaviour, or ethology.

Von Uexküll wrote: “If we stand before a meadow covered with flowers, full of buzzing bees, fluttering butterflies, darting dragonflies, grasshoppers jumping over blades of grass, mice scurrying, and snails crawling about, we would be inclined to ask ourselves the unintended question: Does the meadow present the same view to the eyes of so many various animals as it does to ours?”

He thought that a naive person would intuitively answer that it is the same meadow to every eye. Physical scientists, he thought, would see all the animals in the meadow as “mere mechanisms, steered here and there by physical and chemical agents, the meadow consists of a confusion of light waves and air vibrations… which operate the various objects in it”.

For von Uexküll, both views were wrong. Each creature in the meadow lived in “its own world filled with the perceptions which it alone knows”, and it was in accordance with that experiential world – and not the whole, unseen but physically existing world – that the creature had to coordinate its actions to eat, flee, mate and sustain itself.

For some animals, that subjective perceptual universe, or Umwelt, as von Uexküll called it, writing in German, is narrow. He describes the Umwelt of a tick, which sits “motionless on the tip of a branch until a mammal passes below it. The smell of the butyric acid awakens it and it lets itself fall. It lands on the coat of its prey, through which it burrows to reach and pierce the warm skin… The pursuit of this simple meaning rule constitutes almost the whole of the tick’s life.” By reacting only to the single odorant of sweat, the tick reduces the countless characteristics of the world of host animals to a simple common denominator in its own world.

So von Uexküll’s meadow is alive with myriad perceptual worlds, each one evolving within, and functioning as, a different web of meaning for its species. To understand why animals are organised the way they are, and why they act on the world as they do, he explained: “Meaning is the guiding star that biology must follow.”

Von Uexküll’s pioneering sensation-action “feedback-cycle” model for explaining the mechanics of biological meaning was revolutionary for its time. Indeed, it anticipated by many decades the science of cybernetics, which studies systems of control. But his model is now considered too mechanical and simplistic by most biosemioticians. To build what they hope might be a more scientifically fertile model, many of them base their understanding on the semiotic logic of the philosopher Charles Sanders Peirce.
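To make the feedback cycle concrete, here is a minimal sketch of a perception-action loop built around the tick example above. It is purely illustrative: the class, method names and threshold value are our own inventions, not anything proposed by von Uexküll or the biosemioticians.

    # Illustrative sketch of a sensation-action feedback cycle: an organism
    # samples only the signs its Umwelt admits, then acts on what it perceives.
    # All names and values are hypothetical, chosen to mirror the tick example.

    class TickUmwelt:
        """The tick's perceptual world, reduced to a single meaningful sign."""

        BUTYRIC_ACID_THRESHOLD = 0.5  # arbitrary illustrative value

        def perceive(self, environment: dict) -> bool:
            # The tick is blind to everything except this one odorant.
            return environment.get("butyric_acid", 0.0) > self.BUTYRIC_ACID_THRESHOLD

        def act(self, sign_present: bool) -> str:
            # Perception feeds directly back into action, closing the cycle.
            return "drop onto host" if sign_present else "wait motionless"

    tick = TickUmwelt()
    # The meadow's full physical description is irrelevant to the tick...
    meadow = {"light_waves": 0.9, "air_vibrations": 0.7, "butyric_acid": 0.8}
    # ...only the one sign carries meaning in its world.
    print(tick.act(tick.perceive(meadow)))  # -> "drop onto host"

The point of the sketch is exactly the one the tick illustrates: the organism's behaviour is driven not by the physical world in its entirety, but by the few signs that carry meaning within its Umwelt.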

Peirce was born in 1839 in Cambridge, Massachusetts. His father was a professor of mathematics and astronomy at Harvard University. Peirce junior was a brilliant but rebellious student who suffered from both neuralgia and depression. Known today as the father of the philosophical school of pragmatism, he made, as a student, the serious mistake of angering his chemistry professor, who went on to become president of Harvard and who, in a lifelong feud, ensured that Peirce never gained a permanent post at any university.

For the 55 years after he graduated, Peirce wrote scientific and philosophical dictionary and encyclopaedia entries to support himself and his ongoing studies, which included producing the world’s first photometric star catalogue at the Harvard Astronomical Observatory and working as a geodesist for the US Coast and Geodetic Survey. It was a difficult life: he was often without heat and food, and was kept alive thanks to the kindness of his brother, neighbours and benefactors, including his closest friend and admirer, the psychologist William James.

Peirce’s work in logic, mathematics and philosophy ran to an astonishing 60,000 pages. Much of this has been discovered and re-examined only recently, giving rise to the vigorous field of Peircean studies. He saw logic as a formal doctrine of signs, and his theory of signs is important in modern biosemiotics.

Most of us naively conceive of a “sign” as standing for something concrete: a red traffic light simply means “stop”. In other words, the two things – a sign and its meaning – are directly connected in a sign relationship. Peirce, however, saw a sign as representing a relation between three things.

Take the everyday example given by Jesper Hoffmeyer, a biochemist at the University of Copenhagen, Denmark, and a leader in biosemiotics, in his book Signs of Meaning in the Universe. Suppose a child breaks out in a rash of red spots and is taken to the doctor by his mother. For the mother, the spots are a sign that her child is sick. The doctor knows they mean that the child has measles. As Peirce put it in his most general formulation: “a sign is something which stands to someone, for something, in some respect”. The red spots are not automatically a sign of measles to just anyone, but only to “someone”, in this case the doctor.

Peirce saw all signs as involving a triadic relation: the sign “vehicle” (the red spots); the “object” to which the sign-bearer refers (measles); and the “interpretant”, the system that allows the realisation of the sign-object relation to take place (the doctor’s thinking) and that acts accordingly upon that relation.
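Rendered as a small data structure, the triad can be made concrete. The following sketch is purely illustrative (biosemiotics has no canonical formalism, and the names here are our own), using Hoffmeyer’s measles example: the same sign vehicle yields different objects depending on the interpretant.

    # A hypothetical sketch of Peirce's triadic sign relation.
    # Names and structure are illustrative, not a standard biosemiotic model.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Sign:
        vehicle: str                        # the perceptible mark ("red spots")
        interpretant: Callable[[str], str]  # the interpreting system's reading

        def object_of(self) -> str:
            # The object is not fixed by the vehicle alone: it is realised
            # only through an interpretant, so different interpreters can
            # derive different objects from the same vehicle.
            return self.interpretant(self.vehicle)

    doctor = lambda v: "measles" if v == "red spots" else "unknown"
    mother = lambda v: "my child is sick"

    print(Sign("red spots", doctor).object_of())  # -> "measles"
    print(Sign("red spots", mother).object_of())  # -> "my child is sick"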

He wanted to investigate and uncover the complex logic by which “in every scientific intelligence, one sign gives birth to another, and especially one thought brings forth another”. His insight was to see that even the simplest sign must be considered as a triadic relation, in which the sign vehicle, object and interpreting system all play ineliminable parts – an insight biosemioticians believe science would do well to explore more fully.

This realisation led Peirce away from devising linear chains of logic that relied on just two factors, to the construction of a “sign” logic that is an endlessly branching, multidimensional network. Although Peirce’s work is theoretical, there are clear parallels between von Uexküll’s model of the meadow, filled with different meanings, interpreted by the different biological systems of different creatures, and Peirce’s model of the sign as ultimately a kind of relation that living agents adopt towards things for the accomplishment of various ends and actions.

When Peirce wrote, he was thinking primarily of signs as relations that enable human thought to effectively understand the world. Accordingly, his logic has recently been applied in efforts to understand the origins of human language that reject the idea that language appeared either as a lucky accident that endowed humans with a universal grammar (as posited by the linguist and philosopher Noam Chomsky) or as a by-product of an enlarged brain.

Instead, researchers such as Terrence Deacon, a biological anthropologist at the University of California, Berkeley, have used Peirce’s sign logic to explain how language may have arisen as an evolutionary consequence of pre-linguistic symbolic activity.

But biosemiotics applies the idea of signs and signalling much more widely than just the analysis of human language. Take these sentences from a recent “Perspectives” article in Science magazine: “Living cells are complex systems that are constantly making decisions in response to internal or external signals. Among the most notable carriers of information are… enzymes that receive inputs from cell surface or internal receptors and determine what actions should be taken in response…” (Science, vol 328, p 983).

The broadest scope

Words like “signals”, “information” and “inputs” litter the biology literature. But all of these usages are metaphorical. What biosemioticians really want is an analysis which goes further, says Charbel El-Hani, a biologist at the Federal University of Bahia in Brazil. “The importance of going beyond metaphor and really building a theory of information is underlined by the reiterated claim that biology is a science of information,” El-Hani told New Scientist.

“What biosemioticians really want is an analysis which goes beyond metaphor”

The scope envisioned for the new field is therefore truly broad: a viewpoint which connects everything from biomolecular networks sending signals that control cell behaviour to animal behaviour and human language. That is the agreed goal, but the scientists and philosophers involved each bring their own uniquely interdisciplinary perspective, and so do not always agree on the best way forward. It is safe to say that this new science is very much in ferment.

To get a feel for this, New Scientist asked a range of thinkers attending the Braga conference to explain how they saw the field. More than 20 responded. The wildly different roads they have travelled to reach biosemiotics, and the different areas to which they wanted to apply it, were evident in their responses.

Favareau came to biosemiotics as a result of “growing discontent with the inability of cognitive neuroscience to explain the reality of experiential ‘meaning’ at the same level that it was so successful in, and manifestly committed to, explaining the mechanics of the electrochemical transmission events by which such meanings are asserted (without explanation) to be produced”.

For Gerard Battail, an information theorist at Télécom ParisTech in France, it is the fact that mainstream biology, while loosely using a vocabulary borrowed from communication theory (“pathways”, “codes” and the like), “remains basically concerned with the flow of matter and energy into and between living entities, failing to recognise [that] the information flow is at least as important”.

Frederik Stjernfelt of Aarhus University in Denmark echoes El-Hani: “Notions such as ‘information’, ‘message’, ‘representation’, ‘code’, ‘signal’, ‘cue’, ‘communication’ and ‘sign’ crop up all over biology,” he says. He points out, however, that while the use of such terms is apparently unavoidable in explaining the workings of living systems, rarely, if ever, are such concepts explicitly defined as technical terms. His version of biosemiotics sees this as an explanatory blind spot that should be taken seriously.

“If not, the danger is that biology is trapped in a dualism where all organic communication, from cells to apes, is claimed to be describable as simple physiochemical causes only, while, on the other hand, full intentional meaning is a specifically human privilege. How could such a thing have developed phylogenetically, if not from simpler semiotic processes in biology?” asks Stjernfelt.

Kalevi Kull at the University of Tartu in Estonia stays closer to von Uexküll. “Biology has studied how organisms and living communities are built. But it is no less important to understand what such living systems know, in a broad sense; that is, what they remember (what agent-object sign relations are biologically preserved), what they recognise (what distinctions they are capable and not capable of), what signs they explore (how they communicate, make meanings and use signs) and so on. These questions are all about how different living systems perceive the world, how they model the world, what experience motivates what actions, based on those perceptions.”

These answers and many more are just a taste of how biosemiotics is shaping up. As Favareau explains, we must remember that it is still a “proto-science, closer to a very lively debate between scientists about what such a future science will have to explain about biological meaning, and how it will do so, than it is to a fully realised science with a common terminology and a settled methodology”.

The founders are open to new ideas. “If one truly recognises the need for something like biosemiotics, then one owes it to science to apply one’s best thought and effort to the task,” writes Favareau in the introduction to a recently released anthology, Essential Readings in Biosemiotics (Springer, 2009).

Marcello Barbieri, a molecular biologist at the University of Ferrara in Italy, another key figure, echoes Favareau. He brings yet another perspective to the field – a “code model” that he has applied to the genetic code, splicing and other cellular codes. “Nothing is settled yet in biosemiotics,” he says. “Everything is on the move, and the exploration of the scientifically new continent of ‘meaning’ has just begun.” Watch this space.

“The exploration of the scientifically new continent of ‘meaning’ has just begun”

Bibliography

To learn more about biosemiotics and its history, download a free PDF of the first chapter of Donald Favareau’s Essential Readings in Biosemiotics at www.bit.ly/axHqMO, courtesy of Springer Science publishers and Donald Favareau.