Tag archive: Language

Before Babel? Ancient Mother Tongue Reconstructed (Live Science)

Tia Ghose, LiveScience Staff Writer

06 May 2013, 03:00 PM ET

An old oil painting of the Tower of Babel. The idea of a universal human language goes back at least to the Bible, in which humanity spoke a common tongue but was punished with mutual unintelligibility after trying to build the Tower of Babel all the way to heaven. Now scientists have reconstructed words from such a language. (Credit: Pieter Brueghel the Elder, 1526/1530–1569)

The ancestors of people from across Europe and Asia may have spoken a common language about 15,000 years ago, new research suggests.

Now, researchers have reconstructed words, such as “mother,” “to pull” and “man,” which would have been spoken by ancient hunter-gatherers, possibly in an area such as the Caucasus. The word list, detailed today (May 6) in the journal Proceedings of the National Academy of Sciences, could help researchers retrace the history of ancient migrations and contacts between prehistoric cultures.

“We can trace echoes of language back 15,000 years to a time that corresponds to about the end of the last ice age,” said study co-author Mark Pagel, an evolutionary biologist at the University of Reading in the United Kingdom.

Tower of Babel

The idea of a universal human language goes back at least to the Bible, in which humanity spoke a common tongue but was punished with mutual unintelligibility after trying to build the Tower of Babel all the way to heaven.

But not all linguists believe in a single common origin of language, and trying to reconstruct that language seemed impossible. Most researchers thought they could only trace a language’s roots back 3,000 to 4,000 years. (Even so, researchers recently said they had traced the roots of a common mother tongue to many Eurasian languages back 8,000 to 9,500 years to Anatolia, a southwestern Asian peninsula that is now part of Turkey.)

Pagel, however, wondered whether language evolution proceeds much like biological evolution. If so, the most critical words, such as the frequently used words that define our social relationships, would change much more slowly.

To find out if he could uncover those ancient words, Pagel and his colleagues in a previous study tracked how quickly words changed in modern languages. They identified the most stable words. They also mapped out how different modern languages were related.

They then reconstructed ancient words based on the frequency at which certain sounds tend to change in different languages — for instance, p’s and f’s often change over time in many languages, as in the change from “pater” in Latin to the more recent term “father” in English.

The researchers could predict what 23 words, including “I,” “ye,” “mother,” “male,” “fire,” “hand” and “to hear” might sound like in an ancestral language dating to 15,000 years ago.

In other words, if modern-day humans could somehow encounter their Stone Age ancestors, they could say one or two very simple statements and make themselves understood, Pagel said.

Limitations of tracing language

Unfortunately, this language technique may have reached its limits in terms of how far back in history it can go.

“It’s going to be very difficult to go much beyond that, even these slowly evolving words are starting to run out of steam,” Pagel told LiveScience.

The study raises the possibility that researchers could combine linguistic data with archaeology and anthropology “to tell the story of human prehistory,” for instance by recreating ancient migrations and contacts between people, said William Croft, a comparative linguist at the University of New Mexico, who was not involved in the study.

“That has been held back because most linguists say you can only go so far back in time,” Croft said. “So this is an intriguing suggestion that you can go further back in time.”

Cracking the Semantic Code: Half a Word’s Meaning Is 3-D Summary of Associated Rewards (Science Daily)

Feb. 13, 2013 — We make choices about pretty much everything, all the time — “Should I go for a walk or grab a coffee?”; “Shall I look at who just came in or continue to watch TV?” — and to do so we need something common as a basis to make the choice.

Half of a word’s meaning is simply a three-dimensional summary of the rewards associated with it, according to an analysis of millions of blog entries. (Credit: © vlorzor / Fotolia)

Dr John Fennell and Dr Roland Baddeley of Bristol’s School of Experimental Psychology followed a hunch that the common quantity, often referred to simply as reward, was a representation of what could be gained, together with how risky and uncertain it is. They proposed that these dimensions would be a unique feature of all objects and be part of what those things mean to us.

Over 50 years ago, psychologist Charles Osgood developed an influential method, known as the ‘semantic differential’, that attempts to measure the connotative, emotional meaning of a word or concept. Osgood found that about 50 per cent of the variation in a large number of ratings that people made about words and concepts could be captured using just three summary dimensions: ‘evaluation’ (how nice or good the object is), ‘potency’ (how strong or powerful an object is) and ‘activity’ (whether the object is active, unpredictable or chaotic). So, half of a concept’s meaning is simply a measure of how nice, strong, and active it is. The main problem is that, until now, no one knew why.
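
To make the idea concrete, here is a minimal sketch, not taken from Osgood or from the Bristol study, of how ratings on bipolar scales can be compressed into a few summary dimensions; the concepts, scales, and numbers are invented, and PCA stands in for the factor analysis actually used in this line of work.

```python
# Illustrative sketch: reducing semantic-differential ratings to summary dimensions.
# The ratings below are made up for demonstration; Osgood's studies used many more
# scales and respondents, and factor analysis rather than plain PCA.
import numpy as np

# Rows = concepts, columns = bipolar 7-point scales (e.g. bad-good, weak-strong, calm-active).
ratings = np.array([
    [6.1, 5.0, 2.2],   # "mother"
    [2.0, 6.3, 6.0],   # "storm"
    [5.5, 2.1, 1.8],   # "kitten"
    [1.5, 5.8, 5.2],   # "knife"
], dtype=float)

centered = ratings - ratings.mean(axis=0)            # center each scale
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)                       # variance explained per component

print("variance explained by each dimension:", np.round(explained, 2))
print("concept scores on the first dimension:", np.round(u[:, 0] * s[0], 2))
```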

Dr Baddeley explained: “Over time, we keep a running tally of all the good and bad things associated with a particular object. Later, when faced with a decision, we can simply choose the option that in the past has been associated with more good things than bad. This dimension of choice sounds very much like the ‘evaluation’ dimension of the semantic differential.”

To test this, the researchers needed to estimate the number of good or bad things happening. At first sight, estimating this across a wide range of contexts and concepts seems impossible; someone would need to be observed throughout his or her lifetime and, for each of a large range of contexts and concepts, the number of times good and bad things happened recorded. Fortunately, a more practical solution is provided by the recent phenomenon of internet blogs, which describe aspects of people’s lives and are also searchable. Sure enough, after analysing millions of blog entries, the researchers found that the evaluation dimension was a very good predictor of whether a particular word was found in blogs describing good situations or bad.
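
The blog analysis itself is not described here in enough detail to reproduce, but a toy tally of the kind implied might look like the following; every word, count, and rating below is invented for illustration.

```python
# Toy illustration: does a word's "evaluation" rating track how often the word
# appears in good-context versus bad-context text? All numbers are invented.
import numpy as np

words        = ["holiday", "funeral", "puppy", "traffic"]
evaluation   = np.array([ 2.1, -1.8,  2.4, -0.9])   # hypothetical niceness ratings
good_context = np.array([ 410,   35,  520,   90])   # hypothetical counts in "good" blog posts
bad_context  = np.array([  60,  300,   40,  310])   # hypothetical counts in "bad" blog posts

# Log-odds of appearing in a good rather than a bad context.
log_odds = np.log(good_context / bad_context)

# Correlation between the evaluation rating and the contextual log-odds.
r = np.corrcoef(evaluation, log_odds)[0, 1]
print(f"correlation between evaluation and good/bad context log-odds: {r:.2f}")
```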

Interestingly, they found that how frequently a word is used is also a good predictor of how much we like it. This is a well-known effect, the ‘mere exposure effect’, and a mainstay of the multi-billion-dollar advertising industry. When comparing two options we simply choose the one we like the most, and we like it because in the past it has been associated with more good things.

Analysing the data showed that ‘potency’ was a very good predictor of the probability of bad situations being associated with a given object: it measured one kind of risk.

Dr Fennell said: “This kind of way of quantifying risk is called ‘value at risk’ in financial circles, and the perils of ignoring it have been plain to see. Russian Roulette may be, on average, associated with positive rewards, but the risks associated with it are not for everyone!”

It is not the only kind of risk, though. In many situations, ‘activity’ — that is, unpredictability, or more importantly uncontrollability — is a highly relevant measure of risk: a knife in the hands of a highly trained sushi chef is probably safe, a knife in the hands of a drunk, erratic stranger is definitely not.

Dr Fennell continued: “Again, this different kind of risk is relevant in financial dealings and is often called volatility. It seems that the mistake that was made in the credit crunch was not ignoring this kind of risk, but to assume that you could perfectly guess it based on how unpredictable it had been in the past.”

Thus, the researchers propose that half of meaning is simply a summary of how rewarding, and importantly, how much of two kinds of risk is associated with an object. Being sensitive not only to rewards, but also to risks, is so important to our survival, that it appears that its representation has become wrapped up in the very nature of the language we use to represent the world.

Journal Reference:

  1. John G. Fennell, Roland J. Baddeley. Reward Is Assessed in Three Dimensions That Correspond to the Semantic Differential. PLoS ONE, 2013; 8 (2): e55588. DOI: 10.1371/journal.pone.0055588

Juridiquês (Sopro 83)


Juridiquês
 Alexandre Nodari


If it had been possible to build the Tower of Babel without climbing it to the top, it would have been permitted.
(Kafka)

1. A bill authored by Maria do Rosário is making its way through the Brazilian National Congress. It seeks to add to article 458 of the Code of Civil Procedure, which concerns the “essential requirements of the judicial decision,” a fourth clause making it mandatory to provide “a restatement of the operative part of the decision in colloquial language, without the use of terms exclusive to technical-legal Language, supplemented by whatever considerations the Judicial authority deems necessary, so that the jurisdictional service can be fully understood by any person of the people.” The proposal evidently aims to broaden access to Justice and has a democratizing intent. Yet if, on its own, the bill seems reasonable, when set against the torrent of laws and bills that seek to regulate every aspect of human life, from cigarettes to language (a few years ago, the communist-ruralist Aldo Rebelo tried to ban foreign loanwords from Portuguese), we cannot help taking at least a skeptical stance toward it. The bill in itself may be good, but placed in the context of the normative inflation that seeks to purify every aspect of human life, we cannot help having reservations. The desire for cleanliness, for sanitization, for clarity runs through society as a whole, and that desire serves the aspirations of power, or at least is channeled by it. Dominique Laporte, in his History of Shit, recalls that it was in the same year, 1539, that France: 1) first required that laws, administrative acts, judicial proceedings and notarial documents be written in the vernacular, eliminating the ambiguities and uncertainties of Latin and making “clarity” possible; 2) and, shortly thereafter, prohibited citizens from throwing their excrement, their feces and urine, into the street. Cleansing language and cleansing the city: the centralization of power that would result in what we commonly call absolutism has its roots in this will to purity and cleanliness, in this crystalline ideal. Beyond this “desire for clarity,” however, it is worth noting a kind of Freudian slip contained in the bill’s “Justification”; perhaps it is not in fact a slip but something intentional, which matters little. The final paragraph of the justification speaks of “translation into the common vernacular of the technical text of the judicial decision,” as if decisions were not written in Portuguese. There is an essential truth about Law here: it is a language different from the “common vernacular.” In the famous Apology of Socrates, the old sage, speaking before the tribunal that accused him of impiety, says he is “a stranger to the language” spoken there, and asks to be treated as a foreigner who does not know Greek. Law is not a foreign language the way English or Latin are in relation to Portuguese or Greek: Law is the Portuguese or Greek language under another regime of functioning. Before our own national Law, we are like foreigners who do not know their own language. But what is the regime of functioning of that language which, in the “common vernacular,” goes by the name of “juridiquês,” legalese?

2. In a beautiful text on the figure of the notary, Salvatore Satta, one of the most brilliant jurists of the twentieth century, summed up the “drama” of the clerk or scrivener, those mediators between the common people and the jurists, as follows: “To know the will that the one who wills does not know.” It is not that “the one who wills” does not know his own will; “the one who wills” does not know how to translate it juridically. That is, Satta continues, what the notary actually does is “reduce the will of the party into the will of the legal order.” This is the meaning of the Latin maxim Da mihi factum, dabo tibi jus (“State the fact and I will tell you the law”): to reduce the “volition directed at a practical aim which the party proposes to attain into a juridical and juridically typified will,” that is, to translate a will, a fact, an act of life into legal types. Law does not properly deal with facts or acts, but with juridical facts or acts, those which correspond to certain types laid down in advance. To carry an act or fact of life over into Law is to typify it. In this sense, the type is perhaps the basic grammatical element of legal language. But what exactly is a type? The person who reflected best on the notion of “type” was not a jurist but a sociologist, Max Weber, who, with his so-called “ideal types,” consolidated his method in opposition to Durkheim’s empirical-comparative method. For Weber, pure or ideal types could not be found “in reality”; what existed “in fact” was always a more or less hybrid composite of types which, hence their circular nature, were constructed out of elements scattered through the very “reality” to which they were applied. The very etymology of type already indicates this ambiguous character, between empiricism and abstraction: the Greek typos means image, vestige, trace, that is, absence, the index of an immemorial presence. To use an example from Vilém Flusser: “typoi are like the traces that a bird’s feet leave in the sand of the beach. The word then means that these traces can be used as models for classifying the bird in question.” The two forms of Law known to the West are the two facets of the type: the one of Roman-Germanic stock starts from statutes, from abstraction, from the type, in order to reach the empirical case; and the Common Law, on the contrary, starts from empirical cases in order to convert them into the typical, the abstract. But, as Satta says, in typification there is a reduction, something is lost, including ordinary language.

3. The type answers to a basic need of the functioning of Law, and of the modus operandi of its specific (or typical) language: prescription. “If” type X occurs or is present, “then” the consequence, the sanction, is Y. The problem of every judicial proceeding lies in knowing whether event A of life does or does not correspond to type X, so that consequence Y may follow. Since norms are founded on types, which are nothing but language with no necessary relation to the things and facts of life, a discursive construction is needed to connect the event of life to the legal type; if Law were pure subsumption, Giorgio Agamben reminds us, we could dispense with that immense judicial apparatus called the trial, which involves not only the judge, the lawyer and the prosecutor, but countless other mediators between ordinary language and legal language (the notary, the stenographer, etc.). For this typification to take place, therefore, not only must the legally relevant fact be cast in the form of a type, but so must everything surrounding it, so that singularity may be reduced to typification, that is, to the reproduction of that typical case (in the form of case law). We know well how this works: from police reports to judicial decisions, the facts of life are narrated in a language that makes them typical, abstract, and reproducible. Italo Calvino masterfully summed up this “disquieting” process of translation:


The clerk sits at the typewriter. The person being questioned, seated across from him, answers the questions with a slight stammer, but careful to say, as exactly as possible, everything he has to say and not one word more: “Early in the morning, I was going down to the basement to turn on the heater when I found all those bottles of wine behind the coal box. I took one to have with dinner. I had no idea the liquor store upstairs had been broken into.” Impassive, the clerk taps out his faithful transcription on the keys: “The undersigned, having proceeded to the basement in the early hours of the morning in order to initiate the operation of the heating installation, declares that he chanced upon a good quantity of vinicultural products located behind the receptacle intended for the storage of fuel, and that he effected the removal of one of the aforesaid articles with the intention of consuming it during the evening meal, being unaware of the break-in that had occurred at the commercial establishment situated above.”

Calvino called this “semantic terror,” or the “anti-language”: “the flight from every word that has a meaning of its own.” The danger, in his view, was that this “anti-language” might invade ordinary life. But in this flight from the word that has a meaning of its own, there is a move toward words that cover more than one meaning and can therefore be reproduced in many situations. This reproducibility is, as we have already stressed, essential to a language based on types; according to Flusser, it is what distinguishes the notion of type from the notion of character, which privileges what is characteristic, that is, what is proper to a thing.

4. The type, then, as the basic element of legal grammar, serves to make norms reproducible in the face of the singularity of life’s events; but to do so it abstracts those events, and abstracts from them. Proceedings and norms, composed of countless types, thus run alongside life, at a distance from it, as if they were a fictional narrative. The great Romanist Yan Thomas argues that “fiction is a procedure that (…) belongs to the pragmatics of law.” The ancient Romans, Thomas continues, had no qualms, when faced with an exceptional situation in which they did not want a given rule to apply, about choosing to change the situation juridically rather than alter the rule. One example among many: seeking to validate the wills of certain citizens who had died while in enemy captivity, which by law invalidated such wills, the Lex Cornelia of 81 B.C. chose to create a fiction, of which we know two versions: 1) the first, a positive fiction, was to treat the wills as if the citizens had died under the normal status of citizenship; 2) the second, a negative fiction, held the wills valid as if the citizens had not died in the power of the enemy. Why this discursive withdrawal from “reality,” from life? Why, in its narrative, or in its form, does Law depart from the ordinary account, creating another reality, almost a parallel dimension? Here the second element of the prescriptive language that characterizes Law comes in: the sanction, the “then Y.” The function of Law, as we know, is to alter reality, life, through language, through the word, that is, to create effective words, even if guaranteeing the efficacy of a statute or a judicial decision requires the use of public force. (Indeed, no common vernacular is plain enough to explain to “any person of the people” that the decision granting them victory still has to be enforced, in a procedure that will take several more years.) It is from this function of Law, altering reality through language, that the retrospective illusion arises of a pre-juridical stage in which religion, magic and law would have coincided. In truth, what Law and Magic share is the same modus operandi of language, the performative (“I swear,” “I sentence you,” “I promise”), in which, in Agamben’s words, “the meaning of an utterance (…) coincides with the reality that is itself produced by the act of utterance.” In this sense, Law is, still today, magical. Jurists’ taste for ornamental language, for maxims, for ritual language and for euphemism stems from this connection: reality can be created out of an empty language (or one emptied out, removed from reality). We could therefore say that Law is at once the quasi-magical knowledge of this modus operandi and that which guarantees that such performative language turns into act: that contracts are performed, that laws are applied, and so on. Yet for Law to operate magically upon reality, it must withdraw from it; for its language to produce effects on life, it must withdraw from the language that communicates or expresses, the “common vernacular.”

5. Perhaps, then, “juridiquês” is not (merely) a judicial practice going back to the bacharelismo of Brazil’s law-school culture and to pseudo-erudition, an old vestige that could be removed. Rather, it may be a judicial practice constitutive of what we know as Law. Émile Benveniste, dwelling on the fact that the Latin verb iurare (to swear) corresponds to the noun ius, which we are used to translating as “law” or “right,” argues that ius should in fact mean “the formula of conformity”: “ius, in general, is really a formula, and not an abstract concept.” It is worth noting that Benveniste finds in the ius of Roman law the same “magical” character we have been pointing out, involving separation from ordinary language and the production of effects on reality, and he further shows that this character would already be present in the document that jurists usually consider one of the cornerstones of Western law, the Law of the Twelve Tables. Benveniste says: “iura is the collection of the judgments of law. (…) These iura (…) are formulas that pronounce a decision of authority; and whenever these terms [ius, iura] are taken in their strict sense, we find (…) the notion of fixed texts, of established formulas, whose possession is the privilege of certain individuals, certain families, certain corporations. The exemplary type of these iura is represented by the oldest code of Rome, the Law of the Twelve Tables, originally composed of judgments formulating the state of ius and pronouncing: ita ius esto. Here reigns the empire of the word, manifested in terms of concordant meaning; in Latin, iu-dex. (…) It is not doing but always pronouncing that is constitutive of ‘law’: ius dicere and iu-dex lead us back to this constant link. (…) It is through this act of speech, ius dicere, that the whole terminology of the judicial process develops: iudex, iudicare, iudicium, iuris-dictio, etc.” Thus the type, typification, is one of the ways in which language is converted into formula. The formulaic functioning of language in Law, its total removal from ordinary language, can best be seen in those crimes that concern language itself. Two examples, one from antiquity and one very recent, can show how this touches the very logic of Law. The first comes from the famous Greek orator Lysias, who lived at the turn of the fifth to the fourth century B.C. In his speech Against Theomnestus, Lysias argues that the law against slander was toothless, insofar as it prohibited calling someone a “murderer” (androphonon) but was incapable of punishing someone who, like Theomnestus, accused another of having “killed” (apektonenai) his father. The other case occurred in March 2010, before the Brazilian Supreme Federal Court. Arguing against racial quotas, former senator Demóstenes Torres said that “black women (slaves) maintained ‘consensual relations’ with white men (their masters).” What consent, we may ask, can there be between subjects who stand in a relation of master and slave? Yet evidently none of the eleven justices of “unblemished reputation” and “notable legal learning” saw racism there. Had the argument been stated in another form (with reference to a “natural concupiscence” of black women, to take an example from the infamous racist tradition of the Brazilian judiciary), it might have amounted to a legal instance of racism. For something to be inscribed in the sphere of Law, it must be formalized, or rather formularized, turned into formula. This is not only a matter of inscription in legislation, in a statute drafted by the Legislature. Law can exist, and remain grounded in formalism, even where there is no statute in the strict sense, as customary Law proves. Formalization is a process larger than the statute; it encompasses the whole judicial machine, including judges, judicial decisions, lawyers, jurists, the so-called “doctrine,” reaching all the way to society. It is the fixing of permitted or prohibited contents in formulas, a procedure which, as we saw with types, allows their reproduction. This is the paradox of what is usually called, generally pejoratively, “political correctness”: while it produces undeniable material advances, it remains confined to its own formality. That is, the formulas, what one can (or cannot) do or say, reverberate upon the world, modify the world, but they never lose their dimension as formulas. Those who defend Law as a mechanism of social transformation (or even just as a progressive tool) sooner or later run up against this paradox: Law only guarantees what is embodied in formulas (and it is precisely formulas that, at times, block social transformation). From the moment one advocates the legal recognition of certain rights that Law does not recognize, one is advocating the formalization of those rights. In fact, the opposition between material right and formal right is pointless: insofar as the formalization of rights is a historical process, every formal right was once merely a material right, and may become one again. No one is convicted for uttering speech with racist content (matter); the crime of racism exists only when it is uttered in a certain form, through a certain formula.

6. Every jurist knows Hans Kelsen’s normative “pyramid,” in which norms are ordered hierarchically (the lower strata derive their validity from the higher ones), and at whose apex stands the “basic norm.” The problem, as is well known, is that this basic norm is empty of content, that is, presupposed, imaginary, fictional (to postulate the status of the basic norm, Kelsen drew on Vaihinger’s Philosophy of As If, for whom even scientific discourse ultimately rested on some fiction). It is, in other words, a way of giving validity to the system, of referring it back to the One (even if some want to tie it to the principle that agreements must be kept, pacta sunt servanda, and others, far more narrow-minded, to the Constitution). We would thus have a system of norms with content based on a contentless, fictitious norm. Perhaps, however, it would be more productive to understand Law the other way around: a system of empty norms based on a single norm with content, namely that the fiction we know as Law is true. In the present historical moment, we could say that this basic norm crystallizes in two principles: that ignorance of the law is no excuse (closure), and that the judge may not refuse to decide a case (openness). That is, the content of the basic norm would be that Law is a system at once (but not paradoxically) open and closed, which is to say: potentially Total. Closure and dissemination are connected in Law. To be “true,” it cannot own up to its status as pure language; rather, it must annul that status, endowing all language with a potential “efficacy.” Since norms and proceedings are nothing but language with no necessary relation to things, this principle is needed, establishing that some relation between words (norms) and things (facts) must obtain. It is from this empty character of norms and proceedings, from their grounding in language (and not in things), that normative inflation derives, a process inherent to Law. Norms and proceedings are, at bottom, nothing but formulas invoked to try to establish this or that nexus between words and things; but all of them invoke, as their presupposition, the very name of Law, that is, the basic norm: that the fiction is true. The formulas, the types, the maxims, in short, juridiquês, are thus the means by which the fiction is maintained, and by which life, ordinary language, is captured within the sphere of Law at the same time as it is kept at a distance from it. In Kafka’s fictions, the confrontation, and even the intertwining, of fiction and law is a recurring feature. The unfinished novel The Trial stages this confrontation and intertwining well. At the beginning of the novel, when the officers of the law come to detain the protagonist K., he imagines it is merely a theatrical troupe playing a birthday prank at the request of friends. At the end, when his executioners arrive to take him away, K. again wants to believe they are just actors performing, playing a trick on him. And indeed the entire judicial apparatus narrated in the novel seems to be one great fiction: dark basements, hearings held in tenements, moribund lawyers. At no point does the Law appear; K. cannot enter the Law. At no point does K. know what he is accused of. The entire novel is built upon the figure of the mediators, registry clerks, lawyers, officers, who stage a grandiloquent and pathetic trial, a fiction from which K. could step out at any moment. Law and the trial are only great fictional narratives; but these stagings, unlike theatrical ones, take lives. Juridiquês is and is not merely a performance by certain jurists. It is merely the way of narrating a fiction; but that fiction goes by the name of Law, which captures and reduces life, stripping away its singularity and reproducing it as a type. To the “if” of legal prescription there corresponds a “then.” A “then” that is absent from true fiction, which is always and only an “as if.”

Language and China’s ‘Practical Creativity’ (N.Y.Times)

 

AUGUST 22, 2012

By DIDI KIRSTEN TATLOW

Every language presents challenges — English pronunciation can be idiosyncratic and Russian grammar is fairly complex, for example — but non-alphabetic writing systems like Chinese pose special challenges.

There is the well-known issue that Chinese characters don’t systematically map to sounds, making both learning and remembering difficult, a point I examine in my latest column. If you don’t know a character, you can’t even say it.

Nor does Chinese group individual characters into bigger “words,” even when a character is part of a compound, or multi-character, word. That makes meanings ambiguous, a rich source of humor for Chinese people.

Consider this example from Wu Wenchao, a former interpreter for the United Nations based in Hong Kong. On his blog he has a picture of mobile phones being held under a hand dryer. Huh?

The joke is that the Chinese word for hand dryer is composed of three characters, “hong shou ji” (I am using pinyin, a system of Romanization used in China, to “write” the characters in the English alphabet).

Group them as “hongshou ji” and it means “hand dryer.” Group them as “hong shouji” and it means “dry the mobile phone.” (A shouji is a mobile phone.)
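
The ambiguity is easy to state mechanically; the sketch below hard-codes the two groupings for this one phrase (it is not a general Chinese segmenter) just to show how the same syllables support two readings.

```python
# Toy illustration of the segmentation ambiguity in "hong shou ji":
# the same three syllables support two groupings with different meanings.
# The gloss dictionary is hand-written for this example only.
glosses = {
    ("hongshou", "ji"): "hand dryer (lit. 'dry-hands machine')",
    ("hong", "shouji"): "dry the mobile phone",
}

syllables = ("hong", "shou", "ji")
segmentations = [
    ("hongshou", "ji"),   # group the first two syllables
    ("hong", "shouji"),   # group the last two syllables
]

for seg in segmentations:
    assert "".join(seg) == "".join(syllables)   # both cover the same syllables
    print(" + ".join(seg), "->", glosses[seg])
```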

Good fodder for serious linguists and amateur language lovers alike. But does a character script also exert deeper effects on the mind?

William C. Hannas is one of the most provocative writers on this today. He believes character writing systems inhibit a type of deep creativity, but that their effects are not irreversible.

He is at pains to point out that his analysis is not race-based, that people raised in a character-based writing system have a different type of creativity, and that they may flourish when they enter a culture that supports deep creativity, like Western science laboratories.

Still, “The rote learning needed to master Chinese writing breeds a conformist attitude and a focus on means instead of ends. Process rules substance. You spend more time fidgeting with the script than thinking about content,” Mr. Hannas wrote to me in an e-mail.

But Mr. Hannas’s argument is indeed controversial — that learning Chinese lessens deep creativity by furthering practical, but not abstract, thinking, as he wrote in “The Writing on the Wall: How Asian Orthography Curbs Creativity,” published in 2003 and reviewed by The New York Times.

It’s a touchy topic that some academics reject outright and others acknowledge, but are reluctant to discuss, as Emily Eakin wrote in the review.

How does it work?

“Alphabets used in the West foster early skills in analysis and abstract thinking,” wrote Mr. Hannas, emphasizing the views were personal and not those of his employer, the U.S. government.

They do this by making readers do two things: breaking syllables into sound segments and clustering these segments into bigger, abstract, flexible sound units.

Chinese characters don’t do that. “The symbols map to syllables — natural concrete units. No analysis is needed and not much abstraction is involved,” Mr. Hannas wrote.

But radical, “type 2” creativity — deep creativity — depends on being able to match abstract patterns from one domain to another, essentially mapping the skills that alphabets nurture, he continued. “There is nothing comparable in the Sinitic tradition,” he wrote.

Will this inhibit China’s long-term development? Does it mean China won’t “take over the world,” as some are wondering? Not necessarily, Mr. Hannas said.

“You don’t need to be creative to succeed. Success goes to the early adapter and this is where China excels, for two reasons,” he wrote. First, Chinese are good at improving existing models, a different, more practical type of creativity, he wrote, adding that this practicality was noted by the British historian of Chinese science, Joseph Needham.

Yet there is a further step to this argument, and this is where Mr. Hannas’s ideas become explosive.

Partly as a result of these cultural constraints, China has built an “absolutely mind-boggling infrastructure” to get hold of cutting-edge foreign technology — by any means necessary, including large-scale, apparently government-backed, computer hacking, he wrote.

For more on that, see a hard-hitting Bloomberg report, “Hackers Linked to China’s Army Seen From E.U. to D.C.”

Non-Chinese R.&D. gets “outsourced” from its place of origin, “while China reaps the gain,” Mr. Hannas wrote, adding that many people believed this was “normal business practice.”

“In fact, it’s far from normal. The director of a U.S. intelligence agency has described China’s informal technology acquisition as ‘the greatest transfer of wealth in history,’ which I regard as a polite understatement,” he said.

Mr. Hannas has co-authored a book on this, to appear in the spring. It promises to shake things up. Watch this space.

Irony Seen Through the Eye of MRI (Science Daily)

ScienceDaily (Aug. 3, 2012) — In the cognitive sciences, the capacity to interpret the intentions of others is called “Theory of Mind” (ToM). This faculty is involved in the understanding of language, in particular by bridging the gap between the meaning of the words that make up a statement and the meaning of the statement as a whole.

In recent years, researchers have identified the neural network dedicated to ToM, but no one had yet demonstrated that this set of neurons is specifically activated by the process of understanding of an utterance. This has now been accomplished: a team from L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) has shown that the activation of the ToM neural network increases when an individual is reacting to ironic statements.

Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.

In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this science, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.

Within the past 10 years, researchers in cognitive neuroscience have identified a neural network dedicated to ToM that includes specific areas of the brain: the right and left temporal parietal junctions, the medial prefrontal cortex and the precuneus. To identify this network, the researchers relied primarily on non-verbal tasks based on the observation of others’ behavior[1]. Today, researchers at L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) have established, for the first time, the link between this neural network and the processing of implicit meanings.

To identify this link, the team focused their attention on irony. An ironic statement usually means the opposite of what is said. In order to detect irony in a statement, the mechanisms of ToM must be brought into play. In their experiment, the researchers prepared 20 short narratives in two versions, one literal and one ironic. Each story contained a key sentence that, depending on the version, yielded an ironic or literal meaning. For example, in one of the stories an opera singer exclaims after a premiere, “Tonight we gave a superb performance.” Depending on whether the performance was in fact very bad or very good, the statement is or is not ironic.

The team then carried out functional magnetic resonance imaging (fMRI) analyses on 20 participants who were asked to read 18 of the stories, chosen at random, in either their ironic or literal version. The participants were not aware that the test concerned the perception of irony. The researchers had predicted that the participants’ ToM neural networks would show increased activity in reaction to the ironic sentences, and that was precisely what they observed: as each key sentence was read, the network activity was greater when the statement was ironic. This shows that this network is directly involved in the processes of understanding irony, and, more generally, in the comprehension of language.

Next, the L2C2 researchers hope to expand their research on the ToM network in order to determine, for example, whether test participants would be able to perceive irony if this network were artificially inactivated.

Note:

[1] For example, Grèzes, Frith & Passingham (J. Neuroscience, 2004) showed a series of short (3.5 second) films in which actors came into a room and lifted boxes. Some of the actors were instructed to act as though the boxes were heavier (or lighter) than they actually were. Having thus set up deceptive situations, the experimenters asked the participants to determine if they had or had not been deceived by the actors in the films. The films containing feigned actions elicited increased activity in the rTPJ (right temporal parietal junction) compared with those containing unfeigned actions.

Journal Reference:

Nicola Spotorno, Eric Koun, Jérôme Prado, Jean-Baptiste Van Der Henst, Ira A. Noveck. Neural evidence that utterance-processing entails mentalizing: The case of irony. NeuroImage, 2012; 63 (1): 25. DOI: 10.1016/j.neuroimage.2012.06.046

It’s Even Less in Your Genes (The New York Review of Books)

MAY 26, 2011
Richard C. Lewontin

The Mirage of a Space Between Nature and Nurture
by Evelyn Fox Keller
Duke University Press, 107 pp., $64.95; $18.95 (paper)

In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.

That tendency, as Evelyn Fox Keller’s new book suggests, is one of the most powerful influences on our scientific understanding. As we change our intent, we also identify anew what is object and what is background. When I glance out the window as I write these lines I notice my neighbor’s car, its size, its shape, its color, and I note that it is parked in a snow bank. My interest then changes to the results of the recent storm and it is the snow that becomes my object of attention, with the car relegated to the background of shapes embedded in the snow. What is an object as opposed to background is a mental construct and requires the identification of clear boundaries. As one of my children’s favorite songs reminded them:

You gotta have skin.
All you really need is skin.
Skin’s the thing that if you’ve got it outside,
It helps keep your insides in.
Organisms have skin, but their total environments do not. It is by no means clear how to delineate the effective environment of an organism.

One of the complications is that the effective environment is defined by the life activities of the organism itself. “Fish gotta swim and birds gotta fly,” as we are reminded by yet another popular lyric. Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.

The vulgarization of Darwinism that sees the “struggle for existence” as nothing but the competition for some environmental resource in short supply ignores the large body of evidence about the actual complexity of the relationship between organisms and their resources. First, despite the standard models created by ecologists in which survivorship decreases with increasing population density, the survival of individuals in a population is often greatest not when their “competitors” are at their lowest density but at an intermediate one. That is because organisms are involved not only in the consumption of resources, but in their creation as well. For example, in fruit flies, which live on yeast, the worm-like immature stages of the fly tunnel into rotting fruit, creating more surface on which the yeast can grow, so that, up to a point, the more larvae, the greater the amount of food available. Fruit flies are not only consumers but also farmers.

Second, the presence in close proximity of individual organisms that are genetically different can increase the growth rate of a given type, presumably since they exude growth-promoting substances into the soil. If a rice plant of a particular type is planted so that it is surrounded by rice plants of a different type, it will give a higher yield than if surrounded by its own type. This phenomenon, known for more than a half-century, is the basis of a common practice of mixed-variety rice cultivation in China, and mixed-crop planting has become a method used by practitioners of organic agriculture.

Despite the evidence that organisms do not simply use resources present in the environment but, through their life activities, produce such resources and manufacture their environments, the distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials. As a coauthor of a widely used college textbook of genetics,(1) I have had to engage in a constant struggle with my coauthors over the course of thirty years in order to introduce students to the notion that the relative reproductive fitness of organisms with different genetic makeups may be sensitive to their frequency in the population.

But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.

It has been an agony of biology to find boundaries between parts of organisms that are appropriate for an understanding of particular questions. We murder to dissect. The realization of the complex functional interactions and feedbacks that occur between different metabolic pathways has been a slow and difficult process. We do not have simply an “endocrine system” and a “nervous system” and a “circulatory system,” but “neurosecretory” and “neurocirculatory” systems that become the objects of inquiry because of strong forces connecting them. We may indeed stir a flower without troubling a star, but we cannot stir up a hornet’s nest without troubling our hormones. One of the ironies of language is that we use the term “organic” to imply a complex functional feedback and interaction of parts characteristic of living “organisms.” But musical organs, from which the word was adopted, have none of the complex feedback interactions that organisms possess. Indeed the most complex musical organ has multiple keyboards, pedal arrays, and a huge array of stops precisely so that different notes with different timbres can be played simultaneously and independently.

Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities. Fox Keller is one of the most sophisticated and intelligent analysts of the social and psychological forces that operate in intellectual life and, in particular, of the relation of gender in our society both to the creation and acceptance of scientific ideas. The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science:

If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.(2)

Beginning with her consciousness of the role of gender in influencing the construction of scientific ideas, she has, over the last twenty-five years, considered how language, models, and metaphors have had a determinative role in the construction of scientific explanation in biology.

A major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual.

It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
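
Stated as a variance partition, using the article’s illustrative 76/24 split and standard notation that is not in the review itself:

```latex
% The 76/24 example as a partition of population variance in height:
%   V_P = total (phenotypic) variance, V_G = variance due to genetic differences,
%   V_E = variance due to differences in nutrition.
\[
  V_P = V_G + V_E, \qquad \frac{V_G}{V_P} = 0.76, \qquad \frac{V_E}{V_P} = 0.24 .
\]
% What these numbers license: if nutrition were equalized, V_E would vanish and the
% variance of height in the next generation would drop to 0.76 V_P. They say nothing
% about what fraction of any one woman's height is "caused by her genes."
```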

In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes. She gives as a simple example the sound of two different drummers playing at a distance from us. If each drummer plays each drum for us, we should be able to tell the effect of different drummers as opposed to differences between drums. But she admits that is only true if the drummers themselves do not change their ways of playing when they change drums.

Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1).

There are differences between the varieties in their yield from location to location and there are differences between locations from variety to variety. So both variety and location matter. But there is no average variation between locations when averaged over varieties, or between varieties when averaged over locations. Knowing the variation in yield associated with location and variety separately does not by itself tell us which factor is the more important source of variation; nor do the facts of location and variety exhaust the description of that variation.

There is a third source of variation called the “interaction,” the variation that cannot be accounted for simply by the separate average effects of location and variety. In this example, no difference appears between the averages of the different varieties or the averages of the different locations, suggesting that neither location nor variety matters to yield. Yet the yields of corn differ when particular combinations of variety and location are observed. These effects of particular combinations of factors, not accounted for by the average effects of each factor separately, are thrown into an unanalyzed category called “interaction,” with no concrete physical model made explicit.
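
Since Table 1 is not reproduced here, the following sketch uses invented yields with the structure the text describes: identical averages for varieties and for locations, so that all of the variation shows up as interaction.

```python
# Hypothetical yields (bushels) with the structure the text describes. The numbers
# are invented, chosen so that variety averages and location averages are identical
# even though particular variety-location combinations differ: pure "interaction."
import numpy as np

#                  location 1   location 2
yields = np.array([[10.0,         20.0],     # variety A
                   [20.0,         10.0]])    # variety B

variety_means  = yields.mean(axis=1)   # [15. 15.] -> no average difference between varieties
location_means = yields.mean(axis=0)   # [15. 15.] -> no average difference between locations

# Interaction: what remains after removing the grand mean and both main effects.
grand = yields.mean()
interaction = yields - variety_means[:, None] - location_means[None, :] + grand
print(variety_means, location_means)
print(interaction)   # [[-5.  5.] [ 5. -5.]] -- all the variation is interaction
```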

In real life there will be some difference between the varieties when averaged over locations and some variation between locations when averaged over varieties; but there will also be some interaction variation accounting for the failure of the separately identified main effects to add up to the total variation. In an extreme case, as for example our jungle drummers with a common consciousness of what drums should sound like, it may turn out to be all interaction.

The Mirage of a Space Between Nature and Nurture appears in an era when biological—and specifically, genetic—causation is taken as the preferred explanation for all human physical differences. Although the early and mid-twentieth century was a period of immense popularity of genetic explanations for class and race differences in mental ability and temperament, especially among social scientists, such theories have now virtually disappeared from public view, largely as a result of a considerable effort of biologists to explain the errors of those claims.

The genes for IQ have never been found. Ironically, at the same time that genetics has ceased to be a popular explanation for human intellectual and temperamental differences, genetic theories for the causation of virtually every physical disorder have become the mode. “DNA” has replaced “IQ” as the abbreviation of social import. The announcement in February 2001 that two groups of investigators had sequenced the entire human genome was taken as the beginning of a new era in medicine, an era in which all diseases would be treated and cured by the replacement of faulty DNA. William Haseltine, the chairman of the board of the private company Human Genome Sciences, which participated in the genome project, assured us that “death is a series of preventable diseases.” Immortality, it appeared, was around the corner. For nearly ten years announcements of yet more genetic differences between diseased and healthy individuals were a regular occurrence in the pages of The New York Times and in leading general scientific publications like Science and Nature.

Then, on April 15, 2009, there appeared in The New York Times an article by the influential science reporter and fan of DNA research Nicholas Wade, under the headline “Study of Genes and Diseases at an Impasse.” In the same week the journal Science reported that DNA studies of disease causation had a “relatively low impact.” Both of these articles were instigated by several articles in The New England Journal of Medicine, which had come to the conclusion that the search for genes underlying common causes of mortality had so far yielded virtually nothing useful. The failure to find such genes continues and it seems likely that the search for the genes causing most common diseases will go the way of the search for the genes for IQ.

A major problem in understanding what geneticists have found out about the relation between genes and manifest characteristics of organisms is an overly flexible use of language that creates ambiguities of meaning. In particular, their use of the terms “heritable” and “heritability” is so confusing that an attempt at its clarification occupies the last two chapters of The Mirage of a Space Between Nature and Nurture. When a biological characteristic is said to be “heritable,” it means that it is capable of being transmitted from parents to offspring, just as money may be inherited, although neither is inevitable. In contrast, “heritability” is a statistical concept, the proportion of variation of a characteristic in a population that is attributable to genetic variation among individuals. The implication of “heritability” is that some proportion of the next generation will possess it.

The move from “heritable” to “heritability” is a switch from a qualitative property at the level of an individual to a statistical characterization of a population. Of course, to have a nonzero heritability in a population, a trait must be heritable at the individual level. But it is important to note that even a trait that is perfectly heritable at the individual level might have essentially zero heritability at the population level. If I possess a unique genetic variant that enables me with no effort at all to perform a task that many other people have learned to do only after great effort, then that ability is heritable in me and may possibly be passed on to my children, but it may also be of zero heritability in the population.
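
In the standard quantitative-genetics notation (which the review does not use), the distinction can be put compactly; the point is that heritability is a ratio of population variances, so a perfectly transmitted but essentially unique variant contributes almost nothing to it.

```latex
% Heritability as a ratio of population variances (standard notation, not the book's):
%   V_P = V_G + V_E  (phenotypic = genetic + environmental variance).
\[
  h^2 \;=\; \frac{V_G}{V_P} \;=\; \frac{V_G}{V_G + V_E}.
\]
% A variant carried by essentially one person is transmitted to each of that person's
% children with probability 1/2 (it is heritable), yet it adds almost nothing to V_G
% in the population, so the trait's heritability there can still be near zero.
```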

One of the problems of exploring an intellectual discipline from the outside is that the importance of certain basic methodological considerations is not always apparent to the observer, considerations that mold the entire intellectual structure that characterizes the field. So, in her first chapter, “Nature and Nurture as Alternatives,” Fox Keller writes that “my concern is with the tendency to think of nature and nurture as separable and hence as comparable, as forces to which relative strength can be assigned.” That concern is entirely appropriate for an external critic, and especially one who, like Fox Keller, comes from theoretical physics rather than experimental biology. Experimental geneticists, however, find environmental effects a serious distraction from the study of genetic and molecular mechanisms that are at the center of their interest, so they do their best to work with cases in which environmental effects are at a minimum or in which those effects can be manipulated at will. If the machine model of organisms that underlies our entire approach to the study of biology is to work for us, we must restrict our objects of study to those in which we can observe and manipulate all the gears and levers.

For much of the history of experimental genetics the chief organism of study was the fruit fly, Drosophila melanogaster, in which very large numbers of different gene mutations with visible effects on the form and behavior of the flies had been discovered. The catalog of these mutations provided, in addition to genetic information, a description of the way in which mutant flies differed from normal (“wild type”) and assigned each mutation a “Rank” between 1 and 4. Rank 1 mutations were the most reliable for genetic study because every individual with the mutant genetic type could be easily and reliably recognized by the observer, whereas some proportion of individuals carrying mutations of other ranks could be indistinguishable from normal, depending on the environmental conditions in which they developed. Geneticists, if they could, avoided depending on poorer-rank mutations for their experiments. Only about 20 percent of known mutations were of Rank 1.

With the recent shift from the study of classical genes in controlled breeding experiments to the sequencing of DNA as the standard method of genetic study, the situation has gotten much worse. On the one hand, about 99 percent of the DNA in a cell is of completely unknown functional significance and any two unrelated individuals will differ from each other at large numbers of DNA positions. On the other hand, the attempt to assign the causes of particular diseases and metabolic malfunctions in humans to specific mutations has been a failure, with the exception of a few classical cases like sickle-cell anemia. The study of genes for specific diseases has indeed been of limited value. The reason for that limited value is in the very nature of genetics as a way of studying organisms.

Genetics, from its very beginning, has been a “subtractive” science. That is, it is based on the analysis of the difference between natural or “wild-type” organisms and those with some genetic defect that may interfere in some observable way with regular function. But to carry out such comparison it is necessary that the organisms being studied are, to the extent possible, identical in all other respects, and that the comparison is carried out in an environment that does not, itself, generate atypical responses yet allows the possible effect of the genetic perturbation to be observed. We must face the possibility that such a subtractive approach will never be able to reveal the way in which nature and nurture interact in normal circumstances.

An alternative to the standard subtractive method of genetic perturbations would be a synthetic approach in which living systems would be constructed ab initio from their molecular elements. It is now clear that most of the DNA in an organism is not contained in genes in the usual sense. That is, 98–99 percent of the DNA is not a code for a sequence of amino acids that will be assembled into long chains that will fold up to become the proteins that are essential to the formation of organisms; yet that nongenic DNA is transmitted faithfully from generation to generation just like the genic DNA.

It appears that the sequence of this nongenic DNA, which used to be called “junk-DNA,” is concerned with regulating how often, when, and in which cells the DNA of genes is read in order to produce the long strings of amino acids that will be folded into proteins and which of the many alternative possible foldings will occur. As the understanding and possibility of control of the synthesis of the bits and pieces of living cells become more complete, the temptation to create living systems from elementary bits and pieces will become greater and greater. Molecular biologists, already intoxicated with their ability to manipulate life at its molecular roots, are driven by the ambition to create it. The enterprise of “Synthetic Biology” is already in existence.

In May 2010 the consortium originally created by J. Craig Venter to sequence the human genome gave birth to a new organization, Synthetic Genomics, which announced that it had created an organism by implanting a synthetic genome in a bacterial cell whose own original genome had been removed. The cell then proceeded to carry out the functions of a living organism, including reproduction. One may argue that the hardest work, putting together all the rest of the cell from bits and pieces, is still to be done before it can be said that life has been manufactured, but even Victor Frankenstein started with a dead body. We all know what the consequences of that may be.

1. Anthony J.F. Griffiths, Susan R. Wessler, Sean B. Carroll, and Richard C. Lewontin, Introduction to Genetic Analysis, ninth edition (W.H. Freeman, 2008).

2. The Scientist, Vol. 5, No. 1 (January 7, 1991).